Sunday, September 6, 2009
Interview with Barry Truax
from the interview,
Well, in my undergraduate years, I was officially studying physics and math—and actually using a computer at one point—but having this passion for music and the arts. I had been told, like a good, white, middle-class boy, “Keep your piano playing, your interest in music, as a nice hobby.” And it’s very good advice, you know. It kind of worked for my father, who was a wonderful marimba player and percussionist. But there I was, working in the physics department and then going over to the practice piano house and feeling these urges, not dissimilar to an adolescent’s sexual emergence, for composition. There was this force emerging that was mysterious and dark and dangerous and wild, and what could I do about it?
So I finished the degree in physics and math at Queen's University in Kingston and got accepted to UBC in music. I did some make-up courses in some areas and some composition and so on and so forth. And then I walked into the electronic music studio at UBC and never came out, essentially.
At the Institute of Sonology, Gottfried Michael Koenig and Otto Laske and a host of really excellent teachers were formulating the digital future. That may sound overly dramatic, but they had this wonderful set of analog studios, with a lot of custom-made equipment and two- and four-channel machines for recording it and banks of voltage-control equipment that defied description. It was very, very complex. A long way from the Buchla and Moog synthesizers I'd been weaned on at UBC. Stan Tempelaars was teaching modern psychoacoustics that he had gotten from Reinier Plomp, which I now realize was pretty cutting edge at the time. Koenig was teaching composition theory but also programming and macro assembly language for the PDP-15, almost as fast as he was learning it himself. And suddenly, for the first time, I found myself with the mini-computer; that's what they were called, even though they took up one huge wall of a room. But they were single-user, not mainframe computers like Max Mathews had. Although the only means of interaction was the teletype terminal, you could have real-time synthesis and interact with it as a composer rather than writing programmes. And I developed this thing called the POD System for interactive composition with synthesis, which was a top-down type of approach.
Also at this time I made two trips to the EMS studio in Stockholm, where I got to meet Knut Wiggen, the controversial director of that studio. Knut Wiggen is this very modest, introverted Norwegian visionary who's one of the marginal figures now, unfortunately, in the computer music field, largely because when he got kicked out of EMS he went back to Norway and didn't promulgate his work much. So even his pieces are not that well known. I have one of them that I managed to get him to send me, which I still treasure. It was Knut Wiggen who gave me the best reason ever to use a computer, particularly since they mainly ruin your life! He said, "You need the computer. Why? To control complexity."
The idea that a computer could control complex systems, such as composition, and open up things that you didn't foresee and specify and notate to the nth detail of the microsecond, that was very, very inspiring. You guide the general rules, as it were, or general parameters at different levels, but the details are unforeseen, in the classic stochastic sense. And that's always been very attractive to me. Computers are the best way to work with systems in the microsound or time/frequency domain—basically the domain of less than fifty milliseconds, where the conventional rules do not apply. It's a different way of thinking about sound. It's a different way of thinking about structuring it. You're definitely not writing a score for granular synthesis. A thousand grains per second? "Would you like to transcribe those? I'm ready whenever you are!" It's reductio ad absurdum, right?
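To make the "control complexity" idea concrete, here is a minimal sketch of granular synthesis in the stochastic spirit Truax describes—not his actual POD or GSX code, just an illustration. The composer sets only high-level statistical parameters (grain density, frequency range, grain-duration range); the program draws each individual grain's onset, pitch, and length at random, so a thousand events per second need never be notated. All names and parameter values below are my own assumptions for the example.

```python
import math
import random

def granular_stream(duration_s=0.25, sr=8000, grains_per_s=1000,
                    freq_range=(200.0, 2000.0),
                    grain_ms_range=(10.0, 40.0), seed=1):
    """Mix a stream of stochastically parameterized grains into one buffer.

    Only the statistics are specified (density, pitch range, grain length
    range); every concrete event is drawn at random, so the composer
    controls tendencies rather than individual notes.
    """
    rng = random.Random(seed)
    n = int(duration_s * sr)
    out = [0.0] * n
    for _ in range(int(duration_s * grains_per_s)):
        onset = rng.randrange(n)                    # random start sample
        freq = rng.uniform(*freq_range)             # random pitch
        g_len = int(rng.uniform(*grain_ms_range) * sr / 1000.0)
        for i in range(g_len):
            t = onset + i
            if t >= n:
                break
            # Hann window keeps each tiny grain click-free
            env = 0.5 * (1.0 - math.cos(2.0 * math.pi * i / g_len))
            out[t] += env * math.sin(2.0 * math.pi * freq * i / sr) / 50.0
    return out

buf = granular_stream()  # 250 grains in a quarter second, unnotatable by hand
```

Note the design point: changing `grains_per_s` or widening `freq_range` reshapes the whole texture at once, which is exactly the top-down control of complexity that makes the computer indispensable here.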
Posted by Chris Mansel at 8:29 AM