This week I spent a day improvising and recording audio with Doktorb Robotnik (Adrian Lucas) and was reminded of one of the key aims driving my research. I am developing software devices that allow the user to manipulate audio, video and other data in real time. By creating these works I am attempting to give the user the same feeling of control I experience when performing audio live: creating order out of chaos and letting it fall apart again.
Adrian and I have been making improvised sound art / noise / music together for about 9 years. During this time we have developed various methods for collaborative performance, and our production methods have evolved considerably.
Initially we would assemble, manipulate and sequence sound objects on a computer to create finished ‘tracks’, often to accompany short video works. While our process was improvisational in many ways, the combination of unlimited levels of ‘undo’ afforded by the computer software and the goal of producing a piece of a set duration meant that hours of work went into seconds of sound.
A significant shift occurred when we agreed to perform live at a music festival at uni: Rusfest 98. With no idea of exactly what we were doing, we assembled a very basic setup consisting of two multi-effect guitar pedals and two basic synthesisers. Each pedal was set to a two-second delay, which let us play various sounds on the synths and have them repeat endlessly. Rather than spending hours obsessing over a few seconds of meticulously cut-up audio, we were improvising in real time, in front of an audience. It was exhilarating. With no interest in melody, and with our pedals keeping time, we were free to play with sound. We had constructed a kind of machine for the generation and manipulation of sound in real time, and given ourselves a set of variables with which to control it. Our individual noise-making machines have diverged technologically since then: one is now a series of interconnected guitar effects pedals which feed back on themselves, the other a computer running live audio sequencing / processing software. But the processes at work in our first performance are still employed.
We build ‘machines’ with a fixed number of variables and manipulate them to generate audio in real time. This is exactly what I am doing with images and data in both my Quicktime and Quartz Composer Vidgets.
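To make that idea concrete, here is a minimal sketch, in Python rather than anything we actually use on stage, of such a machine: a feedback delay line in the spirit of our two-second pedal loops, with a small fixed set of variables (delay time, feedback, mix) that can be manipulated while audio flows through it. The class name and parameter values are hypothetical illustrations, not our performance setup.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second, assuming mono audio at CD rate

class DelayMachine:
    """A toy 'machine': a feedback delay line with a fixed set of variables."""

    def __init__(self, delay_seconds=2.0, feedback=0.95, mix=0.7):
        # The fixed set of variables a performer manipulates in real time.
        self.feedback = feedback  # near 1.0, repeats decay very slowly
        self.mix = mix            # wet/dry balance of delayed vs. direct signal
        self._buffer = np.zeros(int(delay_seconds * SAMPLE_RATE))
        self._pos = 0             # current position in the circular buffer

    def process(self, block):
        """Run one block of mono samples through the delay line."""
        out = np.empty_like(block)
        for i, x in enumerate(block):
            delayed = self._buffer[self._pos]
            # Write the input plus scaled feedback back into the delay line,
            # so each pass through the loop repeats what came before.
            self._buffer[self._pos] = x + delayed * self.feedback
            out[i] = (1.0 - self.mix) * x + self.mix * delayed
            self._pos = (self._pos + 1) % len(self._buffer)
        return out

# Feed a short burst into the machine, then silence: with feedback at 0.95
# the burst re-emerges every two seconds, fading only gradually.
machine = DelayMachine(delay_seconds=2.0, feedback=0.95)
burst = np.random.uniform(-1.0, 1.0, SAMPLE_RATE // 10)  # 0.1 s of noise
tail = np.zeros(SAMPLE_RATE * 5)                          # 5 s of silence
output = machine.process(np.concatenate([burst, tail]))
```

Pushing the feedback variable toward 1.0 lets a sound repeat almost endlessly, as our pedals did; pulling it back down lets the order fall apart again.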
Technorati Tags: vidget