When we began the research on minimal improvisation, we started in the Phyla duo constellation, Jennie Zimmermann and me: doing very little, just repeating one sound and varying it over time.
We did not create very elaborate musical structures, but subtle variations of sounds in relation to each other. The sounds we made could be imitative or contrasting; our activities could be synchronized, like a call-and-response, or more or less unrelated.
When we began working with a group of musicians and dancers, the basic constellation of two people responding to each other's actions remained an important topic:
At the time, I knew that these interactions demonstrate principles that are relevant to all time-based arts: repetition gives a sense of continuity, variation keeps things alive. Within the minimal improvisation research, we put a lot of emphasis on the process, but at the same time, individual intentionality became very apparent. Within improvisations, it felt like we were witnessing an alchemical process: sounds or movements that were initially unrelated started to become ‘music’ or ‘dance’, started to ‘make sense’.
So I was thrilled when I came across ‘Comparing Notes’ by Adam Ockelford – he explains music using the very same categories that we concentrated on in our research (quoted from ‘Comparing Notes’, page 61):
‘…imitation provides a sense of agency, which leads to the perception of musical structure, which enables music to make sense.’
Ockelford is not particularly interested in minimalism or improvisation; he comes from a classical background and mainly refers to written scores by dead male composers. Still, I feel that the minimal improvisation approach is almost like a practical demonstration of his ‘Zygonic Theory’: a meta-music stripped of virtuosity and of stylistic conventions such as harmony, voice-leading, even tuning, concentrating only on the very basic aspects of what constitutes music at all.
I had already done a lot of experiments emulating minimal improvisation with the Ableton Live music software, focusing on moving seamlessly from the quality of flowing, non-quantized time to a quantized rhythmic grid. But now, I started revisiting the special constellation not of two performers, but of two instances within a process, reacting to each other.
Looking at this constellation again, after months of minimal improvisation research and after reading about Zygonic theory, actually blew my mind. Because, from a technical point of view, the two main components involved here are two very well-known processes in electronic music:
feedback & delay
Being a guitarist, my first encounter with feedback was listening to Jimi Hendrix. I have also played with the possibilities of the no-input mixer – but I saw it as a special technique. I did not grasp how fundamental the principle of feedback is to music.
In the context of loud, amplified music, I had associated feedback with the notion of going to extremes: very fast, feedback can escalate, become louder and louder and wreck ears and equipment. To act as an agent of musical structure, feedback needs a mechanism to restrain this tendency of escalation: delay. Inserting a delay prevents the signals in the feedback loop from following each other faster and faster, getting louder each time around the cycle. In order to do this, the delay has to function not like an echo, which would add a – delayed – signal to the original. Instead, the delayed signal must be the only signal that is heard, while the original incoming sound must stay muted.
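The echo-versus-replace distinction can be sketched as a toy numeric model – plain Ruby (Sonic Pi uses Ruby syntax), with method names of my own invention, not any real audio API. One number stands in for the amplitude of the signal circulating in the loop:

```ruby
# Toy model: one pass of a signal around the feedback loop.
# amp is the circulating amplitude, gain the loop gain.

def echo_cycle(amp, gain: 1.0)
  # An echo-style delay ADDS the delayed copy to the direct signal:
  # both copies stay in the loop, so energy piles up even at unity gain.
  direct  = amp * gain
  delayed = amp * gain
  direct + delayed
end

def replace_cycle(amp, gain: 1.0)
  # A replace-style delay mutes the direct signal; only the delayed
  # copy travels on, so at unity gain the level stays bounded.
  amp * gain
end

echo    = 1.0
replace = 1.0
8.times do
  echo    = echo_cycle(echo)
  replace = replace_cycle(replace)
end

puts "echo-style after 8 cycles:    #{echo}"     # escalates
puts "replace-style after 8 cycles: #{replace}"  # stays put
```

In this crude model the escalation is purely arithmetic, of course – but it shows why the ‘muted original’ matters: the moment both the direct and the delayed signal remain in the loop, every cycle doubles the energy.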
Here is an example of such a setup: two channels, each containing a listening device, a sound-generating device and a delay, are connected within a feedback loop:
Once you get past the awkward 80s charm of the Junatik synth sounds, the structural resemblance to the two duo improvisations in sound and dance should be obvious: variations of pitches are sent back and forth, leading to a gradual, sometimes abrupt, development. There is a rise in dynamics, then the ‘dialogue’ becomes calmer, sparser and finally ends.
The function of ‘listening’ on each of the two channels is handled by VoiceKeys II, a free device for Reaktor by Bertrand Antolin. It analyzes the incoming audio and converts it to MIDI, which is in turn processed by the Junatik synth to make the next sound. It is a marvelous device: singing into Reaktor and hearing my voice accompanied by a synth sound is so much fun! It was created nine years ago but still gives me the feeling of living in the future.
Marvelous as it is, it does not function entirely correctly. The sound that is created as a response does not completely match the incoming sound. In the case of my example, that is a good thing, because it creates the dynamic of repetition and variation that leads to a gradual development of sonic material. In this case, the artefacts of incorrectness are the very spark of machine ‘creativity’.
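As a toy model of this in plain Ruby – the names and the ±1 semitone detection error are my own assumptions, not VoiceKeys’ actual behavior – an imperfect ‘listener’ feeds its slightly wrong answer back as the next input, and a melodic line drifts into being from the errors alone:

```ruby
# Toy model: the 'listener' mis-detects the incoming MIDI pitch by
# up to one semitone; its response becomes the next input.

def imperfect_listen(pitch, rng)
  pitch + rng.rand(-1..1)  # small detection error, in semitones
end

rng    = Random.new(1)     # seeded, so the run is reproducible
pitch  = 60                # start on middle C (MIDI note 60)
phrase = [pitch]
16.times { phrase << (pitch = imperfect_listen(pitch, rng)) }

puts phrase.inspect        # a gradually drifting line, step by step
```

Nothing here ‘wants’ to make music; the variation is nothing but accumulated inaccuracy – which is precisely the point.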
This becomes evident when the feedback loop is not set up on the level of audio signals. Instead of converting audio to MIDI, it is also possible to set up a loop between two MIDI channels. The lack of mistakes leads to an ever-repeating loop of the same pitch. To avoid stasis and get the impression of interaction in dialogue, randomness has to be inserted intentionally.
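The contrast can be sketched in plain Ruby (with hypothetical method names, not a real MIDI API): an exact MIDI round-trip just repeats the same pitch forever, while deliberately inserted randomness brings the ‘dialogue’ back:

```ruby
# Toy model of the MIDI-level loop: the conversion is exact, so
# without intervention the same note circulates unchanged;
# randomness has to be injected on purpose.

def exact_response(pitch)
  pitch                                     # MIDI round-trips without artefacts
end

def randomized_response(pitch, rng)
  pitch + [-2, -1, 0, 1, 2].sample(random: rng)  # intentional variation
end

rng    = Random.new(7)
static = [60]
lively = [60]
8.times do
  static << exact_response(static.last)
  lively << randomized_response(lively.last, rng)
end

puts static.inspect   # the same pitch, eight times over: stasis
puts lively.inspect   # varies only because randomness was inserted
```

The ‘creativity’ that the audio loop got for free, as a side effect of imperfect listening, has to be designed in explicitly on the symbolic level.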
MIDI is a standardized system for machine communication between musical devices. Working with MIDI means working on the level of language, of symbols that stand in for sound, not the actual sound. The constellation I described above, utilizing feedback, delay and also randomness, is a formal one: depending on which sounds are actually used, it may result in totally different music. Still, it remains the very same constellation.
To me, at the moment, the best environment to approach music on such a meta-level is the coding environment Sonic Pi. Stay tuned for some examples of minimal improvisation code.