A few years ago (2013/2014), after moving to Berlin, Jennie Zimmermann and I started to explore the improvisational potential of repetition and restraint with regard to the amount of material used. Our approach was interdisciplinary: together with a group of dancers and musicians, we researched music and movement, each on its own and in combination.
This research would eventually result in the concept of Minimal Improvisation; click on the link for more background.
One of my major inspirations for working with gradual process in collective improvisation had been electronic music, with its application of procedures and impersonal mechanisms. So, naturally, I continued in that direction when the group's activities came to a halt due to shifting artistic priorities.
It took me some time to figure out what an electronic version of the collective minimal improvisation process could be. Using Ableton Live as my main DAW, I first checked out many Max for Live devices like Adam Florin's Patter, Coldcut's MidiVolve and encoderaudio's Turing Machine, and I also worked with OSCiLLOT by Max for Cats. I got some very interesting results by combining these and other devices in Ableton Live and I learned a lot, but ultimately I felt caged within the confines of the DAW.
So the next step was to move beyond the GUI and check out patching environments. I started building my own devices in Max for Live, but I didn't like its integration into Ableton Live and started patching in Pure Data, a free alternative to Max. There are fantastic online tutorials on making generative music with Pure Data; if you are curious, have a look at The Algorithmic Composer or Johannes Kreidler's Programming Electronic Music in Pd.
It was obvious that patching environments like Max and Pd offered a lot more flexibility than a DAW. I needed to settle on one option and work out how to realize my musical ideas using that one tool.
But that didn't happen; instead, I kept looking for other options. I toyed around with IRCAM's OpenMusic, tried out patching in Native Instruments' Reaktor and started experimenting with coding music in SuperCollider.
Unfinished patch with Reaktor Blocks
Reading Jeremy Leach's papers on nature, music, and algorithmic composition was very inspiring; see also his online algorithmic composer. Robert Rowe's Machine Musicianship also had an impact.
David Cope's work on emulating the styles of several classical composers was interesting to me not because of his goals, but because of his thoughts on musical structure; here's a super condensed overview by Randolph Johnson. Being interested in rhythm and electronic music, reading 'The Geometry of Musical Rhythm' by Godfried Toussaint was unavoidable, and I highly recommend this fantastic book.
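A central idea in Toussaint's book is the Euclidean rhythm: distribute a number of pulses as evenly as possible over a number of steps, and you get many of the world's most common rhythmic patterns. As a flavour of how simple this is, here is my own minimal sketch in plain Ruby (an accumulator-style variant, not code from the book):

```ruby
# Euclidean rhythm sketch: spread `pulses` hits as evenly as
# possible over `steps` slots (1 = hit, 0 = rest).
def euclidean(pulses, steps)
  pattern = []
  bucket = 0
  steps.times do
    bucket += pulses
    if bucket >= steps
      bucket -= steps
      pattern << 1   # the accumulator overflowed: place a hit
    else
      pattern << 0   # not yet: place a rest
    end
  end
  pattern.rotate(pattern.index(1) || 0)  # start the cycle on a hit
end

p euclidean(3, 8)  # => [1, 0, 0, 1, 0, 1, 0, 0]  (the Cuban tresillo)
p euclidean(5, 8)  # => [1, 0, 1, 1, 0, 1, 1, 0]
```

Three pulses over eight steps already yields the tresillo; five over eight gives a cinquillo rotation, and so on, which is exactly the kind of "a lot from very little" that fits a minimal improvisation mindset.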
But reading Alex Bellos's popular books on math, 'The Grapes of Math' and 'Here's Looking at Euclid', probably had the biggest impact on me, encouraging me to look at the mathematical aspects of generating and processing music not only as a means to achieve the intended music, but also as absolutely fascinating topics in themselves. The last book I have to mention is 'Comparing Notes' by Adam Ockelford; more on that one later, in another post.
Apart from software (for OS X) and books, I got involved with new machines: the Axoloti combines a patching environment comparable to Pure Data with a standalone microcontroller board. I also started using Apple's iPad and some of the many, many music apps that are available, because of my teaching work for app2music.
Probably the two main dimensions in my search were flexibility and simplicity. I was looking for an instrument to realize my ideas that was more flexible than Ableton Live when it comes to generative music, but still allowed an easy workflow. Software like SuperCollider gives its users maximum flexibility when it comes to shaping sounds, but at this point my main interest is generating evolving musical structures, so I do not need all those options. It takes time to bypass them and find the functionalities that are important to me.
The Sonic Pi software was originally developed by Sam Aaron as a live-coding synth for children, so it is designed to be simple and accessible. I had already looked into it as a warm-up exercise before getting into SuperCollider. Since the update to version 3 in 2017, Sonic Pi aims to be the live-coding synth for everyone, including features like MIDI, thus morphing from children's learning software into a musical instrument. To me, this version offers the perfect combination of flexibility and simplicity. Through MIDI, I can connect it to other music software or hardware and tailor a music-generating system to my needs. For sure I'm connecting it with the levTools, patched by my colleague Marten Seedorf on the basis of Pure Data.
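To give a flavour of that combination of simplicity and MIDI, here is a tiny illustrative sketch (my own example, not part of my actual setup). This is Sonic Pi code (version 3 or later), not plain Ruby: `live_loop`, `spread`, `tick`, `play`, `midi` and `sleep` are Sonic Pi built-ins, and `spread` generates a Euclidean rhythm pattern.

```ruby
# Sonic Pi (v3+) sketch: one restrained loop,
# a single note, a fixed Euclidean pattern.
live_loop :pulse do
  if (spread 3, 8).tick       # true on 3 of every 8 steps
    play :e3, release: 0.1    # sound it with the built-in synth
    midi :e3, sustain: 0.1    # and send the same note to connected MIDI gear
  end
  sleep 0.25                  # a sixteenth note at the default 60 bpm
end
```

The `midi` line is what makes version 3 interesting to me: the same few lines of code can drive an external synth, a Pure Data patch or hardware like the Axoloti.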
So how does my application of a collective improvisational process to music software actually work? I´m going to address this in the next entry of this blog.
Meanwhile, have a look at my Axoloti making all the sounds that Sonic Pi tells it to make. Of course the video is pointless here, but isn't it cute? Well, it probably looks even better when you know that this device cannot be bought off the shelf: the enclosure was hand-made for me by my brother. Look here for a thread full of amazing self-made Axoloti versions.
To be more specific: the Axoloti runs a patch based on the Buchla 259 Programmable Complex Waveform Generator. This patch was made by maceq687; many thanks for sharing. Also, that patch gets some help from a Reaktor patch by me…