A quick browse through the early posts of this blog will show that at one stage I was focused on researching AV performance approaches.
I stopped in part because I began to feel that my goal of a performance setup, where sound and image could be synthesised simultaneously in real time, was being limited by rising CPU/GPU requirements and a lack of money to meet them. It also seemed a little ludicrous to chase the high end when my musical inspiration lies more in small-scale abstraction, where individual perception joins the dots.
Today I’m happy to say the dream is close, and I’m back experimenting with AV. Here is an early demonstration.
I’m using Lumen, an exceptionally capable real-time visual synthesiser that recently gained MIDI control. I designed the Max For Live controller so that I can manipulate it from the Push, record automation and, most importantly, save parameters with PPTC, which can store settings for any device on a per-channel basis. So I can improvise and save the more interesting combinations.
I’m also using the Encoder Audio Max For Live sequencers quite extensively here, particularly the Source and Turing Machines going to the synths, and Polyrandom to Microtonic for percussion. The Encoder Audio devices are great to control from the Push 2 (with some fiddling) and send mapping data to other controls, which is where the AV connection will ultimately arise: rather than doing the usual (and, to me, boring) envelope-to-opacity trick, I’m more keen on sequenced modulation mapping from different elements to different elements.
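To make the mapping idea concrete, here is a rough sketch in Python of what "different elements to different elements" means: a small routing matrix where one sequenced source can feed both an audio and a visual destination, so the two stay structurally linked. All the names and scale factors here are illustrative assumptions of mine, not anything from Lumen, Encoder Audio or the actual Max For Live patch.

```python
# Hypothetical sketch of sequenced modulation mapping: sequencer steps
# are routed through a small matrix instead of hard-wiring one envelope
# to one visual parameter. Names and values are illustrative only.

sources = {
    "seq_pitch": [60, 62, 64, 67],   # a melodic sequence
    "seq_gate":  [1, 0, 1, 1],       # a rhythmic gate pattern
}

# Each route: (source, destination, scale). The same source can drive
# sound and image, which keeps the audio and visuals tied together.
routes = [
    ("seq_pitch", "osc.frequency",  8.0),
    ("seq_pitch", "visual.hue",     0.01),
    ("seq_gate",  "visual.opacity", 1.0),
]

def step_outputs(step):
    """Compute every destination value for one sequencer step."""
    return {dest: sources[src][step % len(sources[src])] * scale
            for src, dest, scale in routes}

print(step_outputs(0))
```

The point of the sketch is only that the routing is data, so swapping which element modulates which is a one-line change rather than a rebuild of the patch.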
All of this is early days. It seems fine to play with so long as I smooth the modulations before they hit Lumen. So far, recording with Syphon has led to a crash every time, so it’s not stable yet. There is a gig in a month or so to work towards, and maybe some AV recordings.
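The smoothing step is essentially a slew limiter: stepped sequencer values get eased toward their targets so the visuals don't jump. A minimal sketch of that idea, as a one-pole lowpass in Python (the coefficient and the 0-127 value range are my assumptions for illustration; the real smoothing happens inside the Max For Live patch):

```python
# One-pole lowpass ("slew limiter") of the kind used to smooth stepped
# modulation values before they reach a visual parameter.
# Coefficient and value range are illustrative, not Lumen-specific.

def smooth(values, coeff=0.2):
    """Exponentially smooth a sequence of control values (0-127 range)."""
    out = []
    state = values[0]
    for v in values:
        state += coeff * (v - state)  # move a fraction toward the target
        out.append(state)
    return out

# A hard jump from 0 to 127 becomes a gradual ramp instead of a snap:
steps = [0] * 4 + [127] * 8
print([round(x, 1) for x in smooth(steps)])
```

A higher `coeff` tracks the sequencer more tightly; a lower one glides more, which is the trade-off between responsiveness and visual jitter.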