Part of a 2010 live performance at the Beetle Bar in Brisbane.
This is my usual performance setup.
I regularly use musical controllers to sequence and deploy synchronous audio and video in live performance.
As a sound artist I enjoy the challenge of providing a more holistic audiovisual experience for the audience than I could give with just my trusty laptop. I invite the audience to focus away from my unfashionable self and concentrate on the realtime mixing of sound with image.
My research examines audiovisual performance as an approach highly relevant to current cultural and technological concerns. It aims to outline the varied approaches audiovisual composers take in order to establish a strong presence in real-time live performance.
The end result will be a typology of the performing audiovisualist backed by a number of studies that demonstrate the approaches observed.
In recent times AV has gamely struggled away from the outdated notions that seek to categorise it as just a pretentious form of VJing. So what distinguishes AV performance from other mixed media arts?
Ian Andrews, media artist and theorist, published one of the first signs of an AV phenomenology that was not cinematically focused (Scan Journal, September 2009).
Material connection based on signal can be seen in experiments by Woody Vasulka from the mid 1970s.
The notion of transduction outlined by Bill Viola is integral to understanding a popular aspect of current AV practice.
Michel Chion defines “synchresis” as “the spontaneous and irresistible weld produced between a particular auditory phenomenon and a visual phenomenon when they occur at the same time” (Audio-Vision, 1994, p. 63). It is a neologism formed from synchronism and synthesis, outlining an intentional, synthetic application of synchronism.
Mitchell Whitelaw describes it as a “Cross modal binding [that] binds percepts into whole… that map on to ecologically plausible events” no matter how abstract.
With reference to the work of Pierre Schaeffer, Chion also theorised that we predominantly interpret sound as being “Causal” (demonstrating a clear cause/effect relationship) or “Semantic” (expressing a meaning that is culturally / linguistically understood).
Where a transduction between sound and image streams occurs in AV performance, a causal relationship is established for the audience. In the following two examples, the relationship between audio and visual signal is conflated.
Robin Fox sends audio signal to the mechanical servos that control the lasers.
Botborg create a feedback loop connecting an audio feed to a video feed through a vision mixer.
Study in Direct Causal AV Transmission (work in progress)
In order to represent this approach to AV performance I’ve adopted a tool (currently on loan from Joel Stern) that Robin Fox has also been experimenting with lately: the Synchronator, an audio-to-video converter designed by Dutch artists Gert-Jan Prins and Bas van Koolwijk.
Using Ableton Live, three tone generators built in Max for Live, a MicroKontrol MIDI keyboard and the Native Instruments Reaktor patch “The Finger”, I can demonstrate a real-time transduction. Audio signal is sent to three separate RCA inputs representing red, green and blue. Video sync pulses and color-coding signals are added to this input, converting it to a composite video signal for projection.
Due to the nature of the device, the best results seem to come from a mixture of extreme signals. While not entirely consistent, frequency equates predominantly to the number and size of scan lines, with timbre, harmonics, amplitude and phase modulation affecting the movement of those lines. The following is a work-in-progress demonstration; the final Study will be presented sporadically throughout my research and included with my findings on completion.
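The mapping described above can be sketched in software. To be clear, this is not the Synchronator’s actual analog circuitry, only a minimal digital analogy (the function names and parameters are my own illustrative assumptions): three audio tones are read as the red, green and blue intensities of successive pixels, so a tone’s frequency sets the banding of the resulting scan lines, and changes in amplitude or phase shift the pattern.

```python
import math

def tone(freq, n_samples, rate=48000, amp=1.0, phase=0.0):
    """Generate n_samples of a sine tone, standing in for one audio input."""
    return [amp * math.sin(2 * math.pi * freq * t / rate + phase)
            for t in range(n_samples)]

def frame_from_audio(freqs, width=64, height=48, rate=48000):
    """Map three audio signals onto the R, G and B channels of a video frame.

    Each video line consumes the next `width` samples of each signal,
    so a signal's frequency determines how many bright/dark bands
    (scan-line patterns) appear down the frame.
    """
    n = width * height
    signals = [tone(f, n, rate) for f in freqs]
    frame = []
    for y in range(height):
        row = []
        for x in range(width):
            i = y * width + x
            # Rescale audio from [-1, 1] into an 8-bit pixel channel.
            rgb = tuple(int((s[i] + 1) / 2 * 255) for s in signals)
            row.append(rgb)
        frame.append(row)
    return frame

# One frame driven by three tones an octave apart (R, G, B).
frame = frame_from_audio((440.0, 880.0, 1760.0))
```

Raising a tone’s frequency in this sketch multiplies the number of bands in its colour channel, loosely mirroring how frequency governed the number and size of scan lines in the demonstration.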
A Study in Direct Causal AV Transmission (W.I.P.) from Performing Audiovisualist on Vimeo.
Some questions yet to be answered:
Does the transduction have to occur in the analog domain or can it be equally effective digitally?
Does the transduction have to pass from one sensory domain to another: can it be medium to medium, context to context?
Does the power of AV synchresis make up for the extreme nature of the sounds required to stimulate the Synchronator… and are there ways to mediate the sound signal into a more palatable form?