A super early version of what could be considered granular video inspired by:
- granular synthesis;
- Phil Niblock’s film The Magic Sun, featuring Sun Ra;
- and the “Say No More” projects of Bob Ostertag.
My goal was to create a performance system that could be ‘played’ more readily than most source-material-centric AV works. Referencing the “Surrogate Band” concept used by Pink Floyd in The Wall (in the handful of original live concerts they opened with a fake Pink Floyd band hidden behind masks), the initial idea was to mix together the sound and image of improvisers to form a cohesive piece. The work comments on some of the issues with audiovisual (particularly laptop) performance by recontextualising the gestures musicians employ in the generation of spontaneous sound as a writhing, sound-activated collage. Real-time control of captured footage creates a dialogue with the “tense” of performance: the “now” that is “then” becomes a new “now” through the ability to improvise with arbitrary sequences, channelled through the simple control mechanism / compositional system that defines the work.
Ok so enough blather… how does it work?
I found Isadora too slow to work with the source material in an incisive fashion, so I chose VDMX for its flexibility and AudioMulch to feed sound through two ‘dlgrains’ granular objects. As evidenced by the above screen grab, VDMX windows can be positioned to allow access to the windows behind – important, as I needed to see the dlgrains settings. The videos are step-sequenced in VDMX and I have control (via a NanoKontrol) of the speed and direction of the files. The mouse pointer is only used to change the step sequence; a series of key presses was programmed to cycle through the source footage. Video files were rendered with audio as Photo JPEG, 640×360 @ 59.94fps, so that I could slow the footage and still maintain quality – I’ve yet to test whether this makes any perceivable difference.

Sound from the videos is routed, via Soundflower, to the two dlgrains objects in AudioMulch. Unfortunately VDMX does not solve the problem of selective routing of audio from video files that plagues Isadora, so a stereo output is sent, split in Mulch, and returned as four mono signals. These signals are sent to the Audio Analysis tools in VDMX, which are used to define the opacity of the videos on each of the four layers. Sound then carries out to the PA by forwarding the analysed sound from the AA tools to the soundcard; this can also be achieved by ‘listening’ to the Soundflower output.
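To make the last step concrete, here’s a rough Python sketch of the level-to-opacity idea – not part of the actual patch (the real mapping happens inside VDMX’s Audio Analysis tools), just an illustration of how the RMS level of each mono return might drive a layer’s opacity. The function names, the logarithmic curve and the floor/ceiling values are all my own assumptions:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one block of mono samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def opacity_from_level(level, floor=0.001, ceiling=1.0):
    """Map an RMS level onto a 0..1 opacity using a rough dB-style curve,
    so quieter material still registers visually. floor/ceiling are guesses."""
    level = max(floor, min(ceiling, level))
    # log scaling: level == floor -> 0.0, level == ceiling -> 1.0
    return (math.log10(level) - math.log10(floor)) / (
        math.log10(ceiling) - math.log10(floor))

# Four mono returns from AudioMulch, one block each (dummy data here):
layers = [
    [0.2, -0.3, 0.25, -0.1],   # moderate level
    [0.01, 0.01, 0.01, 0.01],  # quiet
    [0.9, -0.8, 0.7, -0.6],    # loud
    [0.0, 0.0, 0.0, 0.0],      # silent -> layer fully transparent
]
opacities = [opacity_from_level(rms(block)) for block in layers]
```

A linear mapping would work too, but then the quiet granular material would barely be visible; the log curve keeps low-level grains on screen.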
On some levels (and at this early stage) the work has achieved all I wanted. Issues so far:
- need more footage / performers for greater variety;
- less need to control the dlgrains in real time, as this is difficult to do – perhaps set the Metasurface in AudioMulch to a series of recorded positions (value snapshots) which can then be interpolated between by connecting an x/y touch surface (either one of the iPhone OSC apps or the Korg KP3 Kaoss Pad);
- research whether similar value snapshots can be set in VDMX to streamline changes of material;
- consider implementing the framework in Max 5 / Jitter so that a direct link between control data and audio can be made – it may also allow the audio to be separated more effectively (I’m having to reinstall Max 5 as it seems I have only 3 Jitter objects… is this correct?);
- without overcomplicating things, the ability to change transitions and layer opacities, varying the interaction between videos from a collage effect to something else..?
- I’ve been informed that Andrew Sorensen’s Impromptu may provide the ability to analyse and batch-cut videos, making each sequence less arbitrary – this could be very useful as the “granularisation” of the video becomes more in-depth;
- provide some kind of data structure (based perhaps on histogram analysis) of the “energy” of both sound and image that can be interpreted and used within the performance.
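On the Metasurface point above: the underlying operation is interpolating between stored parameter snapshots based on an x/y position. A minimal Python sketch of one way to do it (inverse-distance weighting; the snapshot layout and parameter names are hypothetical, and AudioMulch’s actual interpolation scheme may well differ):

```python
def interpolate_snapshots(x, y, snapshots):
    """Inverse-distance weighting over parameter snapshots placed on an
    x/y plane -- a rough analogue of a Metasurface.
    Each snapshot is ((sx, sy), {"param_name": value, ...})."""
    weights = []
    for (sx, sy), params in snapshots:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return dict(params)  # touch point sits exactly on a snapshot
        weights.append((1.0 / d2, params))
    total = sum(w for w, _ in weights)
    out = {}
    for w, params in weights:
        for name, value in params.items():
            out[name] = out.get(name, 0.0) + (w / total) * value
    return out

# Two hypothetical dlgrains snapshots at opposite corners:
snaps = [
    ((0.0, 0.0), {"grain_size": 10.0}),
    ((1.0, 0.0), {"grain_size": 50.0}),
]
blended = interpolate_snapshots(0.5, 0.0, snaps)  # halfway between the two
```

An x/y OSC surface would then just feed `(x, y)` into this and the blended values out to the dlgrains parameters.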
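And on the last point, a first stab at what an “energy” descriptor for the audio side might look like – per-frame energy bucketed into a coarse histogram that could be read during performance. Again a sketch under my own assumptions (frame size, bin count), not a finished analysis tool, and the image side would need an equivalent built from pixel data:

```python
def frame_energy(samples):
    """Mean squared amplitude of one frame of mono samples."""
    return sum(s * s for s in samples) / len(samples)

def energy_histogram(frames, bins=8):
    """Bucket per-frame energies into a coarse histogram, normalised to
    the loudest frame, giving a rough shape of the material's dynamics."""
    energies = [frame_energy(f) for f in frames]
    hi = max(energies) or 1.0  # avoid dividing by zero on silence
    hist = [0] * bins
    for e in energies:
        idx = min(bins - 1, int(e / hi * bins))
        hist[idx] += 1
    return hist

# Three dummy frames: silent, full-scale, mid-level
frames = [[0.0] * 4, [1.0] * 4, [0.5] * 4]
hist = energy_histogram(frames, bins=8)
```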
When I’ve fine-tuned this system I would like to put it to work as a live improvisation tool in sessions and performances with some of the improvisers I have recorded.
Next up… N4rgh1l3 and the balance of “art”, music and context.