Performance Experiments Phase 1: Left of Left

“Barrett’s set, for example, explores a very intriguing, almost linear, narrative from blurry, sun-kissed photography and warm spacious ambience to the subdued loneliness of melancholic imagery and trilling feedback before finally collapsing into a dark, droning portrait of utter desolation. Barrett orchestrates the imagery well with a blend of abstract sound, field recordings and live instrumentation intermingling throughout but there is a certain gracelessness to the transitions between tones that prevents the performance from reaching the heights it should.” Matt O’Neill – Time Off

Yes, there were technical difficulties… but I digress…


The Setup:

A couple of months prior to this performance, Tom Hall approached me with the idea of Lawrence and me joining him on the closing night of his installation for mixed AV performances. The catch was that we needed to use or approach his material, though how we did so was left up to our individual tastes.

About one month prior to the performance, I received around 20 GB of source material from Tom to start playing with…

The Compositional Approach:

My approach was to work in a similar fashion to the way I have remixed sound for other artists, that is, somewhat subtractively, so my influences were pretty close to home. I wanted to simultaneously convert Tom’s footage into something more indicative of my own tastes, while maintaining an individual style / aesthetic that carried from image to sound and vice versa. I made numerous passes of the source material through Isadora using Blend and Luma Key filters to reduce the visual elements to minimalist flecks of colour on black that would, in theory, mix well together.
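
The luma-key reduction described above is essentially a per-pixel brightness threshold. Here is a minimal sketch of that idea in Python with NumPy; the function names, the 0.15 threshold and the additive blend are my own illustrative assumptions, not Isadora’s actual internals:

```python
import numpy as np

def luma_key(frame, threshold=0.15):
    """Zero out every pixel whose luma falls below the threshold.

    frame: RGB float array in [0, 1], shape (height, width, 3).
    What survives reads as flecks of colour on black.
    """
    # Rec. 601 luma weights, common in video tools.
    luma = frame @ np.array([0.299, 0.587, 0.114])
    return frame * (luma > threshold)[..., None]

def blend_additive(a, b):
    """Additive blend of two keyed passes, clipped to the displayable range."""
    return np.clip(a + b, 0.0, 1.0)
```

Because the keyed frames are mostly black, additively blending several passes tends to interleave the surviving flecks rather than wash the image out.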

As I was also experimenting with a Rutt/Etra simulation in Quartz Composer at the time, I thought it would be appropriate to experiment with its Z-depth and zooming abilities to create depth in the compositions.
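
The core Rutt/Etra trick is to redraw the image as horizontal scanlines, displacing each line vertically by pixel brightness so luminance reads as depth. A rough sketch of that mapping (parameter names and values are my own, not the Quartz Composer patch’s):

```python
import numpy as np

def rutt_etra_scanlines(luma, z_scale=40.0, step=8):
    """Turn a greyscale frame into displaced scanlines, Rutt/Etra style.

    Every `step`-th row becomes a polyline whose y coordinate is pushed
    upward in proportion to pixel brightness, reading as depth (Z).
    luma: 2-D float array in [0, 1]. Returns a list of (width, 2) arrays.
    """
    h, w = luma.shape
    lines = []
    for y in range(0, h, step):
        xs = np.arange(w, dtype=float)
        ys = y - luma[y] * z_scale   # brighter pixels displace further
        lines.append(np.stack([xs, ys], axis=1))
    return lines
```

Zooming then amounts to scaling `z_scale` (and the point coordinates) over time, which is where the sense of travelling through the image comes from.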

I also visited the installation during the day to obtain footage of the curtains, which creates an interesting visual simulation of the actual installation when blended with the other footage.

Ultimately I rendered out about two hours of prepared footage, which was then reduced to 30 minutes of possible footage for the performance and sorted into four distinct pieces that would flow one into the other.

Sound was linked to the vision semantically (relating the sounds to whatever the image denoted) or symbolically (whatever I felt the image signified or connoted). A large percentage of the sound was generated by watching the footage and applying different values to a Waldorf Blofeld synthesiser until an appropriate correlation developed. The sequences were then cut and connected to the prepared footage in Final Cut Express and rendered out as 640×360 (16:9) Photo-JPEG files to be mixed in Isadora during the performance.

As a platform for doing strange things to video, Isadora is somewhat peerless. Modules are connected one to the next in a similar fashion to the way Max/MSP (though not at quite as low a level) or AudioMulch work with audio. As a performance tool there are some issues to overcome, particularly the inability of video files to loop cleanly, something that works much better in programs like Resolume and VDMX. In Isadora this can be worked around with a loop patch put together by Fred Vaillant; however, that patch takes over control of volume/opacity (something I needed control of) and requires files of more than 10 seconds in length to work as-is. So I hacked this patch for my performance, and it didn’t quite work: using volume/opacity to trigger the envelopes does not behave consistently, for a reason I’ve yet to completely fathom (though I produced a more stable version for the N4rgh1l3 performance, which will be reported on in one of the following blogs).
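
The general idea behind such loop patches is a two-player crossfade: a second copy of the clip restarts just before the first one ends, and the two are faded against each other to hide the loop point. A minimal sketch of that gain logic (my own simplification with a linear ramp, not Vaillant’s actual patch):

```python
def crossfade_loop_gains(t, clip_len, fade=0.5):
    """Gains for a two-player crossfade loop that hides the loop point.

    Player A plays the clip; `fade` seconds before A ends, player B
    restarts the clip from zero while A fades out and B fades in.
    t: seconds into the current pass. Returns (gain_a, gain_b), which
    always sum to 1.0 so the overall level stays constant.
    """
    overlap_start = clip_len - fade
    if t < overlap_start:
        return 1.0, 0.0              # only player A audible/visible
    x = (t - overlap_start) / fade   # ramps 0 -> 1 across the overlap
    return 1.0 - x, x
```

Because the crossfade itself is driven by volume/opacity, any external controller mapped to those same parameters fights the loop envelope, which is exactly the conflict I ran into.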

The Performance:

As you can see, Tom did a great job setting up the space with curtains and three projectors. With regard to my material, the level of light coming in from the windows was somewhat problematic for my very dark footage; the quick addition of a gamma corrector brightened the footage but also reduced detail to horrible pixellated blocks. The main problem was not so much the video as the seemingly random volume spikes (not that obvious in the selected footage, but that is the only footage I would want to keep) and a rather jerky set of controls that made it easy for me to accidentally turn audio/video layers on or off but not so easy to smoothly blend them.

So the reviewer calling the performance “graceless” is accurate. This was not a live setting where it really mattered how live I was, except when it was obvious that I was screwing up. Even with the bright notebook light it was quite difficult to tell which buttons on the NanoKontrol I was hitting. In fact it is something of a miracle I got a performance as good as what is recorded above – I place my faith in drones! 😉

The audience was very supportive, but I was very disappointed, as I felt my good intentions/inventions were wasted on an unreliable control setup and a screen that made it difficult to actually see my work. So I thought I’d recreate the pieces at home in our downstairs laboratory. Here are two examples of what it was supposed to look like:

The first is a great example of the Rutt/Etra at work, splitting and abstracting depth in the image. The second is some datamoshing of Tom’s material with some of my recent (similarly drifting) footage, which I did not get to reproduce live due to the technical difficulties and time constraints.

Thoughts:

The audience and context for this work were pretty much perfect (as opposed to the following performances, which I will discuss in later blogs), with the only real issue being the rear-light projection coming from the street. Now, I do recall Tom mentioning that he would close the blinds for the final gig – and this was the assumption I was composing my parts for. What does one do as a performer when this situation arises? Accept and adapt, or demand a more appropriate setup? I certainly feel that I’ve been through many a gig where I left unhappy with my performance, in part due to the setup being inconsistent with my performative goals. I’ve curated enough performances to know that some artists pride themselves on their stubborn demand for a checklist of contextual settings. Working closely with the Other Film group I’ve seen first-hand how variable the goalposts are, particularly with expanded cinema that often exists only within a prescribed setup. In the interest of further expanding the flexibility and sustainability of AV performance, I guess, in the end, it would’ve been better to reorganise my setlist to focus on the brighter material. I certainly am not the type to split hairs when I’ve been invited to perform at someone else’s installation.

While the conceptual framework was simple, the construction of my performance interface left a lot to be desired. In an effort to devise a system that would allow direct control of all elements, I failed to consider how much control I needed and how few hands I had. In the end it would not have hurt to automate a few more processes, especially given that the audience was staring at my arse and could not have seen any of the micro-gestures employed. Did the gracelessness of the performance, and the failure of some material to trigger as intended, demonstrate that I did have live control? If so, did the audience really care? Based on the review, I feel it might have been better to provide more of an illusion of control; all the better for the smoother performance the material required. Given that the interface was finalised about four hours before the performance, I guess I can’t expect miracles. It would be like learning a chord in order to play it in front of an audience without any form of practice. But worse 😉

It wasn’t all bad news though – bad gigs at this point in my research are… well… good research, or at least points of contention for me to bust through. So I need a more fluid interactive surface and more experience with solo manipulation of both audio and video in real time – with perhaps a small amount of automation/puppeting, enough to keep the performance smooth but interesting. At the very least I got a decent eyeful of Tom Hall’s VDMX setup, a program that seems to work more seamlessly in a live context (at least with regard to what I am trying to do here and now). I promptly bought a copy and successfully deployed it in the next live scenario to be discussed…

Addendum:

Just read a review of Liquid Architecture in Sydney by Tom Ellard. I’m particularly interested in his comments on Thomas Köner:

Last up was a German fellow, because the Goethe Institute sure seems able to pay for stuff. He was making a soundtrack for an unseen film, something a lot of sound students do because you can get a bunch of location recordings and play dramatic music underneath which hides your music inside Sound Art. Anyway his location recordings were pretty good and the film music was alright so I settled back into my snooze for a while. But on peeping I found he’d started showing video.

Now once you’re showing video you’re no longer doing the ‘unseen film’ – you’re bound by the same rules as any other soundtrack maker – relevance/resonance with the screened image. There wasn’t much. On screen we arrived at train stations in London in slow motion plus a difference layer in After FX. It looked quite nice for the first 5 minutes, after that, not so much. Sonically there was increasing layers of squoonsch – desperate really, as if squoonsch was the special sauce of Sound Art. That lost my interest.

What I learned: drones are the coward’s tool. Spurn drones.

Some interesting points: when he played in Brisbane, Andrew T and I spent most of the performance trying to catalogue his approach (we reckon multiple similar layers, some inverted and shifted, definitely difference opacity). But spurn drones? Come on Tom, I know drones are kinda lazy, easy to mix with and synchronise with footage, but they are also the foundation of much of my compositional work and the work I choose to listen to. Applied with footage, drones have worked for material from Phil Niblock to the Hafler Trio, but some consideration should be given to how they are used in those circumstances before making blanket statements. In particular I’m interested in the eyes-shut trance state that drone music implies, how that inner cinema is broken by the combination of drones with vision, and ways that this can be applied for good reason instead of novelty or laziness.
