Metasynth and audiovisual composition

As part of my research I've been considering Metasynth as an interesting AV composition tool – particularly as it links to the early pioneers of visual music and the work of Iannis Xenakis.  You might have heard of Metasynth, the enigmatic OS X-only AV synthesiser, from this Wired article about Richard D. James (Aphex Twin).

Here is the track in question:



That example utilises the image as a filter and is really only a very small part of the rabbit hole / time sink that is Metasynth.  My initial impression of it as “Paintshop for Sound” really only applies to the Image Synth section, which allows you to paint shapes and colours.  When using it with the default waveform synth you get the typical sinusoidal sounds also available, for free, on Windows with SPEAR.  What is missing from that equation is that the spectral image doubles as a compositional palette which can be applied to other sounds in a variety of ways.  Each pixel is a potential note – length on the X-axis representing duration, height on the Y-axis representing frequency or pitch.  These pixels can drive a variety of different waveforms in single and multi-wavetable synthesisers.  They can also be used to drive samples (more on that later).  They can be mapped to particular tuning systems, tuned to specific keys and set to specific scales within the key.
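
To make that mapping concrete, here is a minimal Python sketch of the idea (purely illustrative, not Metasynth's actual engine): columns of a bitmap become time steps, rows become pitches on an assumed semitone scale starting at 55 Hz, and any lit pixel triggers a sine oscillator at the corresponding frequency, with brightness as volume.

```python
import numpy as np

SR = 44100            # sample rate (assumed)
COL_DUR = 0.125       # seconds per pixel column (assumed)
ROWS, COLS = 64, 32   # image height (pitches) and width (time steps)

# Rows map to a semitone scale, with the top of the image as the highest pitch.
freqs = 55.0 * 2 ** ((ROWS - 1 - np.arange(ROWS)) / 12.0)

# Stand-in "image": a single rising diagonal of bright pixels.
image = np.zeros((ROWS, COLS))
for col in range(COLS):
    image[ROWS - 1 - (col * ROWS // COLS), col] = 1.0

samples_per_col = int(SR * COL_DUR)
t = np.arange(samples_per_col) / SR
out = np.zeros(COLS * samples_per_col)

for col in range(COLS):
    segment = np.zeros(samples_per_col)
    for row in range(ROWS):
        amp = image[row, col]                  # pixel brightness acts as volume
        if amp > 0:
            segment += amp * np.sin(2 * np.pi * freqs[row] * t)
    start = col * samples_per_col
    out[start:start + samples_per_col] = segment

out /= max(1e-9, np.abs(out).max())            # normalise to -1..1
```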

This notational representation can be edited in a similar way to images in Photoshop: blurred to provide reverb, scattered for delay, stretched up and down for various glissando effects.  Volume relates to the brightness of the pixel, so contrast defines dynamics.  Colour, ranging between red, yellow and green, defines stereo positioning, so gradient filters can be used to create panning effects.
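
As a rough reading of that brightness and colour mapping (an assumption on my part rather than Metasynth's documented colour model), a pixel's luminance can set the note's level while the balance of its red and green channels sets the stereo position: red hard left, green hard right, yellow centred.

```python
import numpy as np

def pixel_to_amp_pan(r, g, b):
    """Map an RGB pixel (0..1 floats) to (amplitude, pan) with pan in [-1, 1]."""
    amp = 0.299 * r + 0.587 * g + 0.114 * b       # luminance as volume
    total = r + g
    pan = 0.0 if total == 0 else (g - r) / total  # red -> left, green -> right
    return amp, pan

def pan_gains(pan):
    """Equal-power panning: return (left_gain, right_gain) for pan in [-1, 1]."""
    angle = (pan + 1) * np.pi / 4                 # -1..1 mapped to 0..pi/2
    return np.cos(angle), np.sin(angle)

# A bright yellow pixel (equal red and green) sits dead centre at full level.
amp, pan = pixel_to_amp_pan(1.0, 1.0, 0.0)
left, right = pan_gains(pan)
```

A gradient filter swept across the image then simply shifts these per-pixel pan values smoothly over time, which is what produces the panning effects mentioned above.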

It is a very niche tool, however, and while the official tutorials help, I feel that mastery will only come from years of use.  I have found that unofficial tutorials can actually be a better starting point, particularly this one.  The ability to morph sound-objects into each other is something I've been playing with for a decade or so, and “Example 1” from that tutorial identifies one of the more useful ways Metasynth can interpolate sounds.

The following track on my SoundCloud account demonstrates a few different results from using the spectral information from one sample to play another.  It goes beyond basic frequency- and amplitude-based morphing to include interesting tonal and textural elements that I've previously only achieved by chance with granular synthesis.
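
For a feel of what “using the spectral information from one sample to play another” means in signal terms, the sketch below imposes the frame-by-frame magnitude spectrum of a modulator sound onto the phases of a carrier. This is a generic STFT cross-synthesis idea and only an approximation; it is not a description of Metasynth's own algorithm.

```python
import numpy as np
from scipy.signal import stft, istft

SR = 44100
N_FFT = 2048

def cross_synthesize(carrier, modulator, sr=SR, n_fft=N_FFT):
    # Trim both signals to the same length so the spectrogram grids line up.
    n = min(len(carrier), len(modulator))
    _, _, C = stft(carrier[:n], fs=sr, nperseg=n_fft)
    _, _, M = stft(modulator[:n], fs=sr, nperseg=n_fft)
    # Keep the carrier's phases, replace its magnitudes with the modulator's.
    hybrid = np.abs(M) * np.exp(1j * np.angle(C))
    _, y = istft(hybrid, fs=sr, nperseg=n_fft)
    return y / max(1e-9, np.abs(y).max())

# Toy stand-ins for the two samples: a 110 Hz sawtooth carrier and decaying noise bursts.
t = np.arange(SR * 2) / SR
carrier = 2 * (t * 110 % 1) - 1
modulator = np.random.randn(len(t)) * np.exp(-3 * (t % 0.5))
result = cross_synthesize(carrier, modulator)
```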
