Forming Thoughts

After a month of back and forth I’ve finally got the K-Mix working.  Kudos to Evan from K.M.I. for his technical support.  Go buy his awesome Max For Live device.

So now the box looks like this:

Axoloti driven synth box

I’m still using the MacBook to run the Axoloti editor for now but hoping I won’t need it during gigs.
The Pad Kontrol is a great cheap Gumtree pickup that is perfect as a controller for the Axoloti as I can:

  • run it directly from the Axoloti USB port – no additional power supply or MIDI leads necessary;
  • edit MIDI CC and Note values on the fly for encoders and pads;
  • set up an X/Y controller for unique parameter morphing (X/Y also has an interesting roll function);
  • send program change values, which curiously not many new MIDI controllers do (K-Mix included).

The fact that it sends PGM values means I can ditch the MacBook (or use it for visuals) once I work out a reliable way of creating patch banks in the Axoloti (it’s there – just not 100% trustworthy yet).
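For anyone curious about what the controller is actually sending, a program change is a tiny MIDI message and is easy to experiment with from a laptop first. Below is a minimal Python sketch using the mido library; the port name and program number are placeholders, not part of my actual setup.

```python
# Minimal sketch: send a MIDI program change, the same message type
# the Pad Kontrol fires at the Axoloti to switch patches.
# Requires the mido library (pip install mido python-rtmidi).
import mido

# List available MIDI outputs and pick the one you want.
print(mido.get_output_names())

with mido.open_output("Axoloti Core") as port:  # placeholder port name
    # Program numbers are 0-127; which patch each one loads depends on
    # how the patch bank is set up on the Axoloti side.
    port.send(mido.Message('program_change', channel=0, program=3))
```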

At the moment the routing goes like this:

Inputs to K-Mix

  • 1 = SM58
  • 2 = System-1m
  • 3/4 = Nord Modular
  • 5/6 = Axoloti
  • 7/8 = Aira FX

Outputs from K-Mix

  • 1/2 = Main Outs to Desk
  • 3/4 = to Aira FX
  • 5/6 = to Nord Modular
  • 7/8 = to Axoloti

So plenty of scope for interesting routing – and then there is the modulation routing with the Quad LFO and Sys-1m.

My current direction is towards a series of improvisations with the Axoloti environment as a reloadable patching framework sending MIDI to the other synths.  These will be things I can set up and perform – producing multiple versions from the same template.

The first of these improvisations is at the start of this post.

Below is an improvisation with Joe using the setup (sans the K-Mix).

Project: Thought Forms / Artist: Makrotulpa

A quick browse to the beginnings of this blog will show that at one stage I was focused on researching AV performance approaches.

I stopped in part because I started to feel that my goal of a performance setup where sound and image could be synthesised simultaneously in real time was being limited by rising CPU/GPU requirements and a lack of money to pay for them.  It also seemed a little ludicrous chasing the high end when my musical inspiration is more around small scale abstraction where individual perception joins the dots.

Today I’m happy to say the dream is close and I’m back experimenting with AV.  Here is an early demonstration.

I’m using Lumen, an exceptionally capable real-time visual synthesiser that just recently received MIDI control capabilities.  I designed the Max For Live controller so that I can manipulate it from the Push, record automation and, most importantly, save parameters with PPTC, which can store settings for any device on a per-channel basis.  So I can improvise and save the more interesting combinations.

I’m also using the Encoder Audio Max For Live sequencers quite extensively here, particularly the Source and Turing Machines going to the synths and Polyrandom to Microtonic for percussion.  The Encoder Audio stuff is great to control from the Push 2 (with some fiddling) and sends mapping data to other controls, which is where the AV connection will ultimately arise: rather than doing the usual (and to me boring) envelope-to-opacity trick, I’m more keen for sequenced modulation mapping from different elements to different elements.

All of this is early days.  It seems to be fine to play with so long as I smooth the modulations before hitting Lumen.  So far recording with Syphon has led to a crash every time, so it’s not stable.  There is a gig in a month or so to work to and maybe some AV recordings.
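However the smoothing ends up living in my setup, the idea itself is simple enough to sketch outside it.  Here is a rough Python illustration of a one-pole filter over incoming CC values before they get forwarded on; the port names and the 0.2 coefficient are placeholders, not my actual settings.

```python
# Rough sketch: smooth incoming MIDI CC values with a one-pole filter
# before forwarding them, so a visual synth doesn't get stepped jumps.
# Requires the mido library (pip install mido python-rtmidi).
import mido

ALPHA = 0.2      # placeholder smoothing amount: lower = smoother/slower
smoothed = {}    # last smoothed value per (channel, controller)

# Placeholder port names - substitute whatever your system reports.
with mido.open_input("Push CCs") as inport, mido.open_output("To Lumen") as outport:
    for msg in inport:
        if msg.type != 'control_change':
            outport.send(msg)           # pass everything else through untouched
            continue
        key = (msg.channel, msg.control)
        prev = smoothed.get(key, msg.value)
        new = prev + ALPHA * (msg.value - prev)   # one-pole low-pass
        smoothed[key] = new
        outport.send(msg.copy(value=int(round(new))))
```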

CloudBounce Blind Test Results

 


My mixing/mastering DAW of choice, Tracktion, has just added CloudBounce as a feature.  This is one of those online mastering services like Landr, centered around a machine-learning algorithm.  The results of the blind test are below.

 

I tested Landr too when it launched, and the results indicated it was expecting a more conventional kind of track – not the variable dynamics and lack of consistent transients common to electroacoustic improvisation (or “noise” if you’d prefer!).  While I’m happy to explore procedural generation as a creative tool for visual art and sound design, I’m not entirely convinced it has a place in the finalisation of tracks, as these choices seem to me to be highly personal.


 

Having said all the above – while I’ve studied some of the approaches to mastering – I have no specific qualifications.  So I thought I’d do a blind test.  The playlist below features a track mixed by Joe from a session we did a month ago.  I’ve uploaded his raw stereo file to CloudBounce to see what it could make of it.  I’ve also done my own mastered version in the box using only the software I have regular access to.  The results can be streamed or downloaded.

My questions to listeners:

  1. Which is the CloudBounce master and which is my version?
  2. Which do you prefer?

RESULTS (including poll data and PMs)

Note that very few people who PM’d me chose to guess which was the CloudBounce master; however, of those that did, and including the poll results, 60% of the vote guessed that the second track was mastered through CloudBounce.

The reality is that the CloudBounce master was track 1.

The response was pretty unanimous that track 1 was the preferred track with greater definition.  It was also considered to be “louder”.  One user responded that they thought the second track was “dull” whereas another, while suggesting track 1 was a better mix, liked the softer focus of track 2.

Personally I was pleasantly surprised at the quality of the CloudBounce master. Of particular interest to me was the increased sonic space it brought out, situating the spiky transients more effectively amongst the deep / distant reverberant sounds.

One caveat is that my mastering attempt was not thorough.  I was working with a pre-mixed stereo track and merely applied Airwindows Console4 (Buss and Console) with Melda Spectral Dynamics and TurboComp, flattening the overall sound somewhat, which perhaps accounts for the dull/unfocused result.  Clearly I have some learning to do.

I’m certainly keen to use CloudBounce services again.  If you are reading this and would like to test it out, please click this referral link and let me know how you go.

 

 

A space for sonic contemplation

With no specific projects on the horizon, it is of course time to make some.  As is always the way – I’m refining my current setup.  Regular readers may remember this picture from times past…

the old synth table

Well, I’ve returned to this idea somewhat and managed to find a neat way to fit everything and maintain a workspace for recording and mixing, with room to also work on the educational stuff that pays the rent/bills.  Having a dedicated desk makes it easier to focus on improvisation and development without mouse tweakage.

the Loopstation table

All the hardware runs through the MOTU, which is controlled via Push and Ableton.  The V-Synth is being used as both a source and a controller; I discovered the X/Y pad is great for modulating the System-1m filter.  Three stereo channels out from the MOTU go to the R16 for recording.  There is also a send/return that goes via the Aira FX, one send that goes to the Loopstation and returns as a stereo feed to the R16, and one that goes to the V-Synth for resampling of material.

gear panorama

I’m going to try and keep this arrangement for a while as it seems pretty comprehensive and useful.  The upcoming Live 9.7 features, especially the ability to select ins and outs, will make this setup even more viable.  Here is an ambient improvisation I recorded.

… for the Noise Wall

I will be collaborating with Joe Musgrove and Adam Sussman at the closing night party of Ali Bezer’s Noisewall on Friday 24th June, 6-10pm.

The venue is The Walls Art Space on the Cold Ghost (address details in the link).

For this performance I’m going Push-less!  Thought I’d get some preparation in.

Joe has composed some superb concrète for the exhibition, so it’s worth checking out during regular gallery hours: Wed to Sat, 11-4.

Response to Black Mercury so far has been very encouraging.  It will be heading to regular distribution (iTunes, Google) and streams (Apple Music, Spotify) in a week or so but Bandcamp remains the best deal for all.

 

2016 – getting past the melancholy cycles

The best thing I did in 2015 was complete (to me) the ultimate ambient track.  Here it is, released as part of the fifth iteration of Ambient Online.

2015 brought on a number of personal stresses, not least the death of my father, that hampered my ability to focus, network and evolve.

I grief-shopped my way towards a number of excellent musical setups but have so far found it a challenge to settle on a strategy.  Perhaps this waywardness is actually a feature, but then I realise I’m spending more time setting and resetting my setup than making.

the Bell Bower studio

Taking December off Facebook seemed merely to highlight my isolation and my newfound spare time was mostly spent playing Fallout 4 and Witcher 3.

It seems as soon as I announce I’m doing something (even with a relatively clear direction) the motivation and ability to do that thing falls by the wayside.  I’m still determined to finish whatever Black Mercury is but it may need to shift form slightly.

Resolutions just remind me of government/political machinations, so rather than attempt to trick my mind into believing something to be of relative importance, I’m suggesting my primary intentions for 2016 are:

focus on the art, not the tools – much of my recent approach has been about extending a technique, which has become frustrating as the results are often unpredictable, so… focus on the end product more than the unreliable process;

really use what I’ve got – stop grief shopping – synthetic enhancements, both cyber and IRL, are not helping me be more creative or productive.  A Push 2 is arriving soon and I’m particularly looking forward to using the new sampling/conversion tools with the synths directly from the device.  Keeping the setup simple and focused will hopefully allow the uniqueness of my work to unfold naturally while I level up my synthesis skills;

research and production are parallel, not serial – much of my time is spent researching, contemplating and executing new approaches and technologies.  I love this, but I think it needs to be made clear (to myself) that the research can run parallel and influence the work; it does not have to directly precede and/or lead to results for public consumption.  I’ve been spending too much time conceptualising and not enough actually doing;

create socially – this may be the hardest ask, but for so long music making has been a solitary pursuit (excepting the occasional jams with Paul and Joe).  I’d like to find another outlet for compositional and performative collaboration – be it virtual or IRL.  So if you read this and have an idea I might be able to contribute to, let me know, because I’m keen to get out a little more.

help others create – as a teacher I’m already kinda doing this, so maybe that should read “others that give a shit” ;-P  Recently Stacey started playing her flute again and, with the aid of my old iPad 2 running LoopyHD, has been putting together exceptionally well-formed audio poems (see below).  Given that we have collaborated in the past (and my incredibly valuable skillset #incrediblyobvioussarcasm) it might make sense to produce her stuff, but given this world is full of women produced by men only because they aren’t allowed to touch the man toys, I feel it would be more productive to help her learn how to use Logic.

Favourite things of 2015

The Laundry Series by Charles Stross.  Imagine a hybrid of “The Thick Of It” style political satire with a more Lovecraft-leaning “The X-Files”.  My disinterest in literary serialisation was shattered by this series which manages to be both hilarious and gripping.

The Golden Communion by Thighpaulsandra.  For someone as immersed in creative synthesis as I currently am, this is a no-brainer.  There are loads of great Bandcamp albums made with synths this year but most, for better and sometimes worse, focus narrowly on particular aesthetics with less consideration for the arrangement over time.  I’d include my own releases this year within this criticism and need to credit Stacey for helping me identify the issue.  I’ve always found it difficult to recommend TPS albums as his eclecticism tends to alienate, but it is precisely this that makes this more than just another great synth album.

Dangerous Orbits by Bérangère Maximin.  Very glad to know that concrète approaches have a place in a time when everything is digitally possible.  Also happy that it isn’t just a boys’ club.

This year has not been good at all for visual treats.  While there have been a few things I’ve enjoyed, there is nothing new and essential that I can recommend.  This is partly down to time and focus.

Directions, skills and motivations

Here is a sample from the ongoing Black Mercury project.

I’ve managed to get a combination of the Nord / System-1m rack and Loopstation working pretty well in studio situations.

But I’m feeling like this setup is only going to stretch so far.

My current mood of self-reflection has been influenced by two amazing things I stumbled across this week.

Firstly this performance by the enigmatic Thighpaulsandra.

This is everything I love about electronic music: it references the experimental approaches of the European school without being overly stuffy or alienating.

Thighpaulsandra’s “The Golden Communion” is an epic work that hasn’t quite met my expectations in comparison to his Double Vulgar works, but it’s unquestionably in my top 10 audio releases for this year.  I’ve been happy to accept his glam rock leanings without entirely embracing them, and the live performance has nothing that makes me shift uncomfortably.

In fact the above performance is something I would aspire to deliver.  Yet I’d be foolish to think I could compete with the decades of instrumental and compositional chops this gentleman demonstrates.  Still, it’s nice to have a hero.

The other influence was this great experimental engagement with the System-1m.

It alerted me to the fact that I could do much more with the System-1m than I have been.  However, the unfortunately named Tidal (thanks to Jay-Z et al.) looks to be a codebase I can’t easily get my head around.

From these I am reflecting that:

  1. my current musical outcomes are suffering from my approach, which relies too much on instrumental skills I don’t adequately possess;
  2. the setup and technical process ensures interesting layering (vertical) but the arrangement and development (horizontal) is lacking, if not entirely absent;
  3. my interaction with the sound creators is only exploring them in a minimalist way with too much emphasis on destructive processing to hide instrumental inadequacies.

In jam situations I’ve been dealing with constant Ableton Live crashes and the Nord Editor dropping offline.

The latter seems to be somewhat reduced by picking the right USB hub – though it is still an issue (curiously only with the MacBook Air, not with the Mac Mini).

The former is a result of the Air being tough but ultimately limited in CPU and RAM capabilities.  It’s almost always Crusher-X bringing Ableton down, and so far I’ve not found a consistent way to predict it.  So while the jamming setup is pretty good, it’s not something I’d rely on as a performance setup.

Using the Zoom R8 to record Push/Ableton-as-instrument jams works well with Joe – rather than hours of noise to pick through for the gold, we get eight tracks of semi-structured sound to sculpt more finely in a DAW.

In a studio situation the end result is much less compelling when it’s just iterations of me arbitrarily thumping the Push and twiddling knobs to mangle the output.

So my goals are outstripping my means because my approach is relying on aspects of my musicality that are not optimal.

As a remedy to this I’ve been thinking about:

  1. leaving the push / mangling for the jam sessions;
  2. working on a comprovisational setup for live performance that does not push the boundaries of computing while remaining portable, accessible and musical (possibly Push with Falcon standalone using PXT General);
  3. looking at non-realtime / rendered composition as an option for further exploring the hardware synths in my possession.

The issue with the latter is that none of the DAWs I own make it easy to do what I want, particularly with regard to algorithmic note composition and deep CC modulation.

The options are:

  • Logic X – whenever I try to do anything more than basic audio mixing or plugin usage it slows me right down.  I know it’s very powerful but it pisses me off so much;
  • Ableton Live 9 – now I have the Push I have been using it as an instrument, something the Push enables, but that also makes it even more difficult to use Live as a DAW.  I’ve never been a huge fan of the Arrangement view, and recording sections of audio in the Session view makes for the kind of dull looping I’m trying to avoid;
  • Tracktion 6 – really like it for mixing audio – it’s very fast.  Haven’t had much luck composing with it, however, as it quickly gets unwieldy adding notation to multiple tracks with multiple CCs for each.  I haven’t yet upgraded to Reaper 5, but my experience is that it has much the same problem;
  • Usine Hollyhock 2 – I’m really waiting for the 64-bit update at this stage.  This could well provide the happy medium between algorithmic composition and improvisation I’m looking for, but it’s still somewhat flaky and unreliable, and being 32-bit it won’t run Falcon or Reaktor 6 😦
  • Renoise 3 – curiously, this über-tracker has so far yielded the best results.  Very good with QWERTY input of notes, very low-latency hardware returns, and impressive FX modulation which should be readily transferable to MIDI CC.  I’m not sure if it does Euclidean patterns though.

My issues with the above lead me back towards a program called Opus Modus that I’ve eyeballed a few times over the last couple of years.  It doesn’t appear to have the depth of something like Extempore, but looks much more accessible with great tutorials and a curious Scrivener vibe.  Could this be my new amanuensis?

After watching the following video, I’m pretty convinced I can effectively explore using Opus Modus to create interesting (and possibly Euclidean) patterns with CC modulation simultaneously for all synths.
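As a reference point for what I mean by Euclidean patterns: the idea is just spreading a number of hits as evenly as possible across a number of steps.  Here is a rough Python sketch of one common way to generate them (an accumulator version of the Bjorklund idea, which yields a rotation of the classic patterns); it’s purely illustrative and has nothing to do with Opus Modus itself.

```python
def euclidean(pulses, steps):
    """Spread `pulses` hits as evenly as possible across `steps` slots.

    Accumulator method: produces a rotation of the classic Bjorklund pattern.
    """
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)   # hit
        else:
            pattern.append(0)   # rest
    return pattern

# A couple of familiar examples:
print(euclidean(3, 8))    # [0, 0, 1, 0, 0, 1, 0, 1] - a rotated tresillo
print(euclidean(5, 16))   # five hits spread evenly across a 16-step bar
```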

 

UPDATE:

Just a reality check.  It seems that I really need to spend more time with Logic – not just because I “should”.  My partner Stacey has started using GarageBand on the iPad to record and arrange audio poems.  In helping her out with basic pre-mastering techniques before uploading to SoundCloud, I’ve been thinking about my resistance to Logic and how it might be metaphorical and psychological – as in a reaction against logic.

My major issue is that ALL the material I’m currently making focuses on polishing material generated with Ableton Push in real time.  Surely the solution is to balance that out with some non-real-time work, and Logic has the largest set of tools for that.

I also found the Edgar Rothermich tutorials I bought in an attempt to learn LPX a year or so ago, so I’m giving it another try for my offline composition.  Tools like Opus Modus, Sundog Scale Studio, Metasynth etc. will extend this work in various ways.

Limitations are great when they can be effectively worked within but sometimes they feel like a prison.