Artificial Artificial Intelligence OR There's Something about Randomness (acting on things with inherent structure)
(Bear with me on this - at the end I'll be giving you two MAX/MSP Runtime patches that will work with the special, free, mono edition of MegaDelayMass - MegaDelayMassCM that we made for Computer Music UK magazine - I’ll even include a copy of MegaDelayMassCM in the download bundle)
We've been thinking about randomness a lot around the office. (Okay - honestly, I don't think Edmund thinks about randomness that much - unless he considers the odds of a successful Windows boot as he waits for the login screen - but I do, and we talk about it from time to time, so I'm sticking with this thought.) While it may be looked upon as a way of unintelligently parroting the results of unbelievably complex processes, giving a starting point to something, or just creating a mess, it's everywhere. Randomness is an essential part of so many different fields. It's as important to cryptography as it is to the lottery, to electronic music (don't want all those hi-hats to sound the same? randomize them!), and to the shuffle function on your iPod. It's interesting to note that something so ubiquitous isn't that easy to do with a computer. That is to say, computer-generated random numbers are really "pseudo-random" numbers. They mimic randomness. Why, just last week scientists figured out a way to make pseudo-random numbers twenty times more random! Fancy that!
In our own case, here at Intelligent Devices world HQ, we allow randomness to do things like affect the uniformity of chunks of processed or unprocessed sound (look at Slip-N-Slide), the placement of delay "taps" over time (see MegaDelayMass) or even LFO waveshapes (notice The Marshall Time Modulator), to name a few.
Having thought about and messed with randomness and its applications in music and art for a long time now, I'm struck by how, in a weird way, what happens (relative to the randomness) is almost less important than the context in which it is perceived to have happened and how we react to that.
I've come to this conclusion because I feel, rather obviously, that as people we are constantly trying to relate the new to what is already known. We catalog EVERYTHING. When we are able to relate something to something else, we feel that we have a better understanding of the new thing we've just cataloged in our minds. The psychological term for this is... I don't really know, but I'm sure there is one.
Anyway, as an example: while I was in Amsterdam a few years ago, working on my wearable interface, the manDrum, I had an idea.
First, I made lists of words in columns (written here in rows to save space):
I want, I need, I feel, I think, I am, I am not, I taste, I touch, I sense, I forget, I remember, I don't need, I don't want, I eat
love, hate, bread, water, soap, a cigarette, a tongue piercing, a lounge chair, a microscopic speck, an antelope, a watermelon, a musk ox, a beer, tired, happy, elated, desperate, recycled, plastic, paper, entropy, decay
because of, next to, beside, around, beneath, over, under, until, between, on top of
death, friendship, amnesty, taxes, terror, fear, kindness, invisibility, emotion, ambiguity, seduction, intrusion, exploitation, derision
Then I recorded Rene Wassenberg, a friend from STEIM in Amsterdam who sounds a little like a Dutch Sean Connery, reading those words three different ways:
1. in a normal voice
2. in a whisper up close
3. from across the room shouting
After editing the recording into 165 samples (individual words), I wrote a program in MAX/MSP that randomly put those words together. Additionally, it can loop them, play them forwards and backwards, and change the speed and panning. In this example you will hear only the words put together and panned randomly and automatically. Originally, I was going to have the computer choose a word from each column and put them together, but I decided I didn't really feel like it, so instead, each word/sample has a number, from 0 to 164, and the computer assembles the words as strings of random numbers which correspond to the different samples. Easy, no?
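If you're curious what that assembly logic looks like outside of MAX/MSP, here's a minimal sketch in Python. The phrase-length range of 2-6 words is my own assumption - the original just strings random numbers together and plays the corresponding samples:

```python
import random

NUM_SAMPLES = 165  # one numbered sample per recorded word

def random_phrase(min_words=2, max_words=6):
    """Assemble a 'phrase' as a list of random sample numbers.

    Each number corresponds to one word/sample. No column logic here -
    just uniform picks across the whole pool, as described above.
    """
    length = random.randint(min_words, max_words)
    return [random.randrange(NUM_SAMPLES) for _ in range(length)]

# Each phrase would then be played back sample-by-sample,
# with random panning applied per word.
print(random_phrase())
```

In the actual patch, of course, the looping, reversal, speed, and panning all happen downstream of this number-picking step.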
Here's a minute-long recording from one run of the program (the recording referenced below), which I used to generate the spoken text in my piece "notmares" (more on that later). Transcribed, it reads like this:
(Parentheses = whisper, Bold = shout)
(I am desperate, friendship), a musk ox until hate around (to be decay)
(next to) seduction (on top of an antelope)
(between), next to, (emotion): A beer
(beside) entropy: Derision
I am decay - a musk ox
on top of - I want (elated for because of) decay
(seduction) - kindness
soap, taxes, water, friendship (to be) until tired
a beer between (to be)
a lounge chair, exploitation, (terror, a cigarette)
a musk ox, (derision I don’t need because of) next to derision
(kindness) taxes I don’t want
There are a couple of interesting things going on here: First of all, because this deals with words, it adds a whole layer of impossible-to-ignore meaning associated with each and every one. Then, beyond that, hearing this, it's interesting - the result is not, to me, some weird and incoherent thing as one might expect. It actually has an implied underlying cohesion: Because the same person says all the words in three different ways, and the editing is good enough that you don't think about how each word or two is a different recording, the words have the illusion of being a continuous thought, with at least three distinct levels of urgency. Finally, there's also the acoustic of the room - you HEAR the lovely wood of the STEIM studio, and that environment alone adds an emotional component. This is exactly where we get fully into the idea of what I call AAI: Artificial Artificial Intelligence. I call it that because AI seems to frequently involve tremendous amounts of raw computing power, examining every possible outcome and weighing them all against one another. This is the so-called brute-force approach.
See, I have an unfair bias against the brute-force approach. While it is irrefutably true that brute-force computing can do intelligent things (for a fascinating account of this, see this article by World Chess Champion Garry Kasparov), I just hate the approach. It lacks finesse. There is nothing elegant about it, and that bugs me. To be fair, what I'm advocating in this post is sort of the antithesis of brute force. I believe that often, things that seem miraculous are the result of very clever trickery.
...A middle ground; a third way.
See, IMHO almost anything by itself is less interesting than a lot of a few things. As an example, let's talk about generating random piano music.
Immediately we'd probably talk about pitches. Since the range of MIDI note numbers is expressed as 0-127 (128 values), this is easily understood by the computer w/ no extra intermediation. We can pretty easily generate random numbers between 0 and 127.
Next, we could talk about rhythm and duration. Now, I could very simply write a MAX patch that generates pitches in a fixed rhythm of a fixed duration. And, as long as we stick with, let's say, 16th notes @ 1/4 note = 120bpm, we have a steady stream of 480 pitches per minute; it *sounds* random, and very "computer-like".
The moment we begin adding random spaces (aka rests), the thing is infinitely more interesting, though no "smarter". If we start randomizing durations, that's another intelligent sounding layer. Mind you, we haven't even gotten into rules about pitch constraints or any sort of rules really. If we randomize pitch, duration, rhythm, density (number of events over time) and dynamic (volume), we have basically arrived at ground John Cage trailblazed and rendered trivial for us a long time ago. Using the sound of the piano, again a sound with a history, cannot help but add gravitas. The only way it could not is if the listener had never heard a piano before. Which reminds me of the story of a man from a very remote village listening to orchestral music over headphones for the first time. I don’t know what the piece was, but he asked “why are they all whispering?” For whatever reason, to that guy, the instruments sounded like whispering. I would guess it’s because he had to relate this new thing to something, and that's what he had. Which reminds us of the beginning of this article and leads toward the end.
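To make that escalation concrete, here's a rough Python sketch of the fully randomized version - pitch, rests, duration, and dynamic all randomized over a steady 16th-note grid. The particular duration choices and velocity range are my assumptions, not anything from an actual patch:

```python
import random

SIXTEENTH = 60.0 / 120 / 4  # seconds per 16th note at quarter = 120 bpm

def random_events(n=32, rest_prob=0.3):
    """Generate (start_time, pitch, duration, velocity) events.

    Every 16th-note slot either rests (with probability rest_prob)
    or gets a note with random pitch, duration, and dynamic. Density
    falls out of rest_prob.
    """
    events, t = [], 0.0
    for _ in range(n):
        if random.random() >= rest_prob:                   # random rests
            pitch = random.randrange(128)                  # full MIDI range 0-127
            dur = SIXTEENTH * random.choice([1, 2, 3, 4])  # random duration
            vel = random.randint(20, 127)                  # random dynamic
            events.append((t, pitch, dur, vel))
        t += SIXTEENTH                                     # steady underlying grid
    return events
```

Drop rest_prob to zero and fix dur and vel, and you're back to the "computer-like" steady stream; each randomized parameter you re-enable adds one of those "intelligent sounding" layers.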
Welcome to the last stop on this trip. Mind the gap. In 2004 I wrote a MAX patch that allowed me to randomly jump through the entirety of a concept album I had made about "random access". It ended with 30 seconds of randomly jumping through the whole 53 minute piece, with the random chunks of audio getting smaller and smaller and the net effect being one of acceleration to disintegration. I was struck by two things :
1. It sounded a lot like spinning a radio dial (which, oddly, I suppose generations of people will never hear, since digital tuning blanks the frequency shmear we used to be able to hear w/ an analog dial... sad)
2. Because the listener had just heard the piece, a 30" trip thru it plays on their memory, creates a sort of instant nostalgia, and makes those 30" seem super important and organized, which of course, they were not, as it was a random jaunt thru organized sound.
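Here's a sketch of that shrinking-chunk idea in Python rather than MAX. The starting and ending chunk lengths, and the linear shrink, are my guesses at the shape of the effect, not the actual 2004 patch:

```python
import random

def disintegration_schedule(piece_len=53 * 60, total=30.0,
                            start_chunk=2.0, end_chunk=0.05):
    """Random jump points through a piece, chunks shrinking over `total` secs.

    Chunk length interpolates linearly from start_chunk down toward
    end_chunk, so the jumps accelerate - the "acceleration to
    disintegration" effect.
    """
    schedule, elapsed = [], 0.0
    while elapsed < total:
        frac = elapsed / total
        chunk = start_chunk + (end_chunk - start_chunk) * frac
        pos = random.uniform(0, piece_len - chunk)  # random spot in the piece
        schedule.append((pos, chunk))
        elapsed += chunk
    return schedule
```

Each (position, length) pair is a chunk of the 53-minute piece to play back-to-back; by the end, the chunks are tiny and the jumping is frantic - the radio-dial effect.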
And so, that's where we finally end up: Applying random processes to organized sound makes the randomness seem planned. In the case of performance, check out this moment in the manDrum demo where I'm triggering random bass notes when I hit the bass drum trigger.
Because the randomized notes are tied directly to the concept of the bass drum in a drum kit AND because they're played by a person, me, they don't seem so much random as, perhaps, effortlessly complex?
This same principle works with conversation.
While working on my piece "notmares" (for the 40th anniversary of the Peabody Electronic Music Studio) I got to thinking about this process.
Here's the piece:
Because "notmares" deals with odd, non-threatening dream imagery, as well as touching on things I've done since Peabody, I remembered the patch I had written back in 2004 and decided to revisit it. I revised it to work with MegaDelayMass in this way: when a new random chunk of audio is triggered, it also changes the MegaDelayMass patch. I also have explicit control over the duration of chunks, from ASAFAP (as small and fast as possible - "I'm an acronym guy - letters are funner" - thanks, LDOTD... ;) up to 10 seconds in length. And I made it work with my Korg Nano mixer.
With this new patch, I first processed a recorded conversation with my 3-year-old daughter, Sofia (aka FiFi). Then I made a much longer recording with singer/violinist Bonnie Lander - here's an example.
So, let's recap what happens here: a random amount (length) of audio is selected from a random time position within an audio file. At the same time, a random patch on MegaDelayMass is chosen. That is repeated, with me manually changing the upper bound on chunk length when desired. (Of course, it occurs to me now that THAT could be randomized too, along with some way of planning the "big picture". Sigh - it never ends.) Anyway, because this example comes from a conversation and a rehearsal, the randomized audio gives you a serialized view of different, discrete moments in the conversation/rehearsal, coupled with crazy efx that, depending on what ekes through, have a dramatic weight to them. Notably, when you hear "Time for you to go to bed!" in the recording of me and my daughter, it cannot help but have weight and seem like a touchstone.
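That recap is practically pseudocode already, so here's one way the per-trigger step might look in Python. The patch-bank size, minimum chunk length, and function name are all my assumptions for illustration:

```python
import random

NUM_PATCHES = 20  # assumed size of the MegaDelayMass preset bank

def next_event(file_len, max_chunk=10.0, min_chunk=0.05):
    """Pick a random chunk of audio and a random effect patch.

    Returns (start_time, length, patch_number). max_chunk is the
    manually adjustable upper bound on chunk length described above.
    """
    length = random.uniform(min_chunk, max_chunk)
    start = random.uniform(0, max(0.0, file_len - length))
    patch = random.randrange(NUM_PATCHES)
    return start, length, patch
```

Every time this fires, you get a new slice of the conversation/rehearsal and a new effect to hear it through - the whole trick in three random numbers.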
Where does this all leave us? I think here: Randomization is a cool way to very simply get a complex result, most especially when used with something "human and linear". I could be just blowing smoke up my own skirt, but I don't think so. You know, this is computer music. Shouldn't it exploit the strengths afforded by the computer?
The first patch, called whatTheyLeft20CM, plays random chunks of audio from a soundfile, processing them as you hear them. It is available in a CM version (meaning it uses the free version of MegaDelayMass, known as MegaDelayMassCM, which is included in the patch bundle).
The second patch, called GPGCM (Gravitas Phrase Generator CM edition), plays numbered sound files in a random, serialized order and effects them, again using MegaDelayMassCM. Both of these patches will work with the latest MAX/MSP Runtime, so you do NOT need to own MAX to make use of them.
Also included are the 165 samples of Rene Wassenberg's voice to get you started.
Here's a sound example to whet your whistle:
I encourage you to download these and mess around with them. They're pretty cool, and while their "point" may not be immediately obvious, they can definitely lead you in some interesting directions. I like to let them run in the background sometimes and be surprised by what they come up with - of course, there I go again - I'm anthropomorphizing random behaviour. Go figure.
As ever, feel free to contact me with questions or comments: [email protected]
Posted March 5th, 2010
Next post: Welcome Back