MACCHINA I

TL;DR: Infinitely random music box. Experience endless algorithmically built textures, right in your browser!

Now for the long-attention-span readers:

I heard about DemoJS a few weeks ago. The idea of a demoparty focused on promoting Javascript-based productions struck me as brilliant, as I have long held the view that 'traditional' native-run demoscene works are becoming less and less accessible, and thus the demoscene --or more exactly that part of the demoscene-- is stagnating, becoming more ghetto-ish than ever.

An example: I haven't downloaded and executed a demo for years. Either they are compiled for systems I do not run (Windows), or they require an obscene amount of processing power that you can only get from the latest and greatest graphics card, or they are just a hassle to run. I end up watching the video captures instead, but they obviously can't compare to the crisp and detailed real thing.

So? Modern browsers + Javascript to the rescue! Specifically: HTML5 + V8 (i.e. Chrome) + Firefox. They are truly pushing the web forward, providing more and more features that were traditionally only available to native code: hardware accelerated graphics and real time audio. Add this to the already great possibilities of Javascript and what do you get?

A great environment for creatives!

Accessible everywhere, from every sort of operating system --as long as you're running a decent browser. No configuring/installing hassles, no DLL issues, no people running a 'party version' when you already released a fix, or twenty. Plus you can really see how many people watched your work, and even make it interactive (or interact with other data sources).

To sum up: browser based productions are great.

As DemoJS was accepting remote contributions, and to prove my point and support that forward-thinking mentality, I wanted to contribute something new and shiny to the party. The problem was that I didn't know what to do! I had some ideas already started, but they were a bit ambitious and I needed more time to develop them properly. On the other hand, I was experimenting with porting my C++ synthesiser, Sorollet, to Javascript with the Web Audio API, so I had this notion that I could maybe do something with it. After all, 4K/64K coders have been doing one-effect intros with a nice sounding synth for years, so why not me too?

But what, what to do?

I was also travelling back to London. There, I visited several museums, and the cavernous Turbine Hall at the Tate Modern was where I finally connected all the ideas that had been floating in my mind! I would build an autonomous machine! It would be playing music, on its own, forever. The type of thing that occupies a wall in an art gallery and no one really knows how it works, but keeps people looking at it, mesmerised.

The player

I had the basis of the synth working (the Voices), but I was missing the player. Voices don't play if they aren't told to do so--that's what the player does: it tells the voices when to sound, what to play and how, and when to stop. The original player was quite tied to Renoise and supported many things such as pattern effects and envelopes, but I didn't want to port all that for this experiment, so I built a simpler player that would just play one pattern after the other, without fancy stuff.

It was quite a refreshing change to build this with Javascript instead of C++. Data structures are way easier with JS. I can focus on getting actual results instead of thinking about managing memory.
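
To give an idea of the sort of structure this enables, here's a minimal sketch of such a pattern player (not Sorollet's actual code; the Voice methods and the pattern/row layout are illustrative):

function Player(voices, patterns, bpm) {
	this.voices = voices;
	this.patterns = patterns;
	this.secondsPerRow = 60 / bpm / 4; // assuming four rows per beat
	this.currentPattern = 0;
	this.currentRow = 0;
	this.nextRowTime = 0;
}

Player.prototype.update = function (now) {
	// Process rows until we catch up with the current time
	while (now >= this.nextRowTime) {
		var pattern = this.patterns[this.currentPattern];
		var row = pattern.rows[this.currentRow];

		// Tell each voice what to do on this row
		for (var i = 0; i < this.voices.length; i++) {
			var cell = row[i];
			if (cell && cell.note !== undefined) {
				this.voices[i].sendNoteOn(cell.note, cell.volume); // illustrative Voice API
			} else if (cell && cell.noteOff) {
				this.voices[i].sendNoteOff();
			}
		}

		// Advance to the next row, wrapping to the next pattern at the end
		this.currentRow++;
		if (this.currentRow >= pattern.rows.length) {
			this.currentRow = 0;
			this.currentPattern = (this.currentPattern + 1) % this.patterns.length;
		}

		this.nextRowTime += this.secondsPerRow;
	}
};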

The composition

Now I had a player, and I was randomly inserting notes into the patterns. Sometimes it was great, but most of the time it sounded really awful! An obvious solution was to use some sort of musical scale as a reference when composing the score. But although I must have been using them intuitively (by ear, I should say) for years, I couldn't name a single scale. Ah, Wikipedia to the rescue. Exploring its links, I ended up gathering a nice collection of scales.

Now my experiment would pick one of the scales at random, and build a series of patterns with the notes in that scale only. This is when things started to sound more pleasant!
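
In code, the idea is as simple as it sounds. A minimal sketch, with a made-up handful of scales expressed as semitone intervals from the root note:

var scales = {
	major: [0, 2, 4, 5, 7, 9, 11],
	minorPentatonic: [0, 3, 5, 7, 10],
	wholeTone: [0, 2, 4, 6, 8, 10]
};

function pickRandom(array) {
	return array[Math.floor(Math.random() * array.length)];
}

// Pick one scale at random for the whole song...
var scale = scales[pickRandom(Object.keys(scales))];

// ...and only ever emit notes that belong to it
function randomNote(rootNote) {
	return rootNote + pickRandom(scale);
}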

Back to the cavern

Though it sounded harmonically right, it didn't truly fill rooms. Something was needed... something like reverb or delay effects! The Web Audio API supports reverb through its convolution engine and ConvolverNode objects.

The only friction point in adding a ConvolverNode is that you need to load the impulse response file (a WAV, OGG or any other supported audio file) via a dynamic request. I have yet to experiment to find out whether it can be read from an <audio> object, base64 data or even generated dynamically (!!). But for the time being, an XMLHttpRequest was OK. It also allows me to avoid loading the file if the client doesn't satisfy the requirements (Web Audio and OGG support, for the impulse response file).
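
Roughly, the wiring looks like this --a sketch rather than the exact code, where audioContext is the experiment's existing AudioContext instance, jsAudioNode is the synth's output node, and the impulse file name is made up:

var convolver = audioContext.createConvolver();

// Fetch the impulse response as raw bytes...
var request = new XMLHttpRequest();
request.open('GET', 'impulses/cavern.ogg', true);
request.responseType = 'arraybuffer';
request.onload = function () {
	// ...decode it into an audio buffer and hand it to the convolver
	audioContext.decodeAudioData(request.response, function (buffer) {
		convolver.buffer = buffer;
	});
};
request.send();

// Route the synth through the convolver and out to the speakers
jsAudioNode.connect(convolver);
convolver.connect(audioContext.destination);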

These impulse responses are standard in the audio industry, so there are sites offering already recorded files for you to download. I experimented with several of them, but in the end I just used one that came with the Chrome audio examples and sounded cavernously deep enough :-)

WHY U KILLIN MAI EARS?

I showed this early version to my sister, who has two cats. She liked it, but her cats seemed quite disturbed by the high frequencies in the output! I thought that maybe it was due to me using 'proper' monitors while she was using standard computer speakers, but when I tried listening with standard headphones she turned out to be right: my ears hurt with all those treble sounds! Bass was almost non-existent, obscured by the piercing sounds that dominated the output. Ouch!

Since the Web Audio API supports biquad filters (through BiquadFilterNode), I thought that could be the answer. I would just filter the highest frequencies out and that would be it, right?

No, it wasn't that easy. I tried several combinations: high pass filters on their own, notch filters, band pass filters combined with high pass; I even implemented some sliders to test and adjust the filtering frequency and parameters, but nothing sounded right and nice to the ears. As they always say about mixing: it's not about filtering the output, but about having a balanced input. No amount of filtering can correct a bad input; simply discarding frequencies just produced a muffled result.
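
For the record, this is the sort of thing I was trying (a sketch, using the current string syntax for the filter type; early implementations used numeric constants instead):

var filter = audioContext.createBiquadFilter();
filter.type = 'lowpass';        // also tried 'highpass', 'notch', 'bandpass'...
filter.frequency.value = 8000;  // cutoff in Hz, adjusted via the test sliders
filter.Q.value = 1;

// Insert the filter between the synth and the output
jsAudioNode.connect(filter);
filter.connect(audioContext.destination);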

I decided to act more thoughtfully: instead of randomly choosing octaves for the voices, which could end up generating too many high-pitched voices overall, I distributed them evenly. The left-side voices would be lower pitched, and the right-side voices higher pitched. That solved the bleeding-ears issue, and allowed me to get rid of the filter (thus saving some processing power!).
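
A sketch of that distribution --the octave range is illustrative, but the idea is just linear interpolation across the voices:

var LOWEST_OCTAVE = 2;
var HIGHEST_OCTAVE = 6;

for (var i = 0; i < voices.length; i++) {
	// Voice 0 (leftmost) gets the lowest octave, the last voice the highest
	var t = i / (voices.length - 1);
	voices[i].octave = Math.round(LOWEST_OCTAVE + t * (HIGHEST_OCTAVE - LOWEST_OCTAVE));
}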

Visuals

I also had grand plans for the visuals. They would feature a different one for each pattern, and respond to the music, and... take a lot of time to program! I probably wouldn't have finished them until well after the party was over!

So I stepped back and decided to keep it simple. I built a palette of pleasant combinable colours, and used it throughout the experiment. Moving blocks would represent the pitch and volume of each note being played by the voices. Here I am eating my own dog food and using Tween.js for calculating the blocks' positions ;-)
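
For example, something along these lines (a sketch; the block object and dimensions are made up, but TWEEN.Tween/TWEEN.update is the actual Tween.js API):

var blockWidth = 32, blockHeight = 32;
var block = { x: 0, y: 0 };

// When a voice plays a note, tween its block towards the new position
function onNote(column, pitchRow) {
	new TWEEN.Tween(block)
		.to({ x: column * blockWidth, y: pitchRow * blockHeight }, 200)
		.start();
}

function animate() {
	requestAnimationFrame(animate);
	TWEEN.update(); // advances every active tween
	// ...draw the block at (block.x, block.y) here...
}
animate();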

I also showed the current pattern in a grid, as a sort of modern-looking, slightly abstract score (if you don't know what it is). Or as a way of looking into the machine's brain, if you prefer. And while I was at it, I also decided to show the current pattern number.

But how did I synchronise the events in the player (notes, pattern changes...) with the visual side? Was I constantly polling for changes in requestAnimationFrame? No, that would have been bad! I used the excellently simple EventTarget.js, which allows you to dispatch and listen to DOM-style events from custom objects. That way the player dispatches custom events such as orderchanged, and the only thing left is to listen to them and trigger changes on the visualisation.
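
Schematically, and assuming EventTarget.js is mixed into the Player constructor (the event payload and the visualisation hook are illustrative):

function Player() {
	EventTarget.call(this); // gain addEventListener/dispatchEvent

	var scope = this;
	var currentOrder = 0;

	this.nextOrder = function () {
		currentOrder++;
		scope.dispatchEvent({ type: 'orderchanged', order: currentOrder });
	};
}

var player = new Player();

// The visualisation just listens and reacts
player.addEventListener('orderchanged', function (event) {
	updatePatternGrid(event.order); // hypothetical visualisation update
});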

Eight

At this point I noticed I could easily stare at the machine for minutes. I would want to debug something, but instead of focusing on that I would just sit and look at the screen, listening to the music. Sometimes I even left the machine playing while I wasn't in front of the computer --it made for nice background music!

I thought it would be great if it could evolve itself, and change the song as it played, so it wouldn't be playing the same loop the entire day. But when should it stop playing the current song and create a new one?

The answer was somewhat evident: after eight repeats, since the machine has eight voices and the song is composed of eight patterns.

This regeneration uses the same scale as before, so as not to break the atmosphere and to keep the transition smooth. If you click on the 'randomise' link, it might choose a different scale instead.
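
The rule itself fits in a few lines (a sketch; the songrepeated event and generatePatterns function are hypothetical names):

var REPEATS_BEFORE_REGENERATING = 8; // eight voices, eight patterns

var repeats = 0;
player.addEventListener('songrepeated', function () {
	repeats++;
	if (repeats >= REPEATS_BEFORE_REGENERATING) {
		repeats = 0;
		player.patterns = generatePatterns(scale); // keep the current scale
	}
});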

Talk about bad timing

After polishing a couple of things here and there, I compressed everything and sent it to the DemoJS organisers, and awaited the night of the compo...

After the compo, no one talked about my entry; no one mentioned it on Twitter whatsoever. Some people didn't even recall having seen it! What was wrong with people!?! Ah whatever, I would just post the online link, I thought. I'll test it once more and...

... it didn't work!! No sound at all!

The latest release of Chrome (a couple of days ago) had broken the experiment! GRRRRRRR!

But thankfully I found the solution by comparing my code with the code from the Web Audio examples. I had to change


jsAudioNode = audioContext.createJavaScriptNode(4096, 0, 2);

into this:


jsAudioNode = audioContext.createJavaScriptNode(4096);

The parameters 0 and 2 were the number of input and output channels, which the spec said were optional. They are not used in the latest examples, and I don't know why.

Anyway, that fixed it, and now you can all experience the sound of MACCHINA I!

Wait, what about that name?

Ah yes! When in London, I met with Francesca (whom you might remember from ruby in the pub 4, a couple of years ago). We had tea and pizza and talked about lots of things, and I don't know why, but we ended up talking about cars, and she told me a car is 'macchina' in Italian --but it is also a machine on its own. It sounded similar to our Spanish 'máquina', but as Spanish is way more familiar to me for obvious reasons, 'macchina' seemed more exotic and inspiring.

And as I'd like to build more macchinas, this is number one. Watch out for the next ones!