And now... what?

If you paid close attention to every single word I placed on the latest demo source releases, you may have noticed that there was something like a countdown at the end of each one of them. Some people were placing high hopes on that, especially since the biggest party in Spain (Euskal) was getting closer. They thought I was preparing a demo, but they were slightly misled :-)

Consider this the ZERO plus the BANG in the countdown. The key question here is: now that I have released all these freak lines of code, what am I going to do?

Well, the answer is way fuzzier than the question. While I was making sure that the demos worked on my system, I realised that I was getting increasingly annoyed by the usual demo development loop:

  1. change something
  2. compile
  3. execute
  4. wait for loading
  5. verify results are satisfactory; if not, go back to step 1
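The loop above is exactly what an interactive environment tries to collapse. A minimal sketch of the idea (my own toy illustration, not code from any of the demos, with a hypothetical `render()` entry point assumed): watch an effect script's modification time and reload it the moment it changes on disk, so steps 1 to 5 shrink into "save the file":

```python
import importlib.util
import os
import time

def load_effect(path):
    """Load (or reload) an effect module from a source file."""
    spec = importlib.util.spec_from_file_location("effect", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

def watch(path, on_reload, poll=0.2):
    """Poll the file's mtime; reload the effect on every change."""
    last = os.path.getmtime(path)
    on_reload(load_effect(path))
    while True:
        time.sleep(poll)
        mtime = os.path.getmtime(path)
        if mtime != last:
            last = mtime
            on_reload(load_effect(path))  # the whole loop becomes one save
```

Polling mtimes is the crudest possible approach, of course; a real tool would hook the OS's file-change notifications, but the principle is the same.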

We are now in the instant feedback age; we as users and even as developers expect everything to react pretty much in real time. Having to wait for all those things to happen is just cumbersome; delays are unacceptable. So, wish number one for any future developments is that I would like to avoid said development loop, which translates into: I would like to have some kind of visual/interactive environment.

Another thing that really frustrated me was the fact that I have a nice 23" screen, able to show up to 1920x1200 pixels, but pretty much all my demos were prepared for a resolution of 640x480; 1024x768 in the very best case. So trying to watch them full screen here was annoying at best, because when the textures were scaled up, they looked really ugly, even with interpolation and all that. This could be solved with a couple of very cool features: procedural texture generation, and using vector graphics for everything else. That way you never lose detail.
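To see why procedural textures never lose detail, here is a toy sketch (my own illustration, nothing from the demos): the texture is a pure function of continuous (u, v) coordinates, so you sample it at whatever resolution the screen has instead of scaling up a stored bitmap:

```python
import math

def rings_and_checker(u, v):
    """Procedural texture: concentric rings mixed with a checkerboard.
    u and v live in [0, 1), independent of the output resolution."""
    rings = 0.5 + 0.5 * math.sin(40.0 * math.hypot(u - 0.5, v - 0.5))
    checker = float((int(u * 8) + int(v * 8)) % 2)
    return 0.7 * rings + 0.3 * checker  # grey value in [0, 1]

def render(width, height):
    """Evaluate the same function at any resolution: 640x480 and
    1920x1200 both get pixel-exact detail, no interpolation needed."""
    return [[rings_and_checker((x + 0.5) / width, (y + 0.5) / height)
             for x in range(width)]
            for y in range(height)]
```

Vector graphics give you the same property for everything that is not a texture: shapes described by equations rather than pixels rasterize cleanly at any size.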

As a musician first, programmer second, my other main concern is the music, the way it is generally used in demos, and how inefficient everything is. It usually works like this:

  • Musician creates a song, saves it in MP3/OGG format
  • Coder takes the song, places it in the demo, looks for 'synch' points, codes everything around those synch points

Usually the 'synch' points end up being an approximate duplicate of notes in the song. So effectively what the coder is doing is redoing the song, which I think is silly, besides being tedious and boring, and a direct attack on the DRY principle that I strongly advocate.

Why not use the song's data directly instead of trying to replicate it? Of course, this means I would be using synthesized music as well, but I do not think that would be a limitation; rather a feature. You would be able to add new events, change the BPM, etc... without having to worry about keeping the classical 'events list' up to date in the code or demo metadata. And the visuals could be far more reactive than they are nowadays, even just by using FFT information.
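To make the idea concrete, a sketch under assumptions of my own (a hypothetical note-event layout, the kind a tracker or a MIDI file would give you): the visuals query the song itself for its notes, so there is no duplicated sync list to maintain, and continuous parameters can be derived from the note data too:

```python
import math

# Each note event is (time_seconds, channel, pitch, velocity).
# This layout is assumed for illustration; a real synth/tracker
# would expose something equivalent.
SONG = [
    (0.0, "kick", 36, 127),
    (0.5, "snare", 38, 100),
    (1.0, "kick", 36, 127),
    (1.5, "snare", 38, 100),
]

def events_between(song, t0, t1, channel=None):
    """The visuals ask the song for its own notes in a time window:
    no hand-maintained 'synch points' list to keep up to date (DRY)."""
    return [e for e in song
            if t0 <= e[0] < t1 and (channel is None or e[1] == channel)]

def flash_intensity(song, t, channel, decay=4.0):
    """Continuous parameter for an effect: the last note's velocity,
    decaying exponentially since the moment it was hit."""
    hits = [e for e in song if e[1] == channel and e[0] <= t]
    if not hits:
        return 0.0
    t_hit, _, _, velocity = hits[-1]
    return (velocity / 127.0) * math.exp(-decay * (t - t_hit))
```

Change the BPM or add a note, and the visuals follow automatically; nothing to re-synch by hand. An FFT of the synthesized output would add the spectrum-driven side of the reactivity on top of this.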

And while we are at it, why should we stop at the non-interactive line? Imagine all the above features were already implemented. Would it not be a pity not to be able to experiment with new angles and values? Add new stuff on the fly, new events, new effects, new scenes... something like a demo jam session? It could get deliciously crazy, since your creativity would not be blocked by the demo development loop, and would get instant feedback instead, inspiring you to experiment with more ideas (it could also be mentally exhausting, but that's another story).

I wonder if the demoscene is denying itself some expressiveness by limiting itself to non-interactive pieces. As someone said once, you can't really convince anyone of the greatness of the demoscene/real time way if you don't allow them to change the camera angle. Up until that point, they might easily suspect you are simply streaming HQ video from your server.

I'm already imagining it: some kind of visual, real time audio sequencer and synthesizer. It is still far away in the future, but it's what I would like to do. I think that would be my masterpiece!

And before you diligently inform me: I do already know that there are several projects, like vvvv or Pure Data, which aim to do exactly (or pretty much) the same thing I am thinking of.

But arguing that is just denying the pleasure of learning, of investigating and researching how things work, and then making them happen, or making them even better than the existing alternatives. I do not want to allow consumerism to permeate my mind: I strongly oppose the belief that we should just go and buy something that already exists, then when we get bored of it, go buy something shinier. That's just not me :-)

So after all these lucubrations I think you may have already deduced that there won't be a complete demo/intro/whatever in the near future, since I'm going to take things very easy. There might be proofs of concept, samples of things that I implement, etc., but I do not want to force myself to release stuff just because there is a party, especially after painfully realising that pretty much all my demos are in an atrocious unfinished state, due to the fact that they were hastily hacked together in order to meet a party deadline. I'm going to embrace John Carmack's philosophy: things will be released when they are done!