Using (and abusing) Renoise as a demosequencer
Soledad Penadés
ASSEMBLY 2010
A bit of background about me
- Discovered trackers in 1995
- Have used Scream Tracker, Fast Tracker, Impulse Tracker, Buzz, Reason, Renoise...
- Programmed first demo in 2003
- Suffered the pains of synchronizing. Several times.
- Don't want to go through that again.
- I use Linux - I value multiplatform support and flexibility
Hold on!
What is Renoise?
What is a "demo" and why would I want to sequence it?
Renoise is ...
- ... a (modern) tracker
- ... a Digital Audio Workstation (DAW)
- ... multiplatform
And also has an amazing community
- engaged users
- involved, responsive developers
But more importantly...
- it's able to save songs as XML files!
Demos
... AKA demonstrations
- Basically, a technical showcase
- ... or an amazing mix of audio, graphics and coding skills
- ... but only if done “right”!
So... how do we synchronise stuff?
Tracked modules
Using certain tracker effects to synch demo parts (e.g. S2X in IT modules)
Although it's not very precise
Using pattern, row and order numbers
Way more precise!
But tracked modules sound awful
The musician might prefer to use VSTi, huge samples and whatnot
- they deliver an MP3 file
- the programmer cannot easily sync anymore
- drama ensues :'-(
Poor man's synchronisation
The mighty FFT trick
- Unfortunately it works for bassdrums and little else
- You shouldn't synch to bassdrum only - it's tiring for the viewer (re: flashing)
- Predictable: it gets boring quite easily (unless you're very good)
Manual synchronisation
Identify every interesting moment in the song and write down its timestamp
- Boring, error prone, tedious...
- You either write a tool for it or try to make do with Audacity's labels or similar (not very reliable)
- If the musician changes the tempo or edits anything else, everything needs to be redone
Ultimately, it is silly: you're transcribing the song again, only in a worse format
Clever synchronisation
Automatically extract timestamps from the song
And use only the values you want, when you want them
Flexibility == Happiness for all
- Musicians can change tempo and add changes at will.
- Programmers/Designers can focus on the demo.
The Renoise XRNS file format
It's a ZIP archive containing a Song.xml file with the actual song data
Lots of advantages
- ASCII! -- Multiplatform (bye bye to big/little endian issues)
- Kind of human readable/debuggable
- Lots of XML parsing tools
- Lots of specialised tools for dealing with XRNS files
Disadvantage
- ASCII! Not optimised size-wise, like oldschool trackers (although there are work-arounds)
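Since an XRNS file is just a ZIP archive with a Song.xml inside, any standard ZIP library gets you to the data. A minimal sketch in Python (the file name is a placeholder):

```python
import xml.etree.ElementTree as ET
import zipfile

def load_song_xml(xrns_path):
    """Open an XRNS archive and parse the Song.xml it contains."""
    with zipfile.ZipFile(xrns_path) as archive:
        with archive.open("Song.xml") as song:
            return ET.parse(song).getroot()

# root = load_song_xml("mysong.xrns")  # hypothetical file name
```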
XRNS dissected (I): GlobalSongData
(For brevity's sake, only relevant stuff is shown)
<GlobalSongData>
<BeatsPerMin>135</BeatsPerMin>
<LinesPerBeat>4</LinesPerBeat>
<TicksPerLine>12</TicksPerLine>
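These three values are everything the timing formulas later on need. Reading them with Python's ElementTree could look like this (only the structure shown above is assumed):

```python
import xml.etree.ElementTree as ET

def read_global_song_data(root):
    """Return (BPM, lines per beat, ticks per line) from a parsed Song.xml."""
    g = root.find(".//GlobalSongData")
    return (int(g.findtext("BeatsPerMin")),
            int(g.findtext("LinesPerBeat")),
            int(g.findtext("TicksPerLine")))

root = ET.fromstring(
    "<RenoiseSong><GlobalSongData>"
    "<BeatsPerMin>135</BeatsPerMin>"
    "<LinesPerBeat>4</LinesPerBeat>"
    "<TicksPerLine>12</TicksPerLine>"
    "</GlobalSongData></RenoiseSong>")
print(read_global_song_data(root))  # (135, 4, 12)
```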
XRNS dissected (II): Instruments
<Instruments>
<Instrument>
<PluginProperties>
<PluginDevice>
<PluginIdentifier>YOUR_VSTI_ID</PluginIdentifier>
<Parameters>
<Parameter>
<Value>FLOATING POINT NUMBER (0..1)</Value>
</Parameter>
(more parameters)
</Parameters>
</PluginDevice>
</PluginProperties>
XRNS dissected (III): Tracks
<Tracks>
<SequencerTrack type="SequencerTrack">
<FilterDevices>
<Devices>
<InstrumentAutomationDevice>
<LinkedInstrument>INSTRUMENT NUMBER</LinkedInstrument>
<ParameterNumber0>PARAMETER NUMBER (IN THE VSTI)</ParameterNumber0>
<ParameterValue0>
<Value>FLOATING POINT NUMBER (0..1)</Value>
</ParameterValue0>
.
.
.
<ParameterNumber13>PARAMETER NUMBER (IN THE VSTI)</ParameterNumber13>
<ParameterValue13>
<Value>FLOATING POINT NUMBER (0..1)</Value>
</ParameterValue13>
</InstrumentAutomationDevice>
</Devices>
</FilterDevices>
XRNS dissected (IV):
Patterns 1 (Envelopes)
<Patterns>
<Pattern>
<NumberOfLines>INTEGER NUMBER</NumberOfLines>
<Tracks>
<PatternTrack type="PatternTrack">
<Automations>
<Envelopes>
<Envelope>
<DeviceIndex>AUTOMATION DEVICE INDEX</DeviceIndex>
<ParameterIndex>PARAMETER INDEX IN AUTOMATION DEVICE</ParameterIndex>
<Envelope>
<PlayMode>Points|Linear|Cubic</PlayMode>
<Length>Envelope length in lines</Length>
<Points>
<Point>ROW,FLOATING VALUE (0..1)</Point>
...
<Point>ROW,FLOATING VALUE (0..1)</Point>
</Points>
</Envelope>
</Envelope>
</Envelopes>
</Automations>
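Each <Point> packs a row number and a value into a single comma-separated string, so turning an envelope into usable data is one split away (a sketch assuming exactly the ROW,VALUE format shown above):

```python
def parse_envelope_points(point_texts):
    """Turn 'ROW,VALUE' strings into (row, value) pairs."""
    points = []
    for text in point_texts:
        row, value = text.split(",")
        points.append((int(row), float(value)))
    return points

print(parse_envelope_points(["0,0.0", "16,0.5", "32,1.0"]))
# [(0, 0.0), (16, 0.5), (32, 1.0)]
```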
XRNS dissected (IV):
Patterns 2 (Lines)
<Patterns>
<Pattern>
<NumberOfLines>INTEGER NUMBER</NumberOfLines>
<Tracks>
<PatternTrack type="PatternTrack">
<Lines>
<Line index="ROW NUMBER">
<NoteColumns>
<NoteColumn>
<Note>NOTENAME-OCTAVE, i.e. C-6, A#2...</Note>
<Instrument>INSTRUMENT INDEX</Instrument>
<Volume>VOLUME (HEX)</Volume>
</NoteColumn>
</NoteColumns>
<EffectColumns>
<EffectColumn>
<Value>VALUE (HEX)</Value>
<Number>EFFECT NUMBER (HEX)</Number>
</EffectColumn>
</EffectColumns>
</Line>
</Lines>
XRNS dissected (V): Order list
<PatternSequence>
<SequenceEntries>
<SequenceEntry>
<Pattern>PATTERN NUMBER</Pattern>
</SequenceEntry>
<SequenceEntry>
<Pattern>PATTERN NUMBER</Pattern>
</SequenceEntry>
...
Feeding XRNS data into our demo/intro
Two cases
- Musician writes the song with Renoise, huge samples, CPU-eating VSTis
- Musician writes the song using our own software synth
Case 1: Musician writes the song with Renoise, huge samples, CPU-eating VSTis
We get an MP3 render of the song PLUS the original XRNS file
- When the demo is loaded, we also load and parse the XRNS file
- "Pre-play" the song and find events' timestamps
Case 2: Musician writes the song using our own software synth
We get an XRNS file only
- When the demo is loaded, we also load and parse the XRNS file
- Run the demo as usual, feeding the events (notes, envelopes,...) to our software synth as they happen, and using them for controlling the visuals too
- Suitable for intros and size limited scenarios
The player
Start with the simplest one, add features later
- The most basic stuff: playing patterns in the right order, triggering notes with the right instruments
- Then envelopes
- And finally the effects (sample-based effects can be ignored unless your VSTis support them)
The player: timing
It's just two formulas
- secondsPerRow = 60.0f / (linesPerBeat * BPM)
- secondsPerTick = secondsPerRow / ticksPerLine
We already know linesPerBeat, BPM and ticksPerLine.
If you're not implementing tick-based effects, you can ignore the second formula!
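As a quick sanity check, here are both formulas in Python, using the GlobalSongData values from earlier (BPM 135, 4 lines per beat, 12 ticks per line):

```python
def timing(bpm, lines_per_beat, ticks_per_line):
    """Seconds per row and per tick, straight from the two formulas above."""
    seconds_per_row = 60.0 / (lines_per_beat * bpm)
    seconds_per_tick = seconds_per_row / ticks_per_line
    return seconds_per_row, seconds_per_tick

spr, spt = timing(135, 4, 12)  # 60 / (4 * 135) = ~0.111 s per row
```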
The player: events list
t = 0
for each pattern in the order list:
    add event == pattern change
    for each row in the pattern:
        add event == row change
        for each track in the pattern:
            if cell not empty:
                add event (note on, note off, volume change...)
        t += secondsPerRow
Warning: if you implement BPM change effects, you have to update secondsPerRow and secondsPerTick!
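The pseudocode above might look like this in Python (the song layout is a stand-in: a list of patterns, each a list of rows, each row a list of per-track cells, with None marking an empty cell):

```python
def build_event_list(order_list, patterns, seconds_per_row):
    """Walk the order list and emit timestamped events, as in the pseudocode."""
    events = []
    t = 0.0
    for pattern_index in order_list:
        events.append((t, "pattern_change", pattern_index))
        for row_index, row in enumerate(patterns[pattern_index]):
            events.append((t, "row_change", row_index))
            for track_index, cell in enumerate(row):
                if cell is not None:
                    events.append((t, "note", track_index, cell))
            t += seconds_per_row
    return events

# Two patterns of two rows each; pattern 0 is played twice.
patterns = [[["C-4", None], [None, None]],
            [[None, "E-4"], [None, None]]]
events = build_event_list([0, 1, 0], patterns, 0.5)
```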
The player: reasons for using an event list
(versus dispatching events on the fly)
- More precise (better synch)
- Avoid potential latency issues (soft synth case)
- Easier to integrate into a demo editor's workflow
- Pretty much ZERO computational cost
Using the events list with an MP3
Just play the song as usual, with FMOD/BASS/etc, and...
- Use current song time (as provided by FMOD/BASS) to detect current events in the event list
- The trick is to sort the events by time, ascending, in advance
Using the events list in your custom demoeditor
Draw it in another layer (synch layer?)
- either one-off, "isolated" events,
- or longer events, e.g. for pairs of note on + note off events in the same instrument (harder)
You could then dynamically feed events into your assets/scenes
Some ideas
- Cycle cameras when a certain instrument is playing
- Tilt cameras / scale objects / control particle systems emitters
- Spatialisation: objects appear "panned", as indicated in the note events
- Slow down/accelerate camera movements depending on current BPM
- etc...
Using the events list with your software synth
Each time the callback function is called...
- process envelopes
- process samples until the next event happening in the buffer
- process all events happening in that time (send events to soft synth)
- repeat until the buffer has been filled
Potential improvement (soundwise): process envelopes per sample (especially pitch envelopes - the human ear notices abrupt changes!)
In the main thread, synch to the event list using a different 'currentEvent' pointer
Because the audio thread and the main thread do not work on the same events at the same time, due to latency
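Those callback steps can be sketched like this (Python for readability; a real player would do this in C/C++ inside the audio callback; `synth` is a hypothetical object exposing `render(n)` and `handle(event)`, and events are (time, ...) tuples sorted by time):

```python
def fill_buffer(synth, events, cursor, start_sample, sample_rate, buffer_size):
    """Render buffer_size samples, dispatching events at their sample position."""
    out = []
    while len(out) < buffer_size:
        current = start_sample + len(out)
        # First dispatch everything due at (or before) the current sample.
        while cursor < len(events) and int(events[cursor][0] * sample_rate) <= current:
            synth.handle(events[cursor])
            cursor += 1
        # Then render up to the next event, or to the end of the buffer.
        if cursor < len(events):
            next_sample = int(events[cursor][0] * sample_rate)
            n = min(buffer_size - len(out), next_sample - current)
        else:
            n = buffer_size - len(out)
        out.extend(synth.render(n))
    return out, cursor
```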
Oldschool synch is alive
once again!
In the main loop:
    find out which events we need to process
    (those between the last processed event and currentTime)
    for each event to process:
        if (event.type == ORDER_CHANGE ||
            event.type == PATTERN_CHANGE ||
            event.type == ROW_CHANGE):
            // Change scenes, do whatever

// Another obvious example: a 4x4 flash
if event.type == ROW_CHANGE && event.row % 4 == 0:
    // FLASH!!
    // But remember, we said this was boring! ;)
Some more non-Renoise
specific advice
Listen to the music
- Close your eyes, let your mind wander. With open eyes, what you see conditions what you imagine.
- You could associate a motif in the song with something happening on the screen.
The rhythm
- Learn about patterns, bars, notes... Understand how songs work.
- Pattern changes tend to call for scene changes. Count 1, 2, 3, 4; 1, 2, 3, 4...
- Don't break the musical rhythm by having the visuals go their own way - it's disturbing
- If the song "runs" fast, the visuals should follow accordingly (i.e. quickly)
- Otherwise music and images will be disconnected. You don't want that.
Some more non-Renoise
specific advice (II)
... and experiment with breaking the rules!
Awesome future: Renoise 2.6+
- Native Lua scripting in Renoise
- OSC (Open Sound Control) support
- I'm not selling you this software, I swear!
Thanks for coming!
Any questions?