Tools for the
21st Century Musician
Let’s start with a proposition:
You all are artists
and very edgy!
So you want to break frontiers;
go where no one has been before:
bring your art
to the web!
Of course,
with <audio>, eh?
We could use it...
<audio src="cool_song.ogg" controls preload="auto"></audio>
This would...
- initiate a network request for loading
- deal with decoding/streaming/buffering
- render audio controls
- display progress indicator, time...
It also exposes methods
- load
- play
- pause
And can dispatch events as well
- loadeddata
- error
- ended
- ...
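Those methods and events can be wired together; a minimal sketch, where `wireUp` is a made-up helper name and `audioEl` would be an `<audio>` element grabbed from the page:

```javascript
// Hypothetical sketch: drive an <audio> element with its
// methods (load/play) and events (loadeddata/ended/error).
function wireUp(audioEl) {
  audioEl.addEventListener('loadeddata', function() {
    audioEl.play(); // start playback once data is ready
  });
  audioEl.addEventListener('ended', function() {
    console.log('song finished');
  });
  audioEl.addEventListener('error', function(e) {
    console.error('could not load', e);
  });
  audioEl.load(); // kick off the network request
}
```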
But it has shortcomings
- no accurate scheduling
- hacks for triggering multiple instances
- they're tied to a DOM element
- output goes straight to the speakers
- no fancy visualisations
- some operating systems display a fullscreen player
This won’t do
for edgy artists
Is it all over?
Do we just give up and
start writing native apps?
NO.
Web Audio
to the rescue!
Web Audio
accurately places sounds in space and time
- fully controllable with a JS API
- interoperable with other Web APIs
- not attached to the DOM
- runs in a separate thread
- supported everywhere except IE
So how does it work?
1) create an audio context
var audioContext = new AudioContext();
“Where everything happens”
2) use instance methods to
create audio nodes
var oscillator = audioContext.createOscillator();
3) connect nodes to
build the audio graph
oscillator.connect(audioContext.destination);
The audio graph
Nodes taxonomy
- audio emission
- audio manipulation
- audio analysis
4) control nodes with their JS API
oscillator.start();
All together
var audioContext = new AudioContext();
var oscillator = audioContext.createOscillator();
oscillator.connect(audioContext.destination);
oscillator.start(audioContext.currentTime);
It’s a matter of time
In order to get smoothly playing sounds,
Web Audio runs in the future
and everything is scheduled ahead of time.
Starting oscillators
// start it 'now'
oscillator.start(audioContext.currentTime);
// start it 3 seconds from 'now'
oscillator.start(audioContext.currentTime + 3);
Stopping oscillators
// stop it 'now'
oscillator.stop(audioContext.currentTime);
// stop it in 3 seconds from 'now'
oscillator.stop(audioContext.currentTime + 3);
Restarting?
// start it now
var now = audioContext.currentTime;
oscillator.start(now);
// stop it 3 seconds later
oscillator.stop(now + 3);
// start it again another 3 seconds after
oscillator.start(now + 6);
That won’t work.
For performance reasons,
some nodes are one-use only.
Nodes that have been stopped
are automatically disposed of
as long as you don't keep references to them
I use
Firefox’s Web Audio Editor
to watch out for memory leaks
Leaked nodes
But you can
write your own wrappers
to work around this.
example:
Oscillator.js
Oscillator.js (1/3)
function Oscillator(context) {
var node = null;
this.start = function(when) {
ensureNodeIsLive();
node.start(when);
};
Oscillator.js (2/3)
this.stop = function(when) {
if(node === null) {
return;
}
node.stop(when);
node = null;
};
Oscillator.js (3/3)
function ensureNodeIsLive() {
if(node === null) {
node = context.createOscillator();
}
}
}
Using it
var ctx = new AudioContext();
var osc = new Oscillator(ctx);
var now = ctx.currentTime;
osc.start(now);
osc.stop(now + 3);
osc.start(now + 6);
// ^^^ works!
Oscillator.js
Look at that sound
Analyser Nodes
provide frequency and time data about their input
example:
an oscilloscope
Oscilloscope (1/2)
var analyser = context.createAnalyser();
var analyserData = new Float32Array(
analyser.frequencyBinCount
);
oscillator.connect(analyser);
analyser.connect(context.destination);
Oscilloscope (2/2)
function animate() {
requestAnimationFrame(animate);
analyser.getFloatTimeDomainData(analyserData);
drawSample(canvas, analyserData);
}
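`drawSample` isn't defined in the slides; a minimal sketch, assuming a 2D canvas and time-domain samples in the range [-1, 1]:

```javascript
// Hypothetical helper used by animate() above: plots one buffer
// of time-domain samples as a waveform on a canvas.
function drawSample(canvas, data) {
  var ctx = canvas.getContext('2d');
  var width = canvas.width;
  var height = canvas.height;
  ctx.clearRect(0, 0, width, height);
  ctx.beginPath();
  for (var i = 0; i < data.length; i++) {
    var x = (i / data.length) * width;
    var y = (0.5 - data[i] * 0.5) * height; // map 1 -> top, -1 -> bottom
    if (i === 0) {
      ctx.moveTo(x, y);
    } else {
      ctx.lineTo(x, y);
    }
  }
  ctx.stroke();
}
```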
Oscilloscope
Node instances are just JavaScript objects
They have properties
and we can set them!
for example,
oscillator.type
- sine
- square
- sawtooth
- triangle
oscillator.type = 'square';
Wave types
But if we try to change the frequency...
oscillator.frequency = 880;
... it doesn’t work!
Why?
oscillator.frequency
is an
AudioParam
It is special
// Access it with
oscillator.frequency.value = 880;
So what is the point
of Audio Params?
Superpowers.
Superpower #1
Scheduling changes
with accurate timing
What not to use
when building curves and fades
- setInterval
- setTimeout
What you expect is not what you get
Instead,
use event lists
There is an event list per parameter
schedule events
using
Audio Param methods
- setValueAtTime
- linearRampToValueAtTime
- exponentialRampToValueAtTime
- setTargetAtTime
- setValueCurveAtTime
Web Audio will interpolate between events
and you get effortlessly smooth transitions.
Go from 440 to 880 Hz in 3 seconds
osc.frequency.setValueAtTime(
440,
audioContext.currentTime
);
osc.frequency.linearRampToValueAtTime(
880,
audioContext.currentTime + 3
);
A practical application of timed events
ADSR envelopes
ADSwhat...?
Attack / Decay / Sustain / Release
Relatively easy to configure and compute
(so very common in subtractive synthesis!)
ADSR part 1: Attack/Decay/Sustain
param.setValueAtTime(0, t);
param.linearRampToValueAtTime(1, t + attackLength);
param.linearRampToValueAtTime(sustainValue,
t + attackLength + decayLength);
ADSR part 2: Release phase
param.linearRampToValueAtTime(0, t + releaseLength);
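The two phases can be combined into one helper; a sketch where `applyADSR` and the length arguments are hypothetical names (`param` is any AudioParam, e.g. a gain node's `gain`):

```javascript
// Sketch: schedule a full ADSR envelope ahead of time. In a real
// instrument the release would start on note-off; here sustainLength
// fixes it in advance for simplicity.
function applyADSR(param, t, attackLength, decayLength,
                   sustainValue, sustainLength, releaseLength) {
  param.setValueAtTime(0, t);                           // silent start
  param.linearRampToValueAtTime(1, t + attackLength);   // attack peak
  var decayEnd = t + attackLength + decayLength;
  param.linearRampToValueAtTime(sustainValue, decayEnd); // decay
  var releaseStart = decayEnd + sustainLength;
  param.setValueAtTime(sustainValue, releaseStart);      // hold sustain
  param.linearRampToValueAtTime(0, releaseStart + releaseLength); // release
}
```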
More natural sounds with an
ADSR envelope
and
Gain Nodes
Gain nodes
multiply their input by their gain value
so they can be used to
- quieten or
- amplify sounds
A gain node in practice
var ctx = new AudioContext();
var osc = ctx.createOscillator();
var gain = ctx.createGain();
osc.connect(gain);
gain.connect(ctx.destination);
gain.gain.setValueAtTime(0.5, ctx.currentTime);
If we schedule changes to the gain parameter, we get
a volume envelope
Of course,
cancelling events
is possible
param.cancelScheduledValues(when);
removes all events from the list, from when onwards.
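A sketch of cancelling in action, with hypothetical names (`param` would be e.g. a gain node's `gain`, and `now` the context's `currentTime`):

```javascript
// Sketch: schedule a 3-second fade-out, then change our mind
// and drop everything scheduled from half a second in.
function fadeOutThenCancel(param, now) {
  param.setValueAtTime(1, now);
  param.linearRampToValueAtTime(0, now + 3);
  // removes the ramp's endpoint; the fade stops being applied
  param.cancelScheduledValues(now + 0.5);
}
```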
Superpower #2
Modulation
Connect the output of one node
to another node’s AudioParam
and get interesting effects
Modulation in practice
LFOs
(Low Frequency Oscillators)
We can’t hear those frequencies...
but can use them
to alter values
in a range
we can hear
Example:
SPOOKY SOUNDS
alter an oscillator’s frequency
with another oscillator’s output
Watch out!
var context = new AudioContext();
var osc = context.createOscillator();
var lfo = context.createOscillator();
var gain = context.createGain();
osc.connect(context.destination);
lfo.connect(gain);
gain.gain.value = 100; // [-1, 1] => [-100, 100]
gain.connect(osc.frequency);
Keep watching out
osc.frequency.value = 440;
lfo.frequency.value = 1; // 1Hz = once per second
osc.start();
lfo.start();
Spooky LFOs
Playing existing sounds
AudioBufferSourceNode
for short samples (< 1 min)
MediaElementAudioSourceNode
for longer sounds
Each
AudioBufferSourceNode
requires an
AudioBuffer
1) Loading data into an AudioBuffer
var context = new AudioContext();
var request = new XMLHttpRequest();
request.open('GET', samplePath, true);
request.responseType = 'arraybuffer';
request.onload = function() {
context.decodeAudioData(
request.response, loadedCallback, errorCallback
);
};
request.send();
2) Instancing the AudioBufferSourceNode
var bufferSource = context.createBufferSource();
function loadedCallback(buffer) {
bufferSource.buffer = buffer;
bufferSource.connect(context.destination);
}
An uncanny similarity to oscillators
bufferSource.start(when);
bufferSource.stop(when);
They even have to be recreated like oscillators!
Despair not
You can create them again and reuse the buffer
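A sketch of that pattern: decode once, then create a cheap throwaway AudioBufferSourceNode per playback (`playSample` is a hypothetical helper, not part of the API):

```javascript
// Sketch: reuse one decoded AudioBuffer across many playbacks.
// Creating source nodes is cheap; decoding is the expensive part.
function playSample(context, buffer, when) {
  var source = context.createBufferSource();
  source.buffer = buffer; // the decoded data is shared, not copied
  source.connect(context.destination);
  source.start(when);
  return source; // keep it around if you want to stop() it early
}
```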
pewpewmatic
MediaElementAudioSourceNode
Takes the output of <audio> or <video>
and streams it into the audio graph.
var video = document.querySelector('video');
var videoNode =
context.createMediaElementSource(
video
);
videoNode.connect(context.destination);
Media element
This is just an introduction
There are many more
built-in node types
- delay (echoes)
- filter (low-pass, high-pass, etc.)
- panning (3D sounds!)
- convolver (reverberation)
- splitter & merger (channel manipulation)
- waveshaper (distortion effects)
- compressor (for maximum ooomph)
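As a taste, a sketch wiring one of them, a low-pass BiquadFilterNode, between a source and the speakers; `connectThroughLowpass` and the cutoff value are made up for illustration:

```javascript
// Sketch: insert a low-pass filter into the audio graph.
function connectThroughLowpass(context, source) {
  var filter = context.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 1000; // attenuate everything above ~1 kHz
  source.connect(filter);
  filter.connect(context.destination);
  return filter;
}
```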
Possibilities abound
- getUserMedia + MediaElementAudioSourceNode
- Web Audio Workers - generate audio in realtime with JavaScript
- OfflineAudioContext - Beat detection, audio rendering, etc...
- ...
- You set the limit
- (you are EDGY!)
And there is
STILL more!
I've been hacking on Web Audio stuff for the last 3 years
so I've done the same things over and over
in different ways
I've also spoken to many people about audio stuff
- Angelina Fabbro
- Jordan Santell
- Max Ogden
At some point the stars aligned:
- I finally understood AudioParams
- I found the way to simulate custom audio nodes
- and I was going to speak about music in the 21st century
Suddenly everything made sense
It was, at last, the moment for...
OpenMusic
Modules and components
for Web Audio
github.com/openmusic
OpenMusic right now:
- web components: oscilloscope, slider
- audio components: oscillator, sample player, clipper, dcbias
- eventing: tracker player
- audio generation: noise functions (white, brown, pink)
All based on npm, dependencies sorted out on npm install
What it looks like
var Oscillator = require('openmusic-oscillator');
var ac = new AudioContext();
var osc = Oscillator(ac);
osc.connect(ac.destination);
osc.start();
i.e. pretty much like other AudioNodes
Principles
- behave like standard AudioNodes
- one functionality, one module
- composable
Our wish
- People use these bits and pieces
- Or they look at them and build their own and we can use theirs
- Bits and pieces become tools
- A web audio ecosystem forms...