Phase Music in JavaScript
by Devin Lane
What is this?
Inspired by Tero Parviainen's writing on "systems music", I've created a piece of phase music in the spirit of Steve Reich's It's Gonna Rain.
Systems music explores how a process applied to sound creates slowly evolving changes. Phase music explores how two identical musical phrases, played at steady but slightly different tempos on two different instruments, create new rhythms and harmonies over time.
I discovered Tero's work when he was a judge for BitRate, Google Magenta's music and machine learning hackathon. My team won the contest for our work on Dear Diary in 2020, which I detailed here.
Why make this?
I'm interested in exploring web audio as a compositional tool. This includes, for example, systems music, generative music, data sonification, and effects processing, treating code as a musical instrument and the web as a musical medium.
I intend tools like these to enable new musical expression for both experienced and novice musicians.
How does it work?
- I wrote a piece of music and took a small snippet as a loop.
- Two identical loops of this piece are played, one panned to your left speaker and one to your right (because of this, the effects are more pronounced on headphones or on two separate stereo speakers).
- The playback speeds of the two loops differ slightly, which causes their phase to shift over time: new harmonies and rhythms emerge as the loops drift out of alignment and back again. The short sketch below puts numbers on this.
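To get a feel for the time scale, here's a back-of-the-envelope calculation. The ~0.005 rate offset is the one described in the next section; the 10-second loop length is a made-up example value, since the real snippet's length isn't stated here:

```js
// Back-of-the-envelope: how long until the two loops realign?
const loopSeconds = 10;    // hypothetical length of the looped snippet
const rateOffset = 0.005;  // the faster loop's extra playback rate

// The faster loop gains `rateOffset` seconds of material per real
// second, so a full phase cycle takes loopSeconds / rateOffset.
const secondsPerCycle = loopSeconds / rateOffset; // 2000 s
console.log(`Loops realign after ~${(secondsPerCycle / 60).toFixed(0)} minutes`); // ~33
```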
How did you make it?
The Web Audio API and JavaScript play central roles here. As previously mentioned, Tero's blog was an invaluable resource in building this.
- We set up an `AudioContext()`.
- When `play` is pressed, a function `getSound` is called, which uses the Fetch API to grab the audio file.
- The audio file is read into a raw `ArrayBuffer` object, then we call the `decodeAudioData()` method to decode the audio data from the `ArrayBuffer`. This is our `audioBuffer`.
- We call a separate function `startLoop` twice, panning one loop hard left and one hard right, and setting the playback speed of one loop ~0.005 faster than the other.
- Within `startLoop` we create an `AudioBufferSourceNode`, which reads our `audioBuffer`.
- We then set up our panning and our loop points, and connect the nodes to the destination of our `AudioContext`.
- Importantly, we build a `stop` button, which calls the `.stop()` method on our `sourceNode`. A sketch of these steps follows below.
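Putting the steps together, here is a minimal sketch of that flow. The function names (`getSound`, `startLoop`) follow the list above, but the audio URL, button IDs, and loop points are placeholder assumptions, not the project's actual source:

```js
const audioContext = new AudioContext();
const activeSources = [];

// Fetch the audio file and decode it into an AudioBuffer.
async function getSound(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  return audioContext.decodeAudioData(arrayBuffer);
}

// Play one looping copy of the buffer, panned hard to one side.
function startLoop(audioBuffer, pan, playbackRate) {
  const sourceNode = audioContext.createBufferSource();
  sourceNode.buffer = audioBuffer;
  sourceNode.loop = true;
  sourceNode.loopStart = 0;                      // placeholder loop points
  sourceNode.loopEnd = audioBuffer.duration;
  sourceNode.playbackRate.value = playbackRate;

  const panNode = audioContext.createStereoPanner();
  panNode.pan.value = pan;                       // -1 = hard left, 1 = hard right

  sourceNode.connect(panNode).connect(audioContext.destination);
  sourceNode.start();
  activeSources.push(sourceNode);
}

document.querySelector('#play').addEventListener('click', async () => {
  await audioContext.resume();                   // contexts start suspended until a gesture
  const audioBuffer = await getSound('loop.mp3'); // placeholder file
  startLoop(audioBuffer, -1, 1.0);               // left channel, base speed
  startLoop(audioBuffer, 1, 1.005);              // right channel, ~0.005 faster
});

document.querySelector('#stop').addEventListener('click', () => {
  activeSources.forEach((sourceNode) => sourceNode.stop());
  activeSources.length = 0;
});
```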
What optimizations would you build with more time?
Some features I haven't built yet that would be nice:
- Allow users to upload their own audio files
- Add user control for the loop start and stop points
- Add user control for the audio playback speed
- Add user control for the panning
- Add a record feature, allowing downloadable audio files (one possible approach is sketched after this list)
- Allow users to create accounts and save their work
- Allow users to share their work with a link to the site
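For the record feature, one possible approach (not something the project implements) would be to route the graph through a `MediaStreamAudioDestinationNode` and capture it with the `MediaRecorder` API. This reuses `audioContext` from the sketch above; the filename is a placeholder:

```js
// Nodes that currently feed audioContext.destination would also
// connect to this recording destination.
const recordingDestination = audioContext.createMediaStreamDestination();

const recordedChunks = [];
const mediaRecorder = new MediaRecorder(recordingDestination.stream);
mediaRecorder.ondataavailable = (event) => recordedChunks.push(event.data);
mediaRecorder.onstop = () => {
  // Bundle the captured audio and offer it as a download.
  const blob = new Blob(recordedChunks, { type: mediaRecorder.mimeType });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'phase-music.webm'; // placeholder filename
  link.click();
};

mediaRecorder.start();  // begin capturing
// ...later, e.g. from a "download" button:
// mediaRecorder.stop();
```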