First, a brief stroll through history. When multitrack tape recording became affordable in the late 1970s, it became practical, for the first time, for musicians without a band not only to compose music but to hear what they had written, and even to overdub to correct mistakes. In fact, “written” isn’t quite the right word. For the first time, you could put together complex pieces of music without writing down a single note, or even knowing how to.
But you did still have to be able to play a guitar, piano, or whatever. When MIDI debuted in 1983, even that requirement was lifted. Equipped with a modest home computer and a couple of MIDI synthesizers, you could plink out arbitrarily complex pieces of music by clicking the mouse. Parts could be copied and pasted, transposed to new keys, made louder or softer, and so on.
At that time I was an editor at Keyboard magazine, and I was right there in the thick of this revolution — testing instruments and software and writing about what I discovered. It’s entirely possible that I had the very first MIDI-equipped home studio in the world, but that’s not actually much of an accomplishment. I was in the right place at the right time, that’s all.
The main market for MIDI software was, and continues to be, pop musicians. So it’s not surprising that this type of software is optimized to do pop songs and related musical structures. Today, these programs, known as DAWs (digital audio workstations), are beyond amazing. Logic, Ableton, Reason, Bitwig, FL Studio, Cubase, Digital Performer, they all have their own strengths and their own quirks, and you can make great music with any of them.
Alongside the DAWs, another fertile development in music software has sprung up. Starting with Csound and Max, we’ve seen a more anarchic but in some ways more interesting group of programs that are designed not for producing pop music but for experimental composing. Both of those programs are still around. (I wrote a how-to book about Csound, in fact, though the book is somewhat obsolete by now.)
Programs of this type tend to challenge the user. They’re capable of amazing feats, but they’re not necessarily point-and-click or plug-and-play. The assumption is that you’re willing to roll up your sleeves and learn stuff. There’s a lot to learn with Reason or FL Studio too, but because a DAW is more of a commercial product, the developers will try to make it easier for you. Experimental software, not so much.
Recently I’ve become a big fan of VCV Rack. It’s mostly freeware, though some modules have a modest cost. It’s a modular synthesizer running in software. If you don’t know what a modular synthesizer is, I don’t know if I ought to try to explain it. Let’s just say it’s a great big set of components (modules). You make connections between the modules using virtual patch cords by dragging the mouse from an output jack to an input jack, and then something happens. Not infrequently, what happens is, sound comes out of your computer’s speakers.
And that’s the preamble. Here’s what I want to talk about today.
Composing music with VCV Rack is quite different from composing in a DAW. I’ve done a lot of composing over the years in Cubase, FL Studio, and Reason. (You can find the results at midiguru.bandcamp.com.) Right now I’m thinking it may be worthwhile to do a “CD” of music using VCV Rack, but I’m not quite sure how to approach the music, because the tools in VCV Rack are so different.
At the highest level, music in a DAW is organized in sections and phrases. Music in VCV or a similar program is organized in events and processes. For instance, a process might involve letting a step sequencer iterate through a pattern of notes over and over, while making small changes (randomly or non-randomly) in the pattern.
What kinds of changes? Pretty much anything you can imagine. The pattern could go on for thousands of years without ever repeating. Yet it could easily remain identifiably self-similar the entire time.
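To make the idea concrete, here’s a minimal sketch, in Python, of the kind of process I mean: a step sequencer that loops a pattern forever while occasionally nudging one step. The note numbers and mutation rules are made up for illustration, not taken from any particular module.

```python
import random

# A pattern of MIDI note numbers (hypothetical starting material).
pattern = [57, 60, 64, 67, 64, 60, 57, 55]

def mutate(pattern, chance=0.1, spread=2):
    """Occasionally nudge one randomly chosen step up or down."""
    new = list(pattern)
    if random.random() < chance:
        i = random.randrange(len(new))
        new[i] += random.choice([-spread, spread])
    return new

# The "process": each cycle plays the pattern, then maybe mutates it.
for cycle in range(4):
    print(pattern)          # stand-in for sending the notes to a synth
    pattern = mutate(pattern)
```

Because each cycle differs from the last by at most one small step, the pattern drifts indefinitely while staying recognizably related to where it started — which is exactly the self-similar, never-repeating behavior described above.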
Just to be clear, you can make process-oriented modular music in Reason or Ableton, and you could record a three-minute pop tune in VCV Rack. It’s possible, but you’d be using the wrong tool for the job.
In a DAW, there’s a timeline. The music begins at the left end of the timeline (conventionally, bar 1, beat 1) and ends when there’s no more data to be played back. There’s only one timeline; it’s global. Every section and every phrase has a starting time and a length with respect to the timeline. This makes perfect sense for pop music, or for jazz or classical music. It’s exactly how conventional sheet music is organized.
VCV Rack has no timeline. Things can start and stop, but they start and stop at arbitrary points in relation to one another. It’s a bit like the theory of relativity. There are modules with which you can trigger a certain event (the beginning of some new process, let’s say) at a time 2 minutes and 17 seconds after you’ve pressed the Go button to start playback, but it’s up to you to figure out whether you want it at 2:17, 2:19, or some other time. A calculator applet may come in handy. You can build sensible multitrack phrases using this procedure, but it’s much, much more laborious than it would be in a DAW.
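The arithmetic that calculator applet would be doing is simple enough. A sketch, assuming you’re trying to place an event on a musical grid at a known tempo (the function name and defaults are my own, not anything built into VCV):

```python
def bars_to_seconds(bars, bpm, beats_per_bar=4):
    """How many seconds into the piece a given number of bars lasts."""
    return bars * beats_per_bar * 60.0 / bpm

# At 120 BPM in 4/4, 34 bars last 34 * 4 * 0.5 = 68 seconds,
# so an event meant for bar 35 would be triggered 68 seconds after Go.
print(bars_to_seconds(34, 120))
```

In a DAW you never do this by hand, because the global timeline does it for you; in VCV you end up doing this sum for every independently triggered event.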
Setting up elaborate modulation curves, on the other hand, using a curve generator that will run happily by itself for hours, is a lot easier in VCV than it would be in a DAW. You can mess with the filter cutoff of one sound, the effect mix of a different sound, and the length of your sequencer’s looping pattern all from the same modulation curve. (Multiply that example by about a thousand to get some idea of what you can do just by dragging a few on-screen patch cords around.)
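In code terms, a patch like that amounts to one slowly varying signal fanned out to several destinations, each with its own scaling. A rough sketch, with entirely made-up parameter ranges:

```python
import math

def mod_curve(t):
    """One slow modulation source: a sine with a 3-minute cycle, 0..1."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t / 180.0)

def scale(x, lo, hi):
    """Map a 0..1 modulation value into a destination's own range."""
    return lo + x * (hi - lo)

t = 45.0          # seconds since the patch started running
m = mod_curve(t)  # a single value drives all three destinations
cutoff_hz   = scale(m, 200.0, 8000.0)  # filter cutoff of one voice
effect_mix  = scale(m, 0.0, 1.0)       # wet/dry of another voice's reverb
loop_length = round(scale(m, 4, 16))   # steps in the sequencer's pattern
```

In VCV the fan-out is literally three patch cords from one output; the point is that a single gesture (one curve) reshapes timbre, space, and phrase length at once.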
And what does it all mean musically? This is the core of the conundrum.
I know harmony theory. If I’m in the key of A major and I write a chord progression that goes like this — D, Dm, A/C#, Cdim7, Bm7, Bb7b5, Amaj7add9 — it has a sensible structure. I won’t say it has a specific meaning, because it could be used to make happy music or sad music, but it has structural meaning within the context of harmony theory, and most listeners who are familiar with Western music will sense intuitively that it means something. It’s going somewhere. Jumble those chords up in a different order and the meaning will be lost.
But what does it mean for a complex modulation curve to change the decay time of one envelope generator, the pan position of another instrument in the stereo field, and the rate of an LFO that’s modulating something else, all at the same time? The musical meaning, as apprehended by a listener, is entirely undefined.
Because of this, modular music that relies on processes of this sort tends to lack human relevance. You can sit back and experience it, and that’s what people do, but you can’t understand it. You can’t build a sensible mental picture of what’s going on, the way you can when you listen to, let’s say, Beethoven’s Seventh Symphony. You can’t anticipate. You can perceive that the music is getting busier or more dissonant, but no moment-to-moment meaning-building can occur, because there is no known syntax from which to build meaning.
Or at least, not much syntax. Sure, you can sequence a strong 4-bar bass line that anchors the music in a conventional chord progression, but if you’re going to rely on that as the main foundation on which to create something meaningful, why not write the music in a DAW instead?
A lot more could be said about this topic, and I may get around to it. I’ve recorded a fair amount of music using VCV, so I have some idea what it’s capable of, though there are a lot of fascinating modules that I’ve barely touched. Maybe I just need to do some more experimenting.
But experimenting, while wearing a metaphorical lab coat, may not do much to build meaning for listeners. I’ll leave you with one of my favorite observations:
When Mozart sat down at his fortepiano, all of the jazz harmonies of Bill Evans and Thelonious Monk were right there in front of him, laid out across the keyboard. But he couldn’t find any of them! The larger cultural context that would have given those chords meaning didn’t yet exist. When it comes to making music using modular synthesis, I feel we’re in a similar situation. We can make things that sound cool just by throwing our fingers at the machine, but it’s difficult — for me, anyway — to see how to put it all together in a way that conveys any sort of coherent meaning, even in the abstract way that music usually conveys meaning.