When my cello is making funny noises, I like to remind myself of the adage, “It’s a poor workman who blames his tools.” On the other hand, when I finally got around to getting my bow rehaired, the squeaky noises stopped. Sometimes it is the tools.
I’ve been contemplating, in a vague, fuzzy sort of way, the possibility of composing some synthesizer music in a more open-ended, less pop-based style. Music that’s shaped more like clouds, or flowers, or the stones in the bed of a mountain stream. My usual impulses, the unconscious promptings that produce bass lines, chord progressions, and alternating verse/chorus/bridge structures, seem to be leading me down a blind alley.
At this point, I confront the stark fact that sequencer software is designed for composing and recording pop music. Whether we’re talking about Cubase, FL Studio, Reason, Live, or some other program, it’s the same deal. These programs make some basic assumptions about your music that, while valid for 99.9% of the folks who use them, are quite limiting should you want to go off in a different direction.
One big assumption is that your music will be built on a metrical grid of bars and beats that is global — shared by all of the instruments, in other words. These days, decent sequencers will let you change the time signature and also the tempo at any point in the music, so the grid is elastic and reconfigurable. But it’s still global. None of them, to my knowledge, will let you run several simultaneous, independent metrical grids.
Charles Ives once wrote a piece that involved two marching bands playing in different keys and, more to the point, at different tempos at the same time. I’m not excessively fond of Ives. I don’t mind writing difficult music, but I want it to be enjoyable while it’s being difficult. But that’s a good example of what I’m talking about. What if I want to layer two blocks of material, one of which is gradually speeding up while the other slows down?
To do that on any modern sequencer would embroil you in an almost endless process of fiddly data editing, because sequencers assume that the tempo and meter are global.
Doing this kind of tempo warping with Csound would be a bit easier. Unfortunately, most of the other things you have to do with Csound are harder. Sound design, in particular, can take days. With a sequencer and a good plug-in synth, you can flip through dozens of presets, find one that intrigues you, make a few judicious edits, and be ready to record in five minutes. And then you record by throwing your hands at the keyboard. The feedback loop between your unconscious intuition and the musical result is basically instantaneous. In Csound, that loop is anywhere from ten seconds to ten minutes long, because you have to stop playback and type some new code in order to hear your idea. That break in the process does make a difference in the nature of the composing experience.
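To make the comparison concrete, here is a minimal sketch of how independent tempo curves might look in Csound. Everything here is my own illustration, not code from any particular piece: each layer runs its own clock with `metro`, whose rate glides via `line`, and fires notes with `schedkwhen`. It assumes Csound 6 defaults (for instance, `oscili` falling back to a built-in sine table).

```csound
; Two rhythmic layers with independent, gliding tempos.
; Layer 1 accelerates while layer 2 decelerates -- no shared grid.
; Run with a score like:  i1 0 20  /  i2 0 20

instr 1                            ; accelerating layer
  kTempo line  2, p3, 4            ; beats per second: 2 -> 4 over the note
  kTrig  metro kTempo              ; fires once per "beat" of this layer
  schedkwhen kTrig, 0, 0, 3, 0, 0.1, 440
endin

instr 2                            ; decelerating layer
  kTempo line  4, p3, 2            ; beats per second: 4 -> 2
  kTrig  metro kTempo
  schedkwhen kTrig, 0, 0, 3, 0, 0.1, 220
endin

instr 3                            ; a simple tone voice, triggered above
  aEnv  linen  0.3, 0.005, p3, 0.05
  aSig  oscili aEnv, p4
  out   aSig
endin
```

The point is that the tempo curve lives inside each instrument, so you can layer as many independent grids as you like.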
Another assumption built into sequencers is that your music will consist of a stream of notes, and that the notes played by any given instrument will have a more or less fixed sonic identity from one end of the piece to the other. Introducing sonic modifications with modulation data is not very burdensome, but the temptation, since the sequencer is sitting there panting for you to play notes, is to play some more notes instead of morphing the notes you already have. With Csound, you really could make an entire piece whose score consisted of one “note,” all of the sonic adventures being designed into a single instrument’s internal processes. That’s not likely to be a desirable way to organize a piece, but you could do it. More likely, a soundscape piece would contain 20 or 30 “notes,” each lasting for a couple of minutes.
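A hypothetical single-"note" instrument along those lines might be sketched like this, with all of the motion coming from slow internal envelopes rather than from new note events. The specific opcodes and parameter values are my own illustrative choices, assuming Csound 6:

```csound
; One "note" lasting two minutes; the sonic adventures are internal.
; Score:  i1 0 120

instr 1
  kCut   expseg 200, p3*0.5, 4000, p3*0.5, 300   ; minutes-long filter sweep
  kIndex linseg 0, p3*0.3, 8, p3*0.7, 1          ; slow drift in FM depth
  aMod   oscili kIndex*55, 55                    ; modulator
  aCar   oscili 0.2, 110 + aMod                  ; carrier, FM'd by aMod
  aOut   moogladder aCar, kCut, 0.4              ; swept lowpass
  out    aOut
endin
```

Whether you would actually want to organize a piece this way is another question, but the score for it really is a single line.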
You could do this kind of music on a sequencer, at least if you have a synthesizer with lots of parameters you can modulate. But even then, there are limitations. Suppose you want a single modulation signal to drive several parameters in different instruments at the same time. Sequencers do not like to do this. They assume that each instrument is going to receive its own modulation data. If you want three instruments to respond to the same modulation, you duplicate the modulation data three times. Then if you want to edit it, you have to edit all three copies.
There are ways to set up both Reason and FL Studio so that one mod source can drive parameters in various instruments. I don’t mean to imply that it’s impossible across the board. I don’t think Cubase or Pro Tools can do it, though, and I think Live could only do it if you’re running Max For Live. And if you should happen to want to use the audio output of one instrument as an input for another instrument in order to do a little ring modulation … good luck with that. Most sequencers and sequencer plug-ins are not configured to allow this type of signal routing. In Csound, you just type a few lines of code, and you’ve got it up and running.
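Those "few lines of code" might look something like this: a sketch, using global variables of my own invention, in which one slow LFO drives two instruments and one instrument's audio output ring-modulates another's. Note that instrument numbers determine execution order, so the bus reader must be numbered higher than the bus writer:

```csound
; One shared mod source, plus audio routed between instruments.
gkLfo  init 0
gaSrc  init 0

instr 1                            ; the shared modulator, written once
  gkLfo oscili 1, 0.1              ; one slow sine LFO
endin

instr 2                            ; reads gkLfo, writes audio to a bus
  aSig  oscili 0.3, 220 + gkLfo*20
  gaSrc = gaSrc + aSig
endin

instr 3                            ; ring-modulates the bus with its own osc
  aCarrier oscili 1, 50 + gkLfo*10 ; also reads the same gkLfo
  aRing    = gaSrc * aCarrier
  out      aRing
  gaSrc    = 0                     ; clear the bus after use
endin
```

Editing the modulation means editing one instrument, not three copies of a controller lane.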
Where Csound falls short is the big-picture workflow. In configurability, it gets an A+. In user-friendliness and intuitive operation, it gets a D.
Oh, yeah. Choosing the right tool does make a difference.