So you’re describing the human species to a xenosociologist named Erbq, who comes from a planet somewhere near Aldebaran. You tell him (actually, Erbq is a him/2, but let’s not get into that) that we humans use tools and communicate using spoken words, which signify objects, actions, and relations. Also, we have a glandular system that releases chemicals into the bloodstream to stimulate quick action — emotions, in other words.
From this data, Erbq might reasonably conclude that humans will express their emotions by making sculptures and paintings. (I suppose we’d better tell him we have color-sensitive eyes too. That’s important information.) He might also be able to predict that in order to depict our social relations, we will tell stories, write poetry, and enact dramas, either onstage or in a film/video medium of some sort. All of this seems pretty natural, given the raw material of the human organism.
But would Erbq be able to predict the existence of music?
Music is a very odd art form. It’s almost entirely abstract, and yet it communicates. Music is a language whose words and sentences are, technically, meaningless. But somehow, the listener is able to perceive and feel the essence of what the composer and/or performer intended.
We’re a long way from understanding how that process works. I’m very curious about what neuroscientists are learning about the brain’s perception of music, but I’m a lot more interested in looking at the process from the outside, from the point of view of the composer and listener. When we listen to Bach or Beethoven, what is it that Bach and Beethoven say to us?
This post is actually about writing electronic music with Csound. I’m just getting there slowly. Bear with me.
The period in classical music from roughly 1700 to 1900 is called the “common practice” period. Composers during that era — that is, European and American composers of concert music — used a consistent and well-understood set of techniques. These techniques were (and remain) a remarkably fertile toolkit. Bach, Rachmaninoff, and the songwriters in Nashville use the same tools, yet their results are wildly different.
After 1900, classical composers tore up the rulebook. As a result, audiences didn’t care for their music. The experiment, which lasted for decades and occupied the minds of a lot of very intelligent people, was pretty much a flop. A few composers from that period (Shostakovich, Bartok, Stravinsky) are still heard today … but Bach and Beethoven are heard a lot more often! To the extent that 20th-century composers managed to capture the ears of listeners, it was because they drew on and extended the techniques of the common practice period rather than erasing the hard drive and starting over.
The procedures of the common practice period encompassed melody, harmony, and rhythm. To some extent, the details were learned. A fugue is not something you’ll find in the music of Africa or Asia; nor is a chord progression using secondary dominants. Nevertheless, such details were firmly based on the bedrock of how the brain perceives and interprets music. And they were widely shared by an entire culture. Innovations occurred, certainly, but within a shared and well-understood framework.
The wonderful thing about a computer-based composition system like Csound is that in a formal, compositional sense you can do anything with it that you can possibly imagine. Complex polyrhythms, microtonal scales, “instruments” that morph smoothly from one tone color to another, singers whose voices dissolve into a sea of sifting sound grains — just type a few lines of code and you have it all.
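To make that concrete, here is a minimal sketch of the sort of thing I mean: a complete, self-contained Csound file that plays three steps of an 11-note equal temperament, a scale no piano can play. (The instrument design and the choice of tuning are arbitrary, purely for illustration.)

```csound
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

giSine ftgen 0, 0, 16384, 10, 1    ; a plain sine-wave table

instr 1
  ; p4 = amplitude, p5 = frequency in Hz -- any frequency at all,
  ; which is what makes microtonal scales trivial here
  kenv linseg 0, 0.05, p4, p3 - 0.1, p4, 0.05, 0   ; attack / sustain / release
  asig oscili kenv, p5, giSine                     ; sine oscillator at p5 Hz
       outs   asig, asig
endin
</CsInstruments>
<CsScore>
; three steps of an 11-note equal temperament: each step multiplies
; the frequency by the eleventh root of 2, about 1.065
i 1 0 1 0.3 261.63
i 1 1 1 0.3 278.65
i 1 2 1 0.3 296.78
e
</CsScore>
</CsoundSynthesizer>
```

That’s the whole piece. Nothing in it cares that the scale has eleven steps instead of twelve, and that is precisely the freedom I’m talking about.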
The difficulty, which seems nearly insurmountable, is that no one who hears your music will understand what you’re saying. You won’t be speaking a language that is in common practice; you’ll be speaking your own private dialect. Speaking in tongues, as practiced in certain obscure Protestant churches, may be very moving to those who hear it, but nobody is transcribing these outpourings, binding them between covers, and putting them up on the shelves at Borders.
It’s possible that a new form of common practice may develop within the world of electronic music. But the elements on which this future language could be based are so numerous, the technology is changing so rapidly, and the audiences are so small, that I’m a bit skeptical it will happen. Also, I would speculate that the radical new sonorities made possible by electronic media may not map nearly as well onto innate brain mechanisms as diatonic melodies do.
Meanwhile, in other news, next week I’ll be playing cello (an instrument developed in the 17th century, and not greatly changed since then) in a concert in San Jose, at which we will play orchestral pieces by Bach, Beethoven, and Brahms. Several hundred people will pay $20 each in order to listen to us do this. Similar concerts are presented every week in the Bay Area, and most of the tickets cost more than $20.
I hardly need to add that far fewer concerts of recently composed electronic music take place each week. What electronic music is heard at all is played in smaller venues, before smaller audiences, and the ticket prices (if the presenters even charge for tickets) are lower.
I’m sure some composers of electronic music feel that this disparity is the result of a vast media conspiracy, or perhaps of the dullness and apathy of the listening public. But I don’t think so. I’m pretty sure it’s because the composers of electronic music are babbling in a private language rather than speaking a common language that listeners will understand.
It’s painful to think that all of the power of Csound, SuperCollider, and these other amazing tools is destined to go to waste. Yet in some sense, it doesn’t go to waste. If you enjoy what you’re doing in your personal composition workshop, writing poetry in your own private language, that’s all that matters. But I’m old-fashioned enough to want to say something that maybe someone else will understand, or even care about, when they hear it.