Posted by midiguru on November 13, 2012
Barring a few isolated examples such as the lightning-flashes in Beethoven’s Sixth Symphony, instrumental music is entirely abstract. The sounds we hear evoke responses in us, yet those responses are private, and the method by which the sounds evoke those responses is rather mysterious.
I play in community orchestras, so I spend a lot of time rehearsing and performing music written by dead Europeans. I also listen to experimental electronic music from time to time, by living composers who are in some sense my colleagues.
I think I may have figured out why I like the dead Europeans a lot more. It isn’t just a matter of familiarity, and it isn’t just because I’m an old fogey (though there’s considerable merit in that accusation). It’s because classical music is a coherent language — a language with a syntax that can be known and understood in considerable detail.
Consciously developing a new language is all but impossible. The dismal history of Esperanto, an invented spoken and written language, illustrates the difficulties all too well — and Esperanto has an explicit vocabulary and grammar, which you and your listeners can learn if you care to. Inventing your own musical language is likely to be useful only if you’re satisfied to talk to yourself, or to a tiny coterie of listeners who have nothing better to do than sit around and congratulate themselves on their grasp of the ineffable. If you want to communicate with a wider group of listeners, you’ll have to say something using syntax that they already know, at least in an intuitive way.
My use of the word “language” is not merely by way of analogy. When we look at the music of various cultures, what we find is that the musical events in all of them (excepting only the modern academic “culture”) have a great deal in common with human speech. This is not an accident. Our brains are exquisitely evolved to understand speech.
Speech has pitch inflections, which convey meaning. Most music relies quite extensively on variations in pitch. There is some intriguing evidence, in fact, that some version of the pentatonic scale is hard-wired into our brains. Scales of five to seven pitches are found around the world. The rules by which these pitches are organized vary somewhat from culture to culture, but the existence of a scale containing a small, knowable set of pitches is a universal, a given. Except in experimental electronic music, that is.
The grouping of pitches into recognizable patterns (melodies and motifs) may have another evolutionary basis than speech. Pattern recognition is a vital skill for all animals more complex than sponges and jellyfish. Without the ability to recognize patterns, we wouldn’t be able to find edible foods or tell the difference between predators and prey. Music that lacks pitch patterns (that is, repeated groupings of scale-based pitches) is bewildering. Other than in certain schools of free improvisation, it’s unknown, and there’s a reason for that. People like to hear patterns.
Speech has natural rhythms, and the range of rhythms found in speech closely matches the range of rhythms found in most music. Speech rhythms typically fall between one and six events per second, and that is also the usual range of rates at which notes are played. Faster flurries of notes are, I think, interpreted by the listener either as single gestures or as a display of virtuosity.
Speech also has tone color, and our brains are finely attuned to detect it. Imagine you’re sitting at a table with a few of your friends, who are conversing, and imagine that your eyes are closed. You will have not the slightest difficulty figuring out who is saying what, based entirely on the subtle differences in vocal timbre. Speech timbre typically changes (that is, one speaker is replaced by another) at a somewhat slower rate than speech pitch or rhythm, because we usually speak in whole phrases or sentences. And indeed, changes in timbre — say, a phrase on flute followed by an answering phrase on oboe — occur at a somewhat slower rate. Few of us, however, would want to listen to one speaker for 20 minutes, unless the speaker were saying things that we cared about. Timbral variation is not a trivial component of good music — and again, it has a natural range of rhythms.
A fair amount of post-minimalist electronic music dispenses with the notion of notes as discrete events. The music often consists entirely of long sustained washes of tone, which change pitch and/or timbre in a slow, dreamy way. Now, there’s nothing wrong with writing a piece of music whose cognitive content is “lost in the fog” … except that there’s only one piece like that. Writing it over and over, or writing it because your friend wrote it and you liked it, is utterly pointless. You’re not saying anything.
You’re not saying anything because you’ve deliberately cast aside the rhythms and pitch inflections of human vocal behavior. Without those elements, music is sadly impoverished. Worse, it’s boring.
The impetus behind experimental music seems to be that one ought to be saying something new. This notion took hold in painting and music in the early years of the 20th century, and we don’t seem to have been able to shake it off. On the whole, I would tend to agree that rehashing Mozart or Tchaikovsky without adding anything new would be pointless. Or would it? Did Mozart write all of the pieces that he could conceivably have written? Well, no, he didn’t. He died at the age of 37. If you were actually to write Mozart’s 42nd symphony as you envision it, arguably that would be a terrific contribution to the world of music — assuming you did a solid job of it.
To do a solid job, of course, you’d have to spend a few years studying Mozart. And here, I think, is the less defensible reason for writing experimental music: Learning the language of notes well enough to say interesting things in that language is damned hard work! It’s a lot easier to just throw the baby out with the bath water and write something whose only virtue is that it’s new and strange.
If you honestly feel there’s nothing more to be said in the conventional language of notes and rhythms, I invite you to look into alternate tunings. The harmonic resources of microtonal equal temperaments are vast and almost entirely unexplored. Of course, orchestras won’t be able to play your music — but they weren’t going to play your Tchaikovsky-inspired tone poem either. Armed with a halfway decent computer, you can use notes and rhythms that are fresh, while still retaining the range of rhythms and pitch inflections that human beings can relate to.
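In case you’re curious how little machinery this takes: here’s a minimal sketch (the function name and the A4 = 440 Hz reference pitch are my own choices, not anything standard) of computing the pitch set of an n-tone equal temperament, where each step multiplies the frequency by the nth root of 2:

```python
def edo_frequencies(n, base=440.0):
    """Frequencies of one octave of n-tone equal temperament above `base`.

    In n-EDO (equal divisions of the octave), step k above the base
    pitch has frequency base * 2**(k / n). Returns n + 1 values, the
    last being the octave (exactly double the base).
    """
    return [base * 2 ** (k / n) for k in range(n + 1)]

# Example: the 19 steps of 19-EDO rising from A4, rounded for display.
scale = [round(f, 2) for f in edo_frequencies(19)]
print(scale)
```

Feed those frequencies to any softsynth that accepts raw pitch values (or a tuning table) and you have a harmonic palette no orchestra has touched.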