Lingua Franca

So you’re describing the human species to a xenosociologist named Erbq, who comes from a planet somewhere near Aldebaran. You tell him (actually, Erbq is a him/2, but let’s not get into that) that we humans use tools and communicate using spoken words, which signify objects, actions, and relations. Also, we have a glandular system that releases chemicals into the bloodstream to stimulate quick action — emotions, in other words.

From this data, Erbq might reasonably conclude that humans will express their emotions by making sculpture and paintings. (I suppose we’d better tell him we have color-sensitive eyes too. That’s important information.) He might also be able to predict that in order to depict our social relations, we will tell stories, write poetry, and enact dramas, either onstage or in a film/video medium of some sort. All of this seems pretty natural, given the raw material of the human organism.

But would Erbq be able to predict the existence of music?

Music is a very odd art form. It’s almost entirely abstract, and yet it communicates. Music is a language whose words and sentences are, technically, meaningless. But somehow, the listener is able to perceive and feel the essence of what the composer and/or performer intended.

We’re a long way from understanding how that process works. I’m very curious about what neuroscientists are learning about the brain’s perception of music, but I’m a lot more interested in looking at the process from the outside, from the point of view of the composer and listener. When we listen to Bach or Beethoven, what is it that Bach and Beethoven say to us?

This post is actually about writing electronic music with Csound. I’m just getting there slowly. Bear with me.

The period in classical music from roughly 1700 to 1900 is called the “common practice” period. Composers during that era — that is, European and American composers of concert music — used a consistent and well-understood set of techniques. These techniques were (and remain) a remarkably fertile toolkit. Bach, Rachmaninoff, and the songwriters in Nashville use the same tools, yet their results are wildly different.

After 1900, classical composers tore up the rulebook. As a result, audiences didn’t care for their music. The experiment, which lasted for decades and occupied the minds of a lot of very intelligent people, was pretty much a flop. A few composers from that period (Shostakovich, Bartok, Stravinsky) are still heard today … but Bach and Beethoven are heard a lot more often! To the extent that 20th century composers managed to capture the ears of listeners, it was because they drew on and extended the techniques of the common practice period rather than erasing the hard drive and starting over.

The procedures of the common practice period encompassed melody, harmony, and rhythm. To some extent, the details were learned. A fugue is not something you’ll find in the music of Africa or Asia; nor is a chord progression using secondary dominants. Nevertheless, such details were firmly based on the bedrock of how the brain perceives and interprets music. And they were widely shared by an entire culture. Innovations occurred, certainly, but within a shared and well understood framework.

The wonderful thing about a computer-based composition system like Csound is that in a formal, compositional sense you can do anything with it that you can possibly imagine. Complex polyrhythms, microtonal scales, “instruments” that morph smoothly from one tone color to another, singers whose voices dissolve into a sea of sifting sound grains — just type a few lines of code and you have it all.
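To make “just type a few lines of code” concrete, here is a minimal sketch of my own (not from any particular piece) of the kind of thing Csound makes trivial: a simple sine instrument tuned to a 19-tone equal-tempered scale, something no common-practice keyboard can play. The choice of 19-EDO and the middle-C base frequency are illustrative assumptions.

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

instr 1
  ; p4 is a scale step; tune to 19-tone equal temperament
  ifreq = 261.626 * 2 ^ (p4 / 19)        ; middle C times 2^(step/19)
  kenv  linseg 0, 0.02, 0.2, p3 - 0.07, 0.2, 0.05, 0   ; simple envelope
  asig  oscili kenv, ifreq               ; sine oscillator
  outs  asig, asig
endin
</CsInstruments>
<CsScore>
; play the first five steps of the 19-EDO scale
i 1 0   0.4 0
i 1 0.5 0.4 1
i 1 1   0.4 2
i 1 1.5 0.4 3
i 1 2   0.4 4
</CsScore>
</CsoundSynthesizer>
```

Swap the exponent's denominator and you have 31-EDO, or any tuning you like — which is exactly the problem the next paragraph describes: total freedom, and no shared framework for the listener.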

The difficulty, which seems nearly insurmountable, is that no one who hears your music will understand what you’re saying. You won’t be speaking a language that is in common practice; you’ll be speaking your own private dialect. Speaking in tongues, as practiced in certain obscure Protestant churches, may be very moving to those who hear it, but nobody is transcribing these outpourings, binding them between covers, and putting them up on the shelves at Borders.

It’s possible that a new form of common practice may develop within the world of electronic music. But the elements on which this future language could be based are so numerous, the technology changing so rapidly, and the audiences so small, that I’m a bit skeptical whether it will happen. Also, I would speculate that the radical new sonorities made possible by electronic media may not map nearly as well onto innate brain mechanisms as diatonic melodies do.

Meanwhile, in other news, next week I’ll be playing cello (an instrument developed in the 17th century, and not greatly changed since then) in a concert in San Jose, at which we will play orchestral pieces by Bach, Beethoven, and Brahms. Several hundred people will pay $20 each in order to listen to us do this. Similar concerts are presented every week in the Bay Area, and most of the tickets cost more than $20.

I hardly need to add that the number of concerts of recently composed electronic music per week is far smaller. The electronic music that is heard at all will be played in smaller venues before smaller audiences, and the ticket prices (if the presenters even charge for tickets) will be lower.

I’m sure some composers of electronic music feel that this disparity is the result of a vast media conspiracy, or perhaps of the dullness and apathy of the listening public. But I don’t think so. I’m pretty sure it’s because the composers of electronic music are babbling in a private language rather than speaking a common language that listeners will understand.

It’s painful to think that all of the power of Csound, SuperCollider, and these other amazing tools is destined to go to waste. And yet, in some sense, it doesn’t go to waste. If you enjoy what you’re doing in your personal composition workshop, writing poetry in your own private language, that’s all that matters. But I’m old-fashioned enough to want to say something that maybe someone else will understand, or even care about, when they hear it.


3 Responses to Lingua Franca

  1. Tom Mulhern says:

    I love Beethoven and Bach, but I love Xenakis and Bartok as much. There are also some electronic music compositions that I find resonate just as well, at least with me. “Earth’s Magnetic Field” by Charles Dodge and “Silver Apples Of The Moon” by Morton Subotnick come to mind. One of the things that strikes me about Bach, Beethoven, et al., is that they were able to find the “sweet spot” in what people respond to. Sort of like the right blend of salt or sugar in a recipe. Rhythm, pitch, and harmony seem optimized to the Western ear and are masterfully blended. Electronic music tends to focus on more complex events and sonic transmutations that may be too far from the comfort zone that’s ingrained into the typical 12-tone classical psyche, creating a bit of cognitive dissonance, which then dooms it to a certain extent.

  2. Conrad Cook says:

    I’m a musical illiterate, but I suspect you’re right. I can add two factoids to the mix — research them properly as you find them interesting, or not.

    First, human animals are the only ones that can perceive rhythm. You can’t teach dogs to dance.

    My comment — it seems very likely that other animals have other perceptual filters that enable them to organize complex patterns and extract information simply from them. Birdsong is incredibly complex, and not understood. Probably birds have some kind of perceptual filter, as we have rhythm, that allows them to make sense of the patterns.

    Factoid two — as you probably know, there is a learned element to sound perception. Was it Stravinsky who wrote a piece that caused a huge riot? And then a few decades later Disney set dancing hippos and alligators to it in _Fantasia_?

    The reason nobody rioted over Fantasia was not the pacifying effect of the hippos, but that the new musical “memes” had become part of the culture; as they were used in derivative works, the population built up an “immunity” to them.

    End factoids. Personal note:

    I’ve lately gotten into 50’s era music. Great stuff. This strange thing happens where it’s in the musical idiom I understand, because it’s part of my musical culture, but it’s alien enough from what I’ve been listening to that I experience it as being completely fresh. Retro innovation!


  3. I find this one of the most interesting questions we can ask.

    I always, first, go to the idea of the music of the spheres, the rhythms of the universe, the vibration of all radiations and masses.
    But this is a very general metaphorical concept. The question of whether an alien, or even a dolphin, can understand, or can even be made aware of, our idea of what music is, much less why one human’s tastes differ from another’s, is about the deepest thing we can ponder.

    So many of these conventions seem based on the common threads of reality that we share with birds, other living things, and even the shifting of sands and orbits, which create evocative rhythmic sounds and tonalities. Yet the recent distinction between heavy metal and classical is disappearing before my eyes, and will have little relevance, even to humans, in only 100 years or so.

    One day we will have an answer to whether our idea of music is universal to living things or a very peculiar and esoteric concept, but this answer may well be inconceivable by what we now define as “human” capacity for thought. When we have truly transcended human cognitive limits, then the answer will be obvious; until then, not so much.

    On Earth, there seems to be some more universal basis for musical taste and meaningful sounds. High tones have certain connotations, low ones others. Volume has obvious connotations, or ARE they obvious to most earthlings? Are these universal to carbon-based life in the presence of oxygen, sunshine, and water, on the crust of a planet with about the same mass as Earth? Are there core principles that can also be applied to a solar being composed of energy organized by small amounts of mass (as opposed to our mass organized by small amounts of energy)?

    Any computer can be programmed to write music, but none has yet liked it, or anything else for that matter. I postulate that emotion, the preference for survival to start with, is critical to intelligence. For reason to apply, there must be a preferable outcome. When computers develop personal tastes, will we then have an answer to this question in an earthly context?

    Pandora’s algorithm does an awesome job of finding me new music that excites me; it also associates many of my old favorites with music I’ve always hated that is liked by people who like my favorites. Pandora “understands” music in ways that I haven’t even tried, and it keeps getting better at mediating this human social experience, but can it tell music from random noise I might like? Can it define music?

    Some while ago I noticed that it is impossible to knap stone tools without using rhythm; keeping time also makes pounding a mortar and pestle more effective and tolerable. Are these fine motor movements that need rhythm as unique to us as the precision grip and fingernails that are not at all claws? Rhythm is part of human nature, and of most animal locomotion, but is its perception and enjoyment not likely to be universal? Or is it a dialect of Homo spp. primates only?

    For most of us the definition of music, as opposed to noise, is quite a bit like the judicial definition of obscene material: I know it when I see (hear) it.

    What is universal? What is esoteric and peculiar? What is music?

    I think the question of how universal is music strikes at the core of the human quest(ion)(s).
