The Act in Interactive

Yesterday this year’s Interactive Fiction Competition (IFComp) drew to a close. Seventy games were entered. My entry (“The Only Possible Prom Dress”) finished at #8. Privately I had been betting on #10, so I have nothing to complain about.

Anybody can be a judge in the competition, but of course the judges are drawn from the tiny online community made up of people who already have some interest in and knowledge of the medium of interactive fiction (IF). The only rules are, you have to rate at least five games for your votes to count, and you can’t rate games that you wrote or beta-tested.

I had started trying out the entries, with the intent of voting, but I quickly grew discouraged. I ended up not filing my ratings. Nor did I comment publicly on what I was encountering. I didn’t want to look like a grouch — and of course a few of the authors might be rating my game, so I didn’t want to be a victim of retribution.

In what follows, you need to bear in mind that I’m biased. Naturally I think my game was the best, but that’s not the bias I’m referring to. In no particular order, my two biases are as follows. First, I’ve written a lot of conventional fiction. I have a pretty clear idea, or at least some well-developed opinions, about what makes for a good story, or doesn’t. Second, the games I’ve written (“Not Just an Ordinary Ballerina,” “Lydia’s Heart,” “April in Paris,” “A Flustered Duck,” “The White Bull,” “Mrs. Pepper’s Nasty Secret,” “Captivity,” and now “The Only Possible Prom Dress”) are parser-based. I like and understand parser-based games, but they’ve fallen somewhat out of favor.

Parser-based IF was, at one time, the totality of interactive fiction, but in recent years it has been overtaken in popularity by what’s known as choice-based IF. The difference, briefly, is that in a parser game the user/reader/player has to type commands in order to move the story forward. You don’t know which commands will work and which won’t, so you have to think. In a choice-based game all you have to do is click on links. A monkey could play a choice-based game. Quite possibly a human player would be more likely than a monkey to reach the story’s happy ending, if there is one, so we can’t say no thought is ever required. But in a choice-based game, if an option is not visible in the browser window as a clickable command, the option does not exist. In a parser game, you’re encouraged to try oddball commands. You have to engage with the scenario and try to imagine what may work.
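The mechanical difference can be reduced to two loops. Here is a rough sketch of my own in Python — not code from any actual IF system — just to make the contrast visible:

```python
# Parser model: the player types a free-form command, and the game tries
# to match it against verbs it understands. Commands the player hasn't
# been shown may still work, so experimenting is rewarded.
def parser_turn(command, handlers):
    verb, _, noun = command.strip().lower().partition(" ")
    handler = handlers.get(verb)
    if handler is None:
        return "I don't understand that."  # the player has to keep imagining
    return handler(noun)

# Choice model: every available action is enumerated as a clickable link.
# If an option isn't on the screen, it does not exist.
def choice_turn(choices, picked):
    label, handler = choices[picked]  # clicking just indexes into a list
    return handler()

# A one-room toy world, to show the asymmetry.
handlers = {
    "look": lambda noun: "A dusty attic. A trunk sits in the corner.",
    "open": lambda noun: "The trunk creaks open." if noun == "trunk"
            else "You can't open that.",
}
choices = [
    ("Look around", lambda: "A dusty attic. A trunk sits in the corner."),
    ("Open the trunk", lambda: "The trunk creaks open."),
]
```

Nothing profound here, but it shows the point: the parser’s vocabulary is open-ended and hidden, while the choice list is the entire universe of possible actions.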

Of the top ten finishers in the comp, only three were parser-based — numbers 5, 6, and 8. As it happened, none of the top three finishers was among the flock I had tried during the judging period, so today I had a look at them. Given my poor opinion of the games I had tried, I worried I might be giving choice-based games short shrift. The big winners might be brilliant. And even if I felt no deep admiration for any of them, it would be interesting to find out what the judges were thrilled by. To be crass about it, maybe next year I can do better if I know what gets rated as high-quality.

All three (“The Grown-Up Detective Agency,” “The Absence of Miriam Lane,” and “A Long Way to the Nearest Star”) are polished. Their user interfaces are attractive, and the prose is error-free. (A post-comp tester has found an embarrassing number of typos in my game, which will be fixed in the post-comp release.) But to my way of thinking, all three games are dull. In all three, I found myself poking around looking for some reason to be engaged with the story, or even some way to move the story forward. In none of the three did clicking on dozens of links lead to more than a tiny pinch of rising action.

I had rather assumed that one of the things judges would respond well to would be fresh scenarios. And indeed, there were detectable bits of freshness in “Grown-Up” and “Absence,” but neither of them swatted the ball out of the park. Grounded out to shortstop? Maybe that’s too negative. Let’s leave the metaphor aside and move on.

Briefly, “Grown-Up Detective Agency” is a private-eye story about a missing man. I have no idea what has happened to him, because I haven’t finished reading (and I may not bother). The science fiction angle is that the detective, a 21-year-old woman, is somehow accompanied by her 12-year-old time-traveling self. The time travel is the interesting element in the story, but not only does the author fail to explain it, the characters themselves don’t seem to be curious about how the girl has jumped forward by nine years. The dialog between these two versions of the same character is fun, and it’s written in the form of a movie script, so it’s snappy and believable — but meanwhile the story goes nowhere.

“Absence of Miriam Lane” is also a mystery with some sort of fantasy or science fiction element. Something seems to have gone wrong with the light. But the action consists of wandering around in a house looking at things — the piano, things tacked to the refrigerator, a gap on a bookshelf where a book has evidently been taken away — none of which provides even a meager clue about what has happened with the light. To add to the boredom, the things in the house are not described in detail. When the reader/player is interacting with objects, one would naturally hope that the objects would be described in loving detail. Alas. Here again, after poking around for half an hour and getting nowhere, I’m disinclined to go on. What would be the point?

“Long Way” is a hackneyed trope of IF: You’re the only living being on an abandoned spaceship. As in the first and second place winners, we’re plunged into a mystery of sorts. What has happened to the crew of the ship? The ship’s AI is available for conversation, and it seems to have something to hide, as it won’t let you into certain of the rooms. But after exploring the available rooms in a fairly exhaustive fashion, I have found no way to move the story forward. The doors to the crew’s quarters are locked, and I have no way to get into any of them. Adrift in deep space with terse descriptions of objects and no clear objective, other than to be polite to the AI on the off-chance that it has killed its own crew — but the implementation is smooth, and the conversations with the AI are rather interesting.

In sum, it seems to me that the judges tend to be impressed by the games’ presentation, not by the content. “Detective Agency” and “Absence” both make good use of graphics, and “Absence” has a very effective music track that I allowed to loop for 20 minutes before it started to annoy me.

What I suspect (I could be wrong) is that none of these authors has written much conventional fiction. They have, in each case, an idea for what could be developed into a good story, but they haven’t grasped the brutal mechanics of how to move the story forward. This is a difficult challenge in any kind of IF (and in conventional fiction, for that matter), but it helps if you give reader/players some fresh goodies on a fairly frequent basis so as to keep them engaged. Richly written text would be a good place to start.

It’s also possible, if we’re to judge by the effusive comments made in the forum by some of the judges, that many of the judges don’t read conventional fiction. The standards by which they judge may not be fully informed.

Even with good text descriptions, clicking on link buttons is, inevitably, less engaging than typing commands. Also, the choice-based user interface generally pops up a new window of regrettably sparse text each time you click on a link. A parser game scrolls the text up off the screen, so you feel, if only obscurely, that you’re engaged in an ongoing process, not being spoon-fed disconnected bits.

But that’s the fashion. Maybe next year I’ll enter a choice-based game in the competition. Finding a good source of graphics might be tricky, though. For better or worse, I’m a writer.

Posted in Interactive Fiction

Easley We Roll Along

If Wikipedia is to be believed, Easley Blackwood Jr. (born 1933) is still alive. We should all be so lucky! Forty years ago I interviewed him for an article in Keyboard. He was so articulate that the interview ended up as an “as-told-to” feature in his voice.

The occasion of the interview was the release in 1980 of his LP Twelve Microtonal Etudes for Electronic Music Media. This was a self-published LP. I believe it was later reissued as a CD, but I can’t find my copy of the CD; all I have is the LP. The audio is available on YouTube.

What you won’t find on YouTube is Blackwood’s detailed explanation of how the recording was done. To learn about it, you’ll need to download the scan I just uploaded of my 1982 article. As a bonus, I tossed in MIDI files of the etudes. I didn’t create these files, and I no longer remember who sent them to me. I can’t testify to their accuracy, but I may sit down and try orchestrating a couple of them, because I’m curious. I also have a complete scan of the score. It’s out of print, but it’s still under copyright, so I won’t post a download link here. Offering a free download would be entirely illegal. But if you can track Blackwood down and send him fifty bucks, maybe he’ll give you permission to download it. In which case, let me know.

I can’t help thinking it’s odd that so few composers have explored the harmonic resources that are to be found in these scales. Three factors may be at work. First, to get really good renditions of any of these scales, you have to use synthesizers, and there is still, I’m sure, a bias against composing for synthesizers in the university music departments where “serious” composers get their training. Second, even if you’re set up to make microtonal music on your computer (as I am), finding the notes of the scale on a conventional 12-note-per-octave MIDI keyboard is a bit of a mind-bender. And third, once you’ve mastered all that, you still have to learn how chord progressions can work in the scale you’ve chosen.
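To make the mind-bender concrete: in an n-note equal temperament (Blackwood’s etudes cover the temperaments from 13 through 24 notes per octave), each scale step is 1200/n cents, and playing step k from a 12-note-per-octave keyboard means finding the nearest conventional key plus a retuning offset. Here is a rough sketch of the arithmetic in Python — my own illustration, not Blackwood’s working method:

```python
# Frequency of step `step` in n-note equal temperament, relative to a
# base pitch (middle C by default): each step multiplies by 2^(1/n).
def edo_frequency(n, step, base_hz=261.626):
    return base_hz * 2 ** (step / n)

# Map a scale step onto a conventional 12-EDO keyboard: which key is
# nearest, and how many cents of retuning that key then needs.
def keyboard_mapping(n, step):
    cents = step * 1200.0 / n          # pitch above the base note, in cents
    nearest_key = round(cents / 100)   # nearest 12-EDO semitone
    offset = cents - 100 * nearest_key # retuning required, in cents
    return nearest_key, offset
```

In 17-note equal temperament, for instance, the first step lands about 29 cents flat of the nearest semitone — exactly the sort of discrepancy that makes playing these scales from a standard keyboard so disorienting.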

Blackwood’s goal was to address this question. The National Endowment for the Arts gave him a grant to do exactly that.

The technology he used was, looking back on it from 40 years on, terribly primitive. He mapped out the chords on an odd beast called a Motorola Scalatron, which was never sold publicly. But the Scalatron had no choices of timbres, so it wasn’t suitable for doing a recording. The recording was done with a monophonic Polyfusion modular synth, one instrument line at a time. That process itself puts him in a league with Wendy Carlos (who has also done some wonderful composing with microtonal scales, by the way).

I’ve done some composing with microtones. A couple of brief piano pieces (not recorded on a physical piano) are in this blog post. I also have a CD on Bandcamp that’s mostly microtonal pieces in various scales. But this isn’t about me. I just wanted to let people know that the Blackwood article from Keyboard is now available online.

God knows there are some other articles from those days that ought to be kept available. I have a complete collection of back issues, of course, but I’m not planning to do any more scanning — not unless there’s some specific article you want. I believe Tom Rhea has turned his Electronic Perspective columns on synthesizer history into a book. If I can find out anything about that, I’ll put the link here.

Posted in microtonal, music

Ghost Stories

No, I’m not going to tell you how to write a story with ghosts in it. What interests me are the ghostly ideas that drift, unexamined or even unnoticed, across the pages of even the most realistic fiction. I don’t know whether it’s possible for a story or novel not to be haunted, plagued, possessed by ghosts. Maybe my offhand observations about how that works will give you a fresh way to look at a story you’re writing, or a story you’d like to write.

In his book Sapiens, Yuval Noah Harari makes a case for the idea that much of what we humans think of as real and important is, in fact, fantasy. There’s no such thing as money. There are no such things as laws. If we didn’t have, as a species, an overwhelming capacity to agree with our fellow humans that such things are real, society would collapse overnight.

Not all of our agreed-upon cultural fantasies are as stubborn as money and the rule of law. Some fantasies change from century to century. Consider, for instance, the concept of honor. Honor was once a vital part of European culture. Duels were fought over it. Men died defending their honor, and if you were dishonored, your life would change in drastic and unpleasant ways. Everyone around you would treat you differently because you had been dishonored.

Today we still call a judge “Your Honor,” but in daily life few of us place a high value on honor. The desire to be respected still exists, and we can still get angry if we sense that we’re being disrespected, but only in a few isolated subcultures are you likely to fall victim to violence if someone feels you’ve disrespected them. If anything, defending your honor has come to seem childish. If you’re insulted, laughing it off or just ignoring it is felt by many of us to be the mature thing to do.

Nonetheless, our culture is still awash with fantasies about what’s important, true, or unquestionable. The fantasies are different than they once were, but they’re just as powerful. Today, for instance, many people place a high value on individual freedom. If I tell you that your freedom is largely a fantasy, you may even be upset that I dare question it.

These shared fantasies — call them myths if you like — are what give shape to our fiction. Sometimes we write novels to try to debunk a cultural myth that we feel is misguided or dangerous. Sometimes we write novels to try to support and promote a myth that we feel, consciously or subconsciously, is on shaky legs. And sometimes a novel is permeated with a myth that the author simply accepts as true.

And then the wheel turns, and the book may not be relevant to a new generation of readers.

This happened to Horatio Alger. In the late 19th century his books were very popular. Today they’re all but unreadable. Alger’s theme, which he promoted in one book after another, was that a young man who was honest and virtuous would ultimately be rewarded. The world, in the person of a rich older man, would eventually see the young man’s inner worth and would reward it.

I’ve never studied or even read Alger, but it seems to me he was defending a cultural value that he felt was under attack. He must have observed that people didn’t always see much point in being virtuous, because virtue so often went unrewarded. So he set out to prove to the world, or at least to young men, that the right thing to do was to be honest, virtuous, and trustworthy, even if there seemed to be no immediate advantage in it, because in the end their fine qualities would bring them the reward they deserved.

Today we’re much too cynical to take that notion seriously. We’re more inclined to say, “Virtue is its own reward.” Meaning, you may never gain any external advantage by being virtuous. All you’ll get is the inner knowledge that you’re a good person. And that’s supposed to be enough. That’s the new myth.

And then there’s my favorite bad author, Erle Stanley Gardner. His Perry Mason mysteries sold like hotcakes in the 1930s and ’40s, but today most of them are out of print — and not just because Gardner’s prose was wooden and his plots preposterous. His books are redolent of a myth that was, I’m sure, quite believable at the time, but we can no longer buy into it.

In Gardner’s imaginary world the courts could be relied on to mete out justice. Once the murderer was identified (by Perry Mason’s clever tactics), the justice system could absolutely be relied on to slam the culprit in prison and probably send him to the gas chamber or the electric chair. But by the 1970s that myth was no longer believable. We began to see mystery novels in which the knight in tarnished armor had to enact swift retribution by killing the bad guy himself. Taking the bad guy away in handcuffs was no longer seen as a reliable way to rid the world of evil-doers.

Gardner’s respect for the judicial system shows itself in a second way. The prosecutors and police in his novels are almost always wrong, because that gives Perry Mason or Donald Lam the scope to fight for justice. But while Gardner’s cops are overzealous and sometimes dumb, they are never corrupt. Gardner was a lawyer, so he had a basic belief in the machinery of justice. When it failed, as it often did (even in the real world in which he lived — Gardner devoted considerable resources to helping people who had been wrongly convicted), the failure was an individual failure on the part of a prosecutor or a cop. It was never a failure of the system.

The idea that the justice system could generally be relied on to work was a myth, but it was in Gardner’s blood. He never questioned it.

If you’re a writer, I invite you to consider this. What myths and fantasies about the world have permeated your fiction because you never questioned them? What cultural beliefs are you trying to reinforce because you sense that they’re under attack?

Love conquers all? (It doesn’t.) Individual freedom is a supreme value? (It isn’t.)

How about, “Money is the root of all evil”? A lot of people think that. Some authors, including one of my favorites, Donald Westlake, tend to portray rich people as immoral, self-absorbed, and basically contemptible. But I’m fond of a quote that I’ve seen attributed to blues singer Bessie Smith: “Honey, I been rich and I been poor. Rich is better.” Smith had been dirt-poor as a young woman, and she knew what she was talking about. Westlake was probably aware that the view he was peddling was one-sided, but it’s an enduring myth, and the plots of his Dortmunder novels rely on it.

One of the reasons I’m having trouble generating any enthusiasm for writing another novel is that I no longer take any of the myths of my culture seriously. If you don’t have a fixed and passionate idea about what makes for a worthwhile human life and an idea about how such a life can perhaps be achieved, what are you going to write about?

Posted in fiction, society & culture, writing

Heretics R Us

In 325 A.D., the emperor Constantine arranged a conference among the Roman world’s Christian bishops. Accounts vary, but about 300 bishops traveled from near and far to the city of Nicaea, in what is now Turkey. Constantine paid for their travel and lodgings, and showed up himself to lead the first session.

This was an event of great historic importance. The bishops issued a document now known as the Nicene Creed, whose purpose was to codify Christian doctrine, and which (in a slightly altered form issued 50 or 60 years later) provided a firm foundation for the Christian religion throughout antiquity and, among Catholics, down to the present day.

But the bishops didn’t know what would happen in the future. They gathered together for reasons that seemed cogent to them at that time. And in fact one of the heresies they took pains to stamp out, Arianism, was favored by a couple of Constantine’s heirs. Things might have gone quite differently.

The question that interests me is, why did the bishops think it was worthwhile to travel hundreds of miles in order to participate in this conference? Travel in the ancient world was common enough, but it was slow and sometimes difficult. What did they think they were trying to accomplish?

On a conscious level, they were trying to set forth the truths of Christian doctrine because they were certain that adherence to the truth was somehow a matter of importance to individual believers, who might all too easily be led astray, thereby causing irreparable damage to their precious eternal souls or whatever. But that explanation simply won’t wash. Given that the entire doctrine of Christianity, in all of its variants and offshoots, was nothing but a free-floating fantasy, with not a shred of evidence to anchor it, a sociologist from Mars would have to wonder, what are these people really up to?

This is a question that can’t quite be asked from within the cultural framework of Christianity — not even from within modern Christianity, and certainly not from anything in European/American culture prior to the 20th century. Only an atheist can see the real sociological issue. If you think Christian doctrine has even a speck of actual relevance to you and other people, you won’t be able to ask the question in an intelligible way.

The bishops were men of power and prestige. And like men (yes, it’s mostly men) of power and prestige throughout history, they wanted to solidify and if possible augment the basis of their power and prestige. Yet the entire basis of their power and prestige was (and remains) a gimcrack fantasy. A flimsy, floppy fairy tale. There being no truth in it, insisting on a unified “truth” was all the more important! We can’t have people thinking God is a glowing pillar of unicorn farts, can we?

If everybody was free to have their own variety of Christian belief, free to reject any idea that they didn’t find personally appealing, it would be all too easy for dissenters to just tell their local bishop to go fly a kite. What was at stake was the bishops’ reputations as men of importance, men whom you had better pay close attention to if you knew what was good for you.

They voted on what it meant for Christ to be the Son of God. There were two prevailing theories about this. One held that, like a human son, Christ had been created by God. That was the Arian heresy. The orthodox view, on the other hand, was that Christ had always existed. A perplexing question, I suppose, but if there had been any such entity as God, the bishops wouldn’t have needed to vote, would they? The true answer to the question would have been made apparent to them somehow — and to everybody else, including the Arians. An actual God, about whom facts could be known, would not have been so sloppy as to leave the question open to debate.

No, the bishops weren’t trying to nail down a religious truth. Truth wasn’t even on the table, though of course they thought it was. What they were really doing was creating a big baseball bat with which to bludgeon dissenters, because dissenters, if tolerated, would soon decimate their flocks. People would all go their own way! The bishops would have to go back to being shoemakers or something equally unattractive.

The entire strategy of monotheism is to squash dissent. By violence, if nothing less will avail. The sin of Adam and Eve was to think for themselves rather than obeying orders.

This isn’t the only reason why the human species is a dreadful failure. It isn’t even the biggest reason. But it’s very, very sad.

Posted in random musings, religion, society & culture

Processed Art Substance

A couple of weeks ago I participated in a Zoom panel discussion about the use of AI (artificial intelligence) software in the arts. This was hosted by Nick Batzdorf, who is the content director at the Synth & Software website, and it’s now available (or at least the audio is) on that site.

It was an interesting discussion, and a number of points were touched on. I got a little irate about the need to know music theory. I don’t like it when I get irate; I sound like an old guy shouting, “You kids get off of my lawn!” What upset me, I think, was that while Olivio Sarikas, who had a lot to say during the podcast, is a fan of AI, notably of an image-generating service called Midjourney, he doesn’t seem to know much about music.

I have no firm opinion about Midjourney except from a philosophical perspective, but it’s clear to me that the process of generating appealing images using software tools is different in some basic ways from the process of generating appealing music using software tools. From the word ‘go,’ images have content. Music data is entirely abstract. And because the millions of starter images uploaded to Midjourney already have content that the human eye and brain can interpret in meaningful ways, the machine doesn’t have to do as much to fit things together. To be specific, the machine doesn’t have to know what the meaningful content actually is. Since music is an abstract language, it requires a higher degree of human perceptiveness and intervention.

I’m pretty sure a human who is using Midjourney is also engaged in finding the most meaningful products that the AI has generated — picking one image that seems especially apropos and tossing out five dozen others that don’t quite do the job. Or at least I hope so. But I do maintain that an AI that combines images in various ways is a lot more likely to come up with something that is meaningful in human terms than an AI that is tasked with combining musical ideas.

But that’s not what I want to talk about.

Early in our discussion, Olivio was talking about how there’s a greater need today for images that can be created quickly. He referred to the use of cameras (you remember cameras, I’ll bet). Once upon a time, you had to take the film into a darkroom, develop the film, make contact sheets, choose the images you wanted to enlarge, go back into the darkroom to print the enlargements, then halftone them and send the halftones off to the printer. This could take a week. In the internet age, people need to be able to create and upload an image in half an hour.

That’s true, and he framed it, quite correctly, in economic terms. Corporations want to reduce costs. If a process is labor-intensive, it’s also expensive. If a software tool can produce striking imagery in half an hour, and if no one with an expert eye has to be employed, there’s a significant cost savings.

But is the result art? Well, no, it’s not.

Art is the process that the artist goes through, mentally and emotionally, in creating the work. I’ll repeat that: Art is the process. It’s not the outcome or the product. Art is not and can never be a product. If you have a brilliant piece of software that can produce eye-catching imagery at the push of a button, what it’s producing is not art; it’s processed art substance. It’s a fraud.

I don’t like the word “spiritual.” I try never to use it. But I think it’s good shorthand in this discussion, so I’m going to set aside my reservations for a minute or two. When the person tasked with producing the art, be it image, music, or what-have-you, is not intimately engaged from moment to moment in every decision about what is to go into the work of art, the person at the helm of this metaphorical boat is not an artist at all. The person at the helm is spiritually bereft.

And what’s worse, anybody who then encounters the work of “art,” no matter how compelling it might appear to be, is also spiritually deprived. Cheated. Denied the full experience of their humanity.

I once asked Wendy Carlos if modern music technology had made it easier to compose music. She said no. She maintained, I think correctly, that the process of creating new music is as difficult now as it was before. So I think Wendy would agree with the point I’m making. The process of creation is an interior process in the artist, and requires both thought and feeling. You can’t offload it to a machine.

Also, it takes time. If you produce art too quickly, either because you’re under financial pressure or because you’re just too lazy to do it properly, you’re not an artist at all. Maybe a bullshit artist, but nothing beyond that.

Also, there’s no excuse for not knowing the theory, the aesthetic underpinnings of your medium, whatever they happen to be. That’s not quite the same thing as saying all composers of music need to know the conventional harmony theory that was developed in Europe between the 16th and 19th centuries. There are many theories about music, and Western harmony is only one of them. But if you’re planning to write music that uses chords and melodies within the European tradition and you think you don’t need to know harmony theory — or worse, if you imagine that harmony theory will somehow inhibit your awesome creativity — you’re just being stupid. There’s no reason to sugar-coat that. I said “stupid,” and I meant it.

But that’s just “you kids get off of my lawn.” My central point is that art is not a commodity. It’s something that happens within the artist. You can’t offload it to a machine.

We have to acknowledge that, unlike the visual and literary arts, music has always been a collaborative art. Beethoven did not play all of the instruments in the orchestra, nor did he build the instruments! When I compose music in the computer, I’m quite willing to collaborate with software developers and sound designers I’ve never met, whose names I may not know. And yes, I sometimes use loops. But the aesthetic decisions are always mine, and they may take time. A loop may need to be edited in some way in order to be usable. I need to know how the editing tools work, and I also need to have some inarticulate feeling about what’s working and what’s not.

If you’re using a drum loop as a shortcut because you don’t know how to make good beats, you’re cheating. After finding a good loop, you may need to work with it in arbitrarily complex ways in order to make it fit within your own artistic vision. Filtering. Trimming. Quantizing. There are dozens of ways to work with a loop, and if you’re composing into a computer you really do need to know them all. Or the loop may be perfect as is, but you’ll still need to surround it with other carefully chosen components.

Listening closely and learning how those components fit together takes time — and not just a few hours or a few days. It takes a lifetime.

Any of your musical experiences, as either a listener or a player, may become relevant as you listen to your mix for the 20th time. If your bank of past musical experiences is slim, your music will be shallow. If you imagine you can slap together some loops and then upload the finished track in an hour or two, I don’t even want to talk to you. You’re not a musician. Go away. Or at the very least, get off of my lawn.

Posted in music, random musings, technology

Let Me Count the Ways

I first encountered a synthesizer in 1975. I had just been hired as an assistant editor at a startup magazine called Contemporary Keyboard. (The name was later changed to Keyboard.) My boss, who was living down the walkway in the same apartment complex, had an ARP 2600.

At the time, that was pretty much a state-of-the-art instrument. You could make all sorts of sounds with it, either by wiggling the knobs and sliders or by plugging in a few patch cords to change some of the internal signal routings. What excited me was the fact that sound had suddenly become plastic. Unlike a piano, whose sound is pretty well set in stone, the sound of a synthesizer is whatever you make it. Any synthesizer has, let’s admit, a limited sound palette, but the limits are extraordinarily broad.

Fast-forward. Today I have on my hard drive about 150 synthesizers. Not kidding; that’s an actual count. With, I think, not a single exception, all of them are immensely more complex and powerful beasts than the ARP 2600. I have, in fact, a very nice software emulation of the 2600. While basically authentic in design, it has a number of features not found on the original hardware instrument. Oh, wait — there it is now!

I’m not going to try to explain this technology, not today. Either you know about it, or you don’t. Today I’m contemplating a conundrum to do with creativity. The conundrum is, what is one to do with this magnificent mass of musical muscle?

When you’re young, it’s easier to become passionate about aiming for some particular musical goal and pouring your heart into it. There are, I think, three reasons for this.

First, you don’t know as much about music as you’ll know 40 or 50 years down the line. You may know hip-hop, or blues, or punk, or folk, but that may be all you know. When you set out to create something new, your choices are narrower and more straightforward. Later, you’re discovering new kinds of music and exploring them to learn what they’re all about.

But I’m now over 70, and I spent 20 of those years reviewing records every month at Keyboard. I’m familiar with Jon Hassell, Kraftwerk, the Residents, Weather Report, Robert Rich, Wendy Carlos, and quite a lot of other artists who have used synthesizers in various musical styles. I know what I’m good at and what I’m not so good at compositionally. I know a lot, but as a result, there’s not so much left for me to explore.

Second, when you’re young you just have more energy, period. You get excited about stuff and stay excited more readily.

Third, when you’re young you quite naturally have some hope that you’ll achieve something — that you’ll become famous, or at least join a band and catch the eye of a few potential romantic partners. I no longer expect to accomplish anything by making music. It’s just something that I do.

A friend recently posted on Facebook some information about how thousands of new tracks are being uploaded to Spotify every single day. And who is going to sort through all that to find the good stuff? Musicians have vanishingly little chance anymore of getting noticed. It’s just not gonna happen, not unless you’re young, playing in a trendy style, and have the right haircut. And probably not even then.

Now about those 150 synthesizers. Each of them has editable parameters. Most of them have hundreds of preset sounds, which are provided when you buy and download the instrument — and all of the presets can be edited in thousands of arbitrary ways. To be sure, there’s a lot of redundancy in the sound libraries of various instruments. You’ll find wobble basses, screaming leads, gauzy pads, and Wurlitzer electric piano imitations galore. Even so, it’s safe to say that I have instant, click-of-the-mouse access to several million different sounds.

Given any five of these sounds, one could compose and record literally millions of different pieces of music. Frank Zappa once remarked that there are only twelve notes, but that didn’t stop Bach or Beethoven, nor Bill Evans and Thelonious Monk, nor the Beatles and the Beach Boys.
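For what it’s worth, the arithmetic holds up even with deliberately conservative numbers. Here is a quick Python sanity check; every per-synth figure below is an invented assumption for illustration, not an actual count of my library.

```python
# Back-of-envelope arithmetic for the claims above. The per-synth numbers
# are guesses for illustration, not a census of any real sound library.
from math import comb

synths = 150
presets_per_synth = 300      # "most of them have hundreds of preset sounds"
edits_per_preset = 50        # a very conservative count of distinct tweaks

presets = synths * presets_per_synth
sounds = presets * edits_per_preset
print(f"{presets:,} factory presets, roughly {sounds:,} reachable sounds")

# Number of distinct five-sound palettes, drawing from factory presets alone:
print(f"{comb(presets, 5):.2e} possible five-sound combinations")
```

Even before touching a single edit knob, the number of five-sound palettes is astronomically larger than the number of pieces anyone could record in a lifetime.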

The number of things I could do with the resources that are at my disposal is infinite. The conundrum is, what would make any one musical concept more worth pursuing than another? Sure, I can upload fresh files to my Bandcamp page. I have half a dozen CDs’ worth of material up there already, and I actually sold a digital download this year. To a friend. For $9.

Been there, done that. What’s next?

I did a whole “CD” of rearrangements of Beatles songs. (It’s on the Bandcamp page. It’s called Reimagine.) I’ve done some exploring of microtonal tunings. (On the Bandcamp page too — click on Werewolf Bathtubs and Forked Clarinets.) Concepts can be a useful way to organize one’s activity, but I don’t have any sterling ideas for fresh concepts that I’d like to explore. What I have are millions of sounds, lots of free time, a wide range of musical knowledge, not as much energy as I’d like, no expectations for the future, and no compelling vision.


You Have Been Eaten by a Grue

Having entered my new text adventure game (“The Only Possible Prom Dress”) in this year’s Interactive Fiction Competition (IFComp), I figured I really ought to check out some of the other 70 entries and rate them as a judge. Authors are allowed to do this; you just can’t rate your own game.

The majority of entries (around 50 of them, at a guess) are browser-based hypertext (branching) stories written in Twine, Texture, or something similar. I’ve never written a browser-based game, and I’m not sure why I would want to. It’s just not my thing. I write parser games. That being the case, I felt I ought to take a close look at a couple of dozen browser games in order to understand how I might rate them.

I’m not going to mention any games by name here, first because the judging is still in progress and I wouldn’t want it said that I was trying to influence the judges, and second because I don’t want to be too sharply critical of any of the authors, almost all of whom are younger and less experienced writers than I.

A couple of trends, however, are apparent and perhaps worth commenting on.

A broad swath of the entries are nightmare dystopian stories. Only two of the 25 I’ve looked at have actual zombies, but in my notes for a couple of others I commented that adding some zombies would actually have been an improvement.

A couple of stories are about escaping from (or failing to escape from) a bad impending marriage. Another story is about being depressed, sick, broke, and friendless. A story in which someone you love has died. A story in which you’re a social worker knocking on doors, but you’re accomplishing nothing — and that’s the whole story. A short story in which you’re having no luck talking to a real human in an automated telephone help line — and that’s the whole story. A Kafkaesque nightmare involving a nosebleed in a modern office setting. Something with hungry creatures pursuing you. A story where you’re being pursued through Scotland by people who evidently want to do you harm.

I find myself wondering why so many of the authors are attracted to grim, unpleasant topics. It’s not that I’ve never written a nightmarish, downbeat story. In my story collection (The House of Broken Dolls) two of the 15 stories are absolutely negative, with no glimmer of hope or joy. Certainly, bad things happen in most of the other stories — if nothing bad is happening, it’s not much of a story. But one wants to see a glimmer of hope. A lead character who somehow triumphs over the awful stuff.

Possibly the branching story lines of browser-based interactive fiction deepen the problem. A story may have several endings, some of them grim and others perhaps hopeful or even triumphant — but you have to find the happy ending by making choices along the way, choices whose outcome is seldom obvious. That is, you can’t tell whether you’re steering toward the happy ending. Often you can’t tell whether you’re steering the story at all. What appear to be separate choices may all lead to the same following scene. The only way to know for sure is to go through the story several more times, making alternative choices. But who wants to wade through a depressing story several times in the hope that something good will come of it? Not me.

Six or seven of the authors failed to put their name on the opening screen of their game. A few of them couldn’t even be bothered to rename their game file before uploading it; several of the game files are named “index.html.” This would seem to indicate a certain lack of professionalism. But after all, these people aren’t professionals. It would be a mistake to expect too much in the way of polish.

One of the questions I’m asking myself as I go along is, “Is this a story I would recommend to a friend as worth reading?” So far, I haven’t found a story like that. Four or five of them have points of definite interest, but none of them has me thinking, “Wow, this is really good!”

I wish I could tell you that interactive fiction is alive and well, but I’m afraid such an assessment would be overly optimistic. Maybe the computer delivery medium is just not attractive to good writers for technical reasons (though really it has a lot to offer). Or maybe it’s because an aspiring author can at least dream of greater visibility and hope to make a few bucks by uploading a conventional story to Amazon.

The one thing I will say about my new game is that it’s intended to be fun. There’s a vain and self-absorbed ghost in it, but no zombies. In retrospect, maybe I should have included zombies.


The Great Red Spot

I’ve been re-reading James Gleick’s 1987 book Chaos. A lot of chaos theory is about mathematical abstractions, but it also has a very practical side. The Great Red Spot on the surface of Jupiter is still a bit of a mystery, but it appears to be a storm system that has somehow remained relatively stable for several hundred years, while the rest of the atmosphere of Jupiter swirls around it in fantastic disarray.

The essential point here is that islands of stability can arise spontaneously even in the midst of instability. We can see the same thing on Earth. A tornado doesn’t last nearly as long as the Great Red Spot has lasted, but one would naturally expect that such a thing as a tornado could never exist. How could a whirlpool of air come into being and then sustain itself for twenty minutes without flying apart?

Chaos defies our expectations. Gleick describes mathematically sensible objects that have a finite size, an infinite surface, and an interior volume of zero.
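If I’m remembering the book correctly, one of Gleick’s examples is the Menger sponge: divide a cube into 27 identical subcubes, remove seven of them (one from the center of each face, plus the center cube), and recurse on the 20 that remain. Assuming that’s the object in question, the arithmetic for a unit cube runs roughly as follows:

```latex
% 20 of 27 subcubes survive each iteration, each scaled down by 1/3.
V_n = \left(\frac{20}{27}\right)^{n} \to 0,
\qquad
S_n \sim C\left(\frac{20}{9}\right)^{n} \to \infty,
\qquad
\dim_H = \frac{\log 20}{\log 3} \approx 2.727
```

The object always fits inside the original cube (finite size), its surface area grows without bound, and its volume shrinks toward zero: finite, infinite, and zero all at once.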

Since I’m still puzzling over the unlikely idea that the rapid spin of galaxies is caused by a mysterious substance called dark matter (see my earlier blog entries Spin and Hubble Trouble), the similarity between spinning galaxies and the Great Red Spot was hard to miss. Galaxies are islands of order in a universe whose constituents, as far as we currently understand them, would not seem capable of producing such orderly phenomena.

There are deeper problems in cosmology than the spinning of galaxies. Don’t ask me to do the math, but my hazy impression of subatomic physics is that the world we live in can only exist due to the precise values of certain abstract numbers, such as the Fine Structure Constant (it’s about 0.00729735, if Wikipedia is to be believed). If that number were slightly larger or slightly smaller, no atoms. No molecules. No you and me.
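For the record, the constant in question is dimensionless: a pure number assembled from more familiar physical constants, which is why asking what would happen if it were slightly different is meaningful at all. In SI terms:

```latex
\alpha = \frac{e^2}{4\pi\varepsilon_0\,\hbar c}
\approx \frac{1}{137.036}
\approx 0.00729735
```

Because the units all cancel, there is no choice of measuring system that could make the number come out differently; it seems to be a raw fact about our universe.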

It seems inherently absurd to suggest that somehow the universe sprang into existence with all of these numbers precisely tuned so as to give rise to galaxies, stars, and you and me. Various proposals have been made to account for this. For instance, the Weak Anthropic Principle begs the question by saying, “Well, if it was any different you wouldn’t be here to ask the question.” Then there’s the multiverse theory, which posits an infinite number of complete universes, each of them perhaps exploding into existence due to the crunching up of a black hole in a parent universe. In each of these baby universes the values of things like the Fine Structure Constant would be random, so most of them would either collapse or dissipate in a vast gray cloud. But a few, like ours, would be suitable for life to come into existence, along with black holes so the universes can continue to propagate.

We will never be able to observe any of those other universes, however. The multiverse theory has the advantage that it can never be tested observationally. It’s a Just So story.

Our best current understanding of our own universe is that it sprang into existence about 13.8 billion years ago, has been expanding ever since, and will eventually (some trillions of years from now) become a cold dead place for all eternity, when all of the stars have burnt out. This is not a very cheerful prospect. There are some good observational reasons for thinking the Big Bang really happened. But why and how it happened, and how it gave rise to the Fine Structure Constant — there are no answers to questions like these.

The fact that there are no answers leaves the door open for me to suggest one.

To begin with, we don’t even know whether the universe is finite or infinite. We know there are distances beyond which our best telescopes cannot see, but that’s not quite the same thing. So let’s suppose for a moment that our universe is in fact infinite in extent and also chaotic. The part of it we can see, while unimaginably vast, is also infinitesimal compared to the whole. There is not, in this conception, any “whole” at all. But because all of the processes are chaotic, areas of stability will occasionally arise. In fact, an infinite number of areas of stability will arise.

The area we can see is an island of stability — the Great Red Spot writ large. Even basic characteristics like the Fine Structure Constant may be different in different parts of the universe. Somewhere beyond the horizon of what we can see with our telescopes, the rules of physics may change — because there’s no reason why they shouldn’t change. Nobody is in charge. No godlike entity has laid down the laws of physics and decreed that they shall always, everywhere, be the same.

It’s even possible that when we look at the most distant galaxies and see quasars, brightly burning objects that are quite unlike anything in nearby galaxies, we’re seeing regions where the laws of physics are a bit different. To be sure, the quasars we see today existed billions of years ago, because their light has taken that long to reach us (traveling at the speed light has in our region of stability, anyway), but the idea that the laws of physics that allowed quasars to burst into being are exactly the same as what we observe nearby — that idea is purely speculative. We can’t go out there and perform experiments.

The idea that the universe may be both infinite and chaotic removes a lot of the thorniest problems in physics, starting with dark matter and dark energy. Dark energy is the name for whatever it is (we don’t know what) that is pushing galaxies away from one another faster than would be expected, based on our current theories of how things are. But what if distant galaxies are just rolling downhill, as it were, in a universe where not even the number of physical dimensions of space is fixed?

Turn an aluminum mixing bowl upside down and pour a bag of BBs out on it. The BBs will roll down the sides of the bowl. If you were sitting on one of the BBs, you could be forgiven for imagining that some dark force is pushing the BBs away from one another, when in fact something else (gravity plus the curvature of the bowl) is drawing them apart. The gravitational force is outside of the bowl, and the trajectories of the BBs are determined entirely by the shape of the bowl.

To me, that feels like a more sensible way to look at it. And of course those two ways of looking at the expansion of the visible universe are not necessarily different! We just don’t have the terms with which to talk about this stuff. Nor do we have the apparatus with which to test it. Nor is it guaranteed that such apparatus could ever be built.

Modern cosmology and modern subatomic physics arose in the years between 1900 and 1940, give or take a few years. At that time, linear analysis of physical systems (the kind of analysis that Galileo and Newton did) was assumed to be capable of perfectly explaining all physical phenomena. We now understand that that’s not the case. Nonlinear, chaotic, fractal processes are everywhere in our world. Why should we imagine that the wider universe is simple?


Are We Board Yet?

I’ve always been fascinated by board games. Over the past three years I’ve acquired quite a nice collection of clever and colorful modern games. We’re not talking Parcheesi, Risk, or Monopoly, though I do have a few classic games on my shelf.

I’d love to have a weekly game night at my house. The trick is finding a few people who are keen to be there. If you’re not into board games as a hobby, you may not know what to expect. Somehow an email invitation doesn’t quite convey the essence of what awaits.

With that in mind, I’m assembling a few photos. If something here piques your interest, you know where to find me.

Let’s start with games that you could learn and enjoy pretty quickly — no massive set of rules to memorize. First up is Nova Luna.

Each turn in Nova Luna you acquire a tile, which you place on the table in front of you adjacent to one or more tiles you’ve already placed. Each tile has a few “tasks” on it, which tell you what tiles this one would like to be adjacent to. A task might be, for instance, one gold and two dark blue. When you complete a task, you put one of your markers on it to show it’s complete. The first player to place all 20 of their markers on their tile layout wins. The usual playing time is no more than 30 minutes.

Sounds almost too simple, right? But there are a couple of wrinkles. For one thing, the turn order is not fixed. The players have turn order markers on a central circular time track, and whoever is in last place takes the next turn. Each tile has a number between 1 and 7, and the more desirable tiles have larger numbers. When you take a tile, you move your marker ahead on the track by that many spaces. So if you grab a really useful tile you may have to wait while a couple of your opponents take two turns. If you take a cheap tile, you might get to take two turns in a row.
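The time-track mechanic is easy to sketch in code. Here is a minimal Python illustration; the player names and tile costs are made up, and the real game’s tie-breaking rule (marker stacking order) is ignored.

```python
# Minimal sketch of Nova Luna's turn-order mechanic: whoever is furthest
# back on the time track acts next, and each tile's cost moves you forward.
# Names and costs are illustrative; real tie-breaks are not modeled.

def next_player(positions):
    """Return the player whose marker is furthest back on the track."""
    return min(positions, key=positions.get)

def take_tile(positions, player, cost):
    """Taking a tile advances the player's marker by the tile's cost."""
    positions[player] += cost

positions = {"Ann": 0, "Bob": 1, "Cal": 2}

take_tile(positions, "Ann", 7)   # Ann grabs a juicy 7-cost tile
print(next_player(positions))    # Bob (at 1) is now furthest back

take_tile(positions, "Bob", 2)   # Bob takes a cheap tile...
print(next_player(positions))    # ...and at 3 vs. Cal's 2, Cal acts next
```

The design consequence is that every tile purchase is also a bid on tempo: a cheap tile can be worth more than a good one, simply because it buys you an extra turn.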

Another of my favorites is Century Spice Road.

In Century Spice Road you’re collecting and upgrading “spices,” which are represented by colored cubes. Saffron, turmeric, cardamom, and cinnamon? It hardly matters. Each turn, you can play a card from your hand to improve your collection of spice cubes — trading in one brown for a green, a red, and a yellow, for instance. As the game goes on, you’ll acquire more powerful cards from the central row and add them to your hand. A lot of strategic thinking is involved, but the things you’re manipulating — cards and cubes — are easy to understand.
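The trading mechanic amounts to multiset arithmetic, which Python’s Counter happens to model neatly. This is a toy sketch; the card contents are illustrative, not taken from the actual deck.

```python
# A toy sketch of a Century Spice Road trade, using collections.Counter.
# The specific card (one brown -> green + red + yellow) mirrors the example
# in the text; it is not a claim about the real deck's contents.
from collections import Counter

def play_trade(caravan, cost, gain):
    """Apply a trade card: pay the `cost` cubes, receive the `gain` cubes."""
    if any(caravan[color] < n for color, n in cost.items()):
        raise ValueError("not enough cubes to play this card")
    return caravan - Counter(cost) + Counter(gain)

caravan = Counter(brown=2, yellow=1)

# "Trading in one brown for a green, a red, and a yellow":
caravan = play_trade(caravan, cost={"brown": 1},
                     gain={"green": 1, "red": 1, "yellow": 1})
print(dict(caravan))  # brown: 1, yellow: 2, green: 1, red: 1
```

The strategic depth comes from the fact that each trade card stays in your hand and can be replayed, so the value of a card is really the value of the conversion loop it enables.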

Soon you’ll be able to trade some cubes for one of the cards in the left-side row. Those are the cards that are worth points, but their value is variable. Some are easy to acquire and are worth only 6 or 8 points. Others are worth as much as 20 points, but you’ll need a hefty set of cubes to grab one of them. When someone has acquired five point-value cards, the game ends.

A recent addition to my game shelf is Momiji.

The idea in Momiji is that it’s autumn, and the leaves are falling. You’re strolling through the Imperial Garden in Japan, collecting beautiful fallen leaves. There are six types of leaves — that is, cards — and you have to add them to your collection in a certain order, following certain rules. You can also collect acorns (the little coin disks) and spend them, utilize your landscape abilities, and acquire task tokens that will be worth points if you fulfill the task. There’s a fair amount of competition for the desirable leaves in the offer pile. Very satisfying game, and not long or drawn-out.

Instead of cards, Azul uses satisfyingly chunky tiles.

In the first phase of a round, players acquire tiles from the central market and place them on the left side of their mat. In the second phase, tiles are moved over to the grid on the right side, and points are earned. The game ends, usually after five or six rounds, when someone has completed a horizontal row of five tiles on their grid — but if you’re behind in points you may want to choose different tiles from the market so as to avoid completing a horizontal row until you have a chance to catch up.

The rules for how you acquire tiles and how you place them on the grid are easily explained, but planning is essential. There’s some competition, because you can see what tiles your opponents are trying to acquire, so you may be able to pick up some tiles that will thwart their ambitions. Simple procedures, satisfying game-play.

These four games are easy to learn and not too long, but challenging and fun. In a few days I’ll put up some pictures and descriptions of games that are slightly more complicated. Stay tuned for more!



I’ve memorized a fair amount of piano music. I can sit down at the piano and play for an hour, going through ten or fifteen pieces, without opening any of the books of sheet music that are stacked next to the piano.

What I’ve found as I get older is that the process of playing a memorized piece is becoming less secure. As I slide down the slope from 70 toward 80, the mistakes I make at the keyboard are becoming more frequent. They’re also a great deal more annoying, but that’s a separate topic.

In observing what goes wrong, I’ve learned a few things about how music is memorized.

On the page, a piece of music appears to have a straightforward linear form — or, if you want to be technical, a two-dimensional form. Time moves from left to right, and the pitches are arranged vertically, perpendicular to the time axis. However, that two-dimensional structure has very little to do with how a piece of piano music is stored and retrieved in the brain.

Several brain systems are intimately involved in the storage and retrieval process. Yes, there’s a linear component, which we might call “the music” if we’re not being too analytical, but it’s stored as audio — as a panorama of the expected sounds — not as patterns of dots on the page. The visual memory may also be involved in storing and retrieving the patterns of dots, but I find that that’s one of the least important facets of memorization.

The expected patterns of muscle movement by the fingers (and of course the forearms, because the hands don’t remain stationary) are stored in a different part of the brain from the auditory memories. The fingering memory involves and probably relies to some extent on tactile feedback — the sensations that the fingers transmit back to the brain as the keys are struck. If I hit the edge of a key rather than the center, the fingering memory can get confused.

There seem also to be short snippets — individual musical phrases — that are stored in a slightly different manner. It’s not precisely auditory memory; it’s pattern memory.

Let’s not neglect the music theory memory. If you know how chords and harmony work, and how classical pieces are constructed by the composer out of motifs and larger structures that recur and are altered in certain systematic ways, your brain will be retrieving some or all of that information as you play the piece.

Event memory is yet another system: If I have trouble with a particular spot (such as, let’s say, an awkward trill followed by a leap of the hand in a Haydn sonata movement), as that spot in the music approaches I’ll be aware that it’s approaching, and I’ll be reminding myself, perhaps even sub-verbally, to devote special attention to it.

Above all this is what I would call the manager. The manager — perhaps we should call it the conductor — has neural connections to all of those other memory systems, and calls them up in an appropriate order, so that the piece is executed.

As I get older, I’m finding that all sorts of bad things can happen during this process, any of which will result in a mistake in performance. I become aware of the various systems by observing how they go wrong.

Any of the brain regions described above can decide to take a little nap. Often it will be the fingering memory that falters: For a second or two the neurons that store the finger movements will become unavailable. What happens at that point can vary. The auditory memory may still be perfectly aware of what’s supposed to be happening, in which case the manager will try to improvise a fingering. This effort may even succeed, but it will more likely fail.

Sometimes the fingering memory will just do the wrong thing, in which case the music may be fine for half a second or so, after which the mistake will cause a train wreck. Making a conscious note of the correct move (shift to the fourth finger here, not to the third) enlists the event memory to help the fingering memory.

When the fingering fails, the manager can get lost. If this happens, the whole production grinds to a halt. The manager is not simply sending out commands to the various subsystems — it’s also getting moment-to-moment feedback from the other systems. If the feedback from another system fails, the manager loses track of how to spool the various oncoming events. The only solution is to stop and go back to a known starting point, which may be the beginning of the piece or a distinctive spot within the piece, and start the manager again.

Sometimes the short-phrase pattern memory gets things a little jumbled. My fingers will occasionally skip a note in a phrase. This can happen because the pattern memory thinks one of the notes has already been played, or perhaps several of the notes.

On rare occasions the auditory memory or the manager will take a two-second snooze but the fingering memory will continue flawlessly, in which case the piece is not interrupted. “My fingers knew what to do!” I cry triumphantly (though usually not out loud).

I’m starting to see mistakes in the fingering system that seem to be due to declining precision in proprioception. Proprioception is obscure enough that the spell-checker in the online editing software I’m using to write this doesn’t even know the word! It’s one of our senses, just like taste and hearing, but it escaped general notice for a long time, perhaps because it operates so transparently. Proprioception is your sense of where your body parts (arms, legs, etc.) are in space. In rare cases, due to a brain lesion, proprioception can fail. When it does, the sufferer can’t even sit up in bed, because they literally don’t know where their arms and legs are, even though the sensory nerves in their skin are still working just fine.

If I need to reach, let’s say, an octave down with the fifth finger of my left hand, but my sense of proprioception isn’t focusing well, I may reach too far or not far enough. The fingering memory was operating correctly, but it wasn’t getting enough input, so the muscles didn’t take the finger to the right spot.

It’s likely that proprioception dims as we get older. That may be part of why old people trip and fall down, run into doors, and spill things. Not that young people never do that stuff; maybe it’s just that old people don’t recover when they stumble, because their reflexes are slower. That would apply to piano playing too, I suppose. But because my eyes and ears are not what they used to be, I’m a bit suspicious that my proprioception may be dimming too.

Sometimes the fingers and the auditory memory both falter, and the music theory steps in to try to fill the gap. Instead of the root of the chord, my left hand might land on the fifth of the chord.

Often in classical music, two passages will be very similar in auditory terms, but they’re in different keys. Because of the layout of the keyboard, this may require a different fingering. If the manager consults the auditory memory but not the music theory memory before sending instructions to the fingers, the fingers may try to play the wrong pattern.

There are also moments, fortunately not frequent, when my improvisation module decides to get in on the act. I may find that my eye is looking at a wrong note and then the finger plays the wrong note, or starts in that direction, just because the improviser thought it might be an interesting note.

And then we get to the distractions. A noise outside in the street can throw the manager completely off the trail. Or I might be thinking about something I read on Facebook (and got pissed off about). Again, the playback system grinds to a halt.

The music theory monitor will sometimes distract the manager/conductor. This can happen when I play one of Bach’s Goldberg Variations, for instance, because the Goldbergs have a complex theoretical basis. The pattern memory is producing a melodic snippet, and the theory module says, “Hey, is that the main theme coming back in an inversion?” Trying to work out whether it is or isn’t involves musical analysis, and musical analysis forces the manager to do something different from conducting.

I don’t know whether all this detail is covered in the scientific literature on piano playing. I don’t even know if there is any scientific literature on piano playing. (Surely there must be!) I just think it’s worth documenting, because I happen to be paying attention. Sometimes I shout at myself, or stamp my feet and swear, because it gets pretty damn frustrating. I know the piece, I’ve played it a hundred times, but today I can’t get through the first eight bars without something going haywire.

Maybe getting old would be easier if I weren’t paying attention.
