Bears Discover Fire


In 1990, Terry Bisson’s short story “Bears Discover Fire” won all the awards in science fiction. (Odd, since there’s no science in it.) In January 2014, Lightspeed Magazine put it online for free.

This story has puzzled me for a long time. Why is it a story at all, let alone a good story?

(You have to read the story to understand this post.)

Nobody in the story seems to know what they want. There’s no struggle or deliberate action. There are three narratives:

1. The narrator’s relationship with his brother. The narrator is more old-fashioned, and less uptight. His brother is one of those people who thinks they know everything. They have a disagreement about how to raise his brother’s son. Nothing major.

2. Bears discover fire. Also, they discover a new kind of berry.

3. The narrator’s mother. She lives in an old folks home. She’s bored. She goes to sit by the fire with the bears. Then she dies.

That’s it. Three narratives, none of which are a story, none of which connect to each other except circumstantially. None of them seem to support, parallel, or relate to the others.

So why are they a story when you put them together? I like this story, but I don’t know why.

What is Love


They say the Eskimos have 50 different words for snow.  Or they used to; it’s become a bitter debate among linguists, made worse by the fact that you can’t call people Eskimos anymore.

But anyway, those northern Native Americans distinguish many types of snow.  If you’re going to walk five miles across ice fields to hunt seals when it’s fifty degrees below, it matters what kind of snow it is.

But 50 terms for snow would hardly be excessive. My thesaurus lists 52 synonyms for the adjective ‘angry’ in English, and 53 synonyms for the verb ‘hit’.

It lists only 18 synonyms for the verb ‘love’.  I’ve used it for years, and that’s the fewest synonyms I remember seeing for any word.

If love is important to us, why have we got so few words for it?  Even the “synonyms” we have are no good; the top of the list is ‘admire’, ‘cherish’, ‘choose’, and ‘go for’.

We haven’t got a word to distinguish romantic love from motherly love or brotherly love.  We haven’t got a verb for ‘lust’ or ‘friendship’ that takes a direct object.  We have a shocking paucity of words for love.  So few that ‘love’ is barely a word.  It’s used in so many ways that it hardly means anything at all.

If the Inuit, Yupik, and various other northern tribes have many words for snow because snow is important to them, does that mean love is unimportant to us?

No; just the opposite:  We have only one word for love because it’s so important that it’s dangerous.

When you talk about snow, you want people to know precisely what kind of snow you’re talking about.  When you talk about love, you want people not to know what you’re talking about.

Imagine you’re a man, and your girlfriend or wife asks you, “Do you love me?”  You are, as stipulated, a man, so odds are your greatest act of introspection into your feelings was two years ago when you finally decided to switch from Busch to Yuengling.  How strong does liking have to be, to be love?  “Do you love me more than you love the Steelers?”  Let’s be honest:  there are many women in your state, and only one pro football team.  It’s not a fair comparison.

Now imagine there are 50 words for 50 different types of love, and each night, she asks you about a different one of them.


If we named as many varieties of love as we’ve named ways of moving slowly, I suspect the word for the predominant romantic emotion that most women feel when they say “love” would be one that most men have never felt.  And wouldn’t that make for some interesting late-night conversations?

But that’s not an explanation.  If there’s an international male conspiracy to obliterate synonyms for ‘love’, I wasn’t told about it.

(Though that’s just what I would say, isn’t it?)

I think ‘love’ is like ‘God’ with a capital ‘G’.  When there were many gods, people ascribed different qualities to each.  But after Plato said ‘god’ had a single abstract essence, and Jesus said that essence was perfection, every good thing became part of God’s definition.  (Hence some philosophers believed God must be a perfect sphere.)

So every good and positive human emotion got sucked into the word ‘love’.  Still, that doesn’t explain why any more-specific terms disappeared.  And it’s still suspiciously convenient.

Thoughts on listening to Mahler’s Fifth Symphony three times in a row


In “The annihilation of art”, I griped about the path toward ever greater chaos and dissonance that orchestral composition has taken, to the point where it sounds random to me. I tried to appreciate Brian Ferneyhough’s music, but couldn’t. The folks who like it claim that it’s a natural progression from Beethoven to Ferneyhough. I figured that to understand Ferneyhough, I’d have to back up a half-century or so and first try to appreciate something in-between Beethoven and Ferneyhough. So while driving across Pennsylvania, I popped in a CD of Mahler’s Fifth Symphony (1902).

I’ve long been frustrated by my inability to remember Mahler’s compositions. Beethoven’s can get stuck in my head for days, to the point where they give me migraines. Mahler’s, I can only remember snatches of. I was determined to play the CD until I could remember how it went.

I played it all the way to Pittsburgh, and still can’t remember it. Mahler’s Fifth isn’t going to get stuck in my head anytime soon.

The symphony opens with a single trumpet repeating a few ambiguous notes, then rising in a dramatic minor chord. Suddenly, the entire orchestra joins in a triumphant shift to a major key. And just as suddenly, it shifts back to minor. That exemplifies everything that is wrong with Mahler’s Fifth.

When you have a host of brass make a sudden dramatic reversal like that shift from minor to major, it should mean something. But it doesn’t, because we only stay there for a few seconds before there’s another, equally-dramatic reversal by that same brass section back into a minor key. And that doesn’t mean anything either, because we were in major for all of about two measures.

Observer 1: Look, up in the sky!

Observer 2: It’s a bird!

Observer 3: It’s a plane!

Observer 1: Naw, it’s a bird.

The dramatic equivalent of the opening of Mahler’s Fifth.

The piece didn’t earn that shift back to minor. And that’s what it’s like throughout: Sudden, ostensibly dramatic transitions between keys, tempos, rhythms, and motifs, in a desperate attempt to be unpredictable. All those transitions did nothing for me, because they were so unpredictable that I didn’t care where the music went. It was like an action adventure flick that, to keep you entertained, jumps from one cliff-hanging action sequence to another without ever letting you find out who the characters are. Too try-hard, Gustav.

This is especially apparent in the fourth movement, which is the most boring piece of classical music I’ve ever heard. I am definitely in the minority about this, as it’s regularly found on “The Most Soothing Classical Music” collections, but then I don’t listen to music in order to cure insomnia. I could not pay attention to nine minutes of very pretty but disorganized wandering about in various major and minor keys. I find myself repeatedly zoning out and ignoring the music every time I listen to it. Music this slow and lacking in harmony needs more repetition and regularity for me to grasp hold of.

In “Information theory and writing”, I said art should have high entropy. The entropy of a thing is the number of bits of information you would need to replicate that thing. Something with high entropy is unpredictable. The huge caveat is that random strings have very high entropy, and yet random strings are boring.
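To make the caveat concrete, here is a small sketch of my own (not from the essay’s sources) of empirical Shannon entropy: a string of all one symbol takes zero bits per symbol to replicate, while a random binary string takes close to one, even though the random string is no more interesting to read:

```python
import math
import random
from collections import Counter

def entropy_per_symbol(s):
    """Empirical Shannon entropy of a string, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

rng = random.Random(0)
repetitive = "a" * 400
noisy = "".join(rng.choice("ab") for _ in range(400))

print(entropy_per_symbol(repetitive))  # 0.0 bits/symbol: fully predictable
print(entropy_per_symbol(noisy))       # close to 1 bit/symbol
```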

The British mathematician G. H. Hardy once visited the Indian mathematician Srinivasa Ramanujan in the hospital:

I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. “No,” he replied, “it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.”

If we could perceive the unique qualities of each random string, we might find each random string as interesting as Ramanujan found each number. But we don’t. Random strings are boring because we can’t tell them apart. What we want is an entropy measurement that tells us how many bits of information it would take to replicate something like the item of interest, from an equivalence class for that item. Something sufficiently similar that we wouldn’t care if one were substituted for the other. (Assume we have a random number generator available for free; randomness does not require information.) A random string of 16 bits has 16 bits of information, but it would take zero bits of information to make another string “like” it, if any string will do.

This equivalence-adjusted entropy would be a measurement of complexity. Measuring complexity is a difficult problem in the study of complex systems.

Cellular automata (CAs) are simple models of complex systems. A CA is a set of rules that operate on cells. The cells are usually laid out as squares. Each cell is in one of K states. (For the Game of Life, the most famous CA, K = 2.) Each rule says which state a cell in state k should change to on the next turn, given the states of itself and of its neighbors in the current turn.

Stephen Wolfram, studying CAs, found that there was a class of rules that quickly produced static, unchanging CAs, a class that quickly produced random noise, and a narrow class in between that produced strange, beautiful, non-repeating patterns. He called these patterns “complex”. Chris Langton then found a single parameter that predicted whether a CA would be complex. Probably he could have used entropy, but he did not. He used λ (lambda), which he defined as the fraction of transition rules that do not map a cell to the quiescent (“off”) state.
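To make the definitions concrete, here is a minimal sketch of my own (a one-dimensional CA with invented helper names, not Langton’s code): a rule table maps each (left, self, right) neighborhood to a next state, and λ is just the fraction of table entries that map to a non-quiescent state:

```python
import random

K = 4  # number of cell states; state 0 is the quiescent ("off") state

def random_rule_table(lam, rng):
    """Build a rule table in which each (left, self, right) neighborhood
    maps to a non-quiescent state with probability lam, else to state 0."""
    table = {}
    for left in range(K):
        for center in range(K):
            for right in range(K):
                if rng.random() < lam:
                    table[(left, center, right)] = rng.randrange(1, K)
                else:
                    table[(left, center, right)] = 0
    return table

def measured_lambda(table):
    """Fraction of transitions that do not map to the quiescent state."""
    return sum(1 for v in table.values() if v != 0) / len(table)

def step(cells, table):
    """One synchronous update, with wraparound at the edges."""
    n = len(cells)
    return [table[(cells[i - 1], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

rng = random.Random(0)
table = random_rule_table(0.50, rng)
cells = [rng.randrange(K) for _ in range(64)]
for _ in range(20):
    cells = step(cells, table)
```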

These three graphs below from (Langton 1992) show typical results, for four-state CAs: A set of rules with λ = .40 quickly leads to a static, “dead” state, and a set with λ = .65 quickly blows up into random noise, while a set with λ = .50 shows interesting, non-repeating patterns for quite some time:

[Figure 2 from Langton 1992: CA evolution at λ = .40, λ = .50, and λ = .65]
The curious thing is that entropy (unpredictability) is maximal for these four-state CAs when λ = .75. Increasing λ increases the apparent complexity up to a point, but past that point, although it is still increasing unpredictability, it generates noise, not complexity.

Figure 3 from (Langton 1992) plots transient length (one measure of complexity) versus lambda. Transient length peaks suddenly in the area with middling lambda, then just as suddenly falls off again as lambda and unpredictability continue to increase:

[Figure 3 from Langton 1992: transient length vs. λ]

Gregorian chant was very predictable: one part only, no instruments, and almost no rhythmic or dynamic variation. Music became steadily more complex and less predictable over the next several hundred years.

It seemed like a good rule to say that the less-predictable music became, the more complex and better it would be. And in fact, the commentaries on Mahler’s Fifth are full of references to the “complexity” and “interest” generated by its dissonances and irregularities.

But music does not become more complex the more unpredictable it is. After some point, increasing unpredictability makes it less complex. Instead of complexity, we get mere noise.

This, I speculate, is what happened to music. Composers internalized the theoretical belief that unexpectedness made music more complex and interesting, rather than just listening to it and saying whether they liked it or not. They kept making things less and less predictable, even after passing the point where complexity was maximal.

Once they’d passed that point, unpredictability only made the music boring, not complex. Like Mahler’s Fifth. That created a vicious circle: New music was noisy, unstructured, and boring. Composers believed the way to make it less boring was to make it less predictable, which only made it even more boring, pushing them to make newer music that was even less predictable. This led inevitably to Ferneyhough’s random-sounding music.

And the inevitability of the entire progression was taken as evidence that this was progress!

“But, Writing Guide,” you might protest, “you’ve based this on the idea that there are equivalence classes of musical compositions. But what counts as equivalent depends on the listener. To someone who understands music perfectly, each composition might be distinct! Then each equivalence class has exactly one member, and randomness equals complexity.”

There is something to that objection. The more one studies music, the more distinctions one can easily make in music. But if you really believe that’s a valid objection, you must conclude that all possible music is equally good.

I don’t know how to deal with subjective equivalence classes, but we don’t have to base our measurements on something subjective. We can use an objective information-theoretic measure of complexity. Mutual information, for instance. The mutual information between two variables is the information they have in common. If both are very low-entropy, this is low, since neither contains much information. But if both are high-entropy and uncorrelated, it’s low again, since you can’t predict one from the other. Here’s a plot of mutual information versus lambda, again from (Langton 1992):
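Mutual information is easy to estimate from paired samples. This sketch (mine, not Langton’s code) shows it near zero for two independent coin flips, and near one bit when one variable determines the other:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

rng = random.Random(1)
a = [rng.randrange(2) for _ in range(10000)]
b = [rng.randrange(2) for _ in range(10000)]  # high entropy, independent of a
c = [1 - x for x in a]                        # fully determined by a

print(mutual_information(a, b))  # near 0: you can't predict one from the other
print(mutual_information(a, c))  # near 1 bit: each predicts the other
```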

[Figure 11 from Langton 1992: mutual information vs. λ]

This appears to have a maximum around λ = .25 instead of .5, which might be a problem. But I don’t think λ makes sense as our measurement, since it depends so much on the arbitrary choice of which state is the “off” state. Entropy would probably be a better measure, and using it might remove the discrepancy between which value gives maximum mutual information and which gives maximum transient length.

My point is that we can choose some objective scheme for measuring the complexity in a score. For instance, go through the score three measures at a time. Call three measures in a row A, B, and C. You can measure P(C|A,B) and P(C|A) for each set of three measures, and then compute how much information about measure C you get from measure B but not from measure A. This will be small for compositions so predictable that measure B doesn’t add much information, and it will be small for compositions that are so random that neither B nor A helps you predict C.
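As a toy version of that scheme (my own sketch, with single characters standing in for whole measures), one can estimate H(C|A) − H(C|A,B): the information B gives about C beyond what A already gives. It is zero for a fully repetitive sequence, near zero for pure noise, and large when B is what predicts C:

```python
import math
import random
from collections import Counter

def cond_entropy(pairs):
    """H(Y|X) in bits, estimated from (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    marg = Counter(x for x, _ in pairs)
    return -sum((c / n) * math.log2(c / marg[x])
                for (x, _), c in joint.items())

def info_from_b_not_a(seq):
    """Information B gives about C beyond A, over consecutive
    non-overlapping triples (A, B, C) of symbols."""
    triples = [tuple(seq[i:i + 3]) for i in range(0, len(seq) - 2, 3)]
    h_c_given_a = cond_entropy([(a, c) for a, b, c in triples])
    h_c_given_ab = cond_entropy([((a, b), c) for a, b, c in triples])
    return h_c_given_a - h_c_given_ab

rng = random.Random(2)
predictable = "abc" * 300                               # C follows from A alone
noise = "".join(rng.choice("abc") for _ in range(900))  # nothing predicts C
parts = []
for _ in range(300):
    b = rng.choice("abc")
    parts.append(rng.choice("abc") + b + b)             # C copies B
structured = "".join(parts)

print(info_from_b_not_a(predictable))  # 0.0: B adds nothing
print(info_from_b_not_a(noise))        # near 0: C unpredictable either way
print(info_from_b_not_a(structured))   # near log2(3): B is what predicts C
```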

We could argue about how to make the measurement, but we could actually make such measurements (if, say, you got an NEA grant to spend a few months on the problem). I believe that any reasonable measurement would prove that Ferneyhough’s compositions are less, not more, complex than Beethoven’s.

That wouldn’t mean everyone should start chasing complexity. I think the problems with modernism that I complained about can be summarized as “doing art according to a theory rather than according to what seems good”. Ideally, the result of proving this would be to incline people to trust their feelings more and their theories less.

Chris Langton (1992). Life at the edge of chaos. Artificial Life II.