Thoughts on listening to Mahler’s Fifth Symphony three times in a row


In “The annihilation of art”, I griped about the path toward ever greater chaos and dissonance that orchestral composition has taken, to the point where it sounds random to me. I tried to appreciate Brian Ferneyhough’s music, but couldn’t. The folks who like it claim that it’s a natural progression from Beethoven to Ferneyhough. I figured that to understand Ferneyhough, I’d have to back up a half-century or so and first try to appreciate something in-between Beethoven and Ferneyhough. So while driving across Pennsylvania, I popped in a CD of Mahler’s Fifth Symphony (1902).

I’ve long been frustrated by my inability to remember Mahler’s compositions. Beethoven’s can get stuck in my head for days, to the point where they give me migraines. Mahler’s, I can only remember snatches of. I was determined to play the CD until I could remember how it went.

I played it all the way to Pittsburgh, and still can’t remember it. Mahler’s Fifth isn’t going to get stuck in my head anytime soon.

The symphony opens with a single trumpet repeating a few ambiguous notes, then rising in a dramatic minor chord. Suddenly, the entire orchestra joins in a triumphant shift to a major key. And just as suddenly, it shifts back to minor. That exemplifies everything that is wrong with Mahler’s Fifth Symphony.

When you have a host of brass make a sudden dramatic reversal like that shift from minor to major, it should mean something. But it doesn’t, because we stay there only a few seconds before there’s another, equally dramatic reversal by that same brass section back into a minor key. And that doesn’t mean anything either, because we were in major for all of about two measures.

Observer 1: Look, up in the sky!

Observer 2: It’s a bird!

Observer 3: It’s a plane!

Observer 1: Naw, it’s a bird.

The dramatic equivalent of the opening of Mahler’s Fifth.

The piece didn’t earn that shift back to minor. And that’s what it’s like throughout: sudden, ostensibly dramatic transitions between keys, tempos, rhythms, and motifs, in a desperate attempt to be unpredictable. All those transitions did nothing for me, because they were so unpredictable that I didn’t care where the music went. It was like an action-adventure flick that, to keep you entertained, jumps from one cliff-hanging action sequence to another without ever letting you find out who the characters are. Too try-hard, Gustav.

This is especially apparent in the fourth movement, which is the most boring piece of classical music I’ve ever heard. I am definitely in the minority about this, as it’s regularly found on “The Most Soothing Classical Music” collections, but then I don’t listen to music in order to cure insomnia. I could not pay attention to nine minutes of very pretty but disorganized wandering about in various major and minor keys. I find myself repeatedly zoning out and ignoring the music every time I listen to it. Music this slow and lacking in harmony needs more repetition and regularity for me to grasp hold of.

In “Information theory and writing”, I said art should have high entropy. The entropy of a thing is the number of bits of information you would need to replicate that thing. Something with high entropy is unpredictable. The huge caveat is that random strings have very high entropy, and yet random strings are boring.

The British mathematician G. H. Hardy once visited the Indian mathematician Srinivasa Ramanujan in the hospital:

I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. “No,” he replied, “it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.”

(1729 = 1³ + 12³ = 9³ + 10³.)

If we could perceive the unique qualities of each random string, we might find each random string as interesting as Ramanujan found each number. But we don’t. Random strings are boring because we can’t tell them apart. What we want is an entropy measurement that tells us how many bits of information it would take to replicate something like the item of interest, from an equivalence class for that item. Something sufficiently similar that we wouldn’t care if one were substituted for the other. (Assume we have a random number generator available for free; randomness does not require information.) A random string of 16 bits has 16 bits of information, but it would take zero bits of information to make another string “like” it, if any string will do.
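
As a toy illustration of the difference (the log-of-class-count accounting here is my own simplification, not a standard measure):

```python
import math
import random

# Raw entropy: pinning down one particular uniformly random 16-bit string
# takes 16 bits, one per symbol.
bits = [random.randint(0, 1) for _ in range(16)]
raw_entropy = len(bits)

# Equivalence-adjusted entropy: bits needed to produce *some* member of the
# string's equivalence class, given a free random number generator. If the
# class is "any 16-bit string", there is only one class to name, so the cost
# is log2(number of classes) = log2(1) = 0 bits.
num_classes = 1
adjusted_entropy = math.log2(num_classes)

print(raw_entropy)       # 16
print(adjusted_entropy)  # 0.0
```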

This equivalence-adjusted entropy would be a measurement of complexity. Measuring complexity is a difficult problem in the study of complex systems.

Cellular automata (CAs) are simple models of complex systems. A CA is a set of rules that operate on cells, usually laid out as squares. Each cell is in one of K states. (For the Game of Life, the most famous CA, K = 2.) Each rule says which state a cell should change to on the next turn, given its own state and the states of its neighbors in the current turn.
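
As a minimal sketch, here is a one-dimensional CA with K states and nearest-neighbor rules, the form shown in the figures below (the dictionary rule table is an illustrative choice, not anyone’s actual code):

```python
import random

K = 4          # number of cell states (0 = quiescent, "off")
WIDTH = 64     # number of cells in one row

# A rule table maps each (left, self, right) neighborhood to a next state.
# Here the table is filled at random.
rule = {
    (l, c, r): random.randrange(K)
    for l in range(K) for c in range(K) for r in range(K)
}

def step(cells):
    """Apply the rule table to every cell simultaneously (wrap-around edges)."""
    n = len(cells)
    return [rule[(cells[i - 1], cells[i], cells[(i + 1) % n])] for i in range(n)]

# Evolve a random initial row; each printed line is one "turn" of the CA.
row = [random.randrange(K) for _ in range(WIDTH)]
for _ in range(20):
    print("".join(".#*o"[s] for s in row))
    row = step(row)
```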

Stephen Wolfram, studying CAs, found that there was a class of rules that quickly produced static, unchanging CAs, and a class that quickly produced random noise, and a narrow class in between that produced strange, beautiful, non-repeating patterns. He called these patterns “complex”. Chris Langton then found a single parameter that predicted whether a CA would be complex. Probably he could have used entropy, but he did not. He used λ (lambda), which he defined as the fraction of transition rules that map a cell to a non-quiescent (“on”) state.
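
A sketch of reading λ off a rule table, and of generating a table to hit a target λ (this follows the spirit of Langton’s random-table method; the details and names here are my own simplification):

```python
import random

K = 4  # states; state 0 is the quiescent ("off") state

def make_table(lam, k=K):
    """Random rule table for a 1-D, radius-1 CA with a target lambda:
    each neighborhood maps to the quiescent state with probability 1 - lam,
    otherwise to a uniformly chosen non-quiescent state."""
    table = {}
    for l in range(k):
        for c in range(k):
            for r in range(k):
                if random.random() < lam:
                    table[(l, c, r)] = random.randrange(1, k)  # non-quiescent
                else:
                    table[(l, c, r)] = 0                       # quiescent
    return table

def measured_lambda(table):
    """Fraction of transitions that lead to a non-quiescent state."""
    return sum(1 for s in table.values() if s != 0) / len(table)

for target in (0.40, 0.50, 0.65):
    print(target, round(measured_lambda(make_table(target)), 3))
```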

The three graphs below, from (Langton 1992), show typical results for four-state CAs: a set of rules with λ = .40 quickly leads to a static, “dead” state, and a set with λ = .65 quickly blows up into random noise, while a set with λ = .50 shows interesting, non-repeating patterns for quite some time:

[Figure 2 from (Langton 1992): space-time diagrams for λ = .40, λ = .50, and λ = .65]
The curious thing is that entropy (unpredictability) is maximal for these four-state CAs when λ = .75 (a maximally random rule table maps to each of the four states equally often, so three-quarters of its transitions are to non-quiescent states). Increasing λ increases the apparent complexity up to a point, but past that point, although it still increases unpredictability, it generates noise, not complexity.

Figure 3 from (Langton 1992) plots transient length (one measure of complexity) versus λ. Transient length peaks suddenly at middling λ, then just as suddenly falls off again as λ, and with it unpredictability, continues to increase:

[Figure 3 from (Langton 1992): transient length versus λ]

Gregorian chant was very predictable: one part only, no instruments, and almost no rhythmic or dynamic variation. Music became steadily more complex and less predictable over the next several hundred years.

It seemed like a good rule to say that the less predictable music became, the more complex and better it would be. And in fact, the commentaries on Mahler’s Fifth are full of references to the “complexity” and “interest” generated by its dissonances and irregularities.

But music does not become more complex the more unpredictable it is. After some point, increasing unpredictability makes it less complex. Instead of complexity, we get mere noise.

This, I speculate, is what happened to music. Composers internalized the theoretical belief that unexpectedness made music more complex and interesting, rather than just listening to it and saying whether they liked it or not. They kept making things less and less predictable, even after passing the point where complexity was maximal.

Once they’d passed that point, unpredictability only made the music boring, not complex. Like Mahler’s Fifth. That created a vicious circle: New music was noisy, unstructured, and boring. Composers believed the way to make it less boring was to make it less predictable, which only made it even more boring, pushing them to make newer music that was even less predictable. This led inevitably to Ferneyhough’s random-sounding music.

And the inevitability of the entire progression was taken as evidence that this was progress!

“But, Writing Guide,” you might protest, “you’ve based this on the idea that there are equivalence classes of musical compositions. But what counts as equivalent depends on the listener. To someone who understands music perfectly, each composition might be distinct! Then each equivalence class has exactly one member, and randomness equals complexity.”

There is something to that objection. The more one studies music, the more distinctions one can easily make in music. But if you really believe that’s a valid objection, you must conclude that all possible music is equally good, since to that perfect listener a random sequence of notes is just as complex as a composed one.

I don’t know how to deal with subjective equivalence classes, but we don’t have to base our measurements on something subjective. We can use an objective information-theoretic measure of complexity: mutual information, for instance. The mutual information between two variables is the information they have in common. If both are very low-entropy, this is low, since neither contains much information. But if both are high-entropy and uncorrelated, it’s low again, since you can’t predict one from the other.
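
A sketch of the computation, from a sample of paired values (nothing here is specific to Langton’s setup):

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# Both variables nearly constant (low entropy): little information to share.
print(mutual_information([(0, 0)] * 99 + [(1, 1)]))           # ~0.08 bits

# Both high-entropy but independent: knowing one tells you nothing.
rand_pairs = [(random.getrandbits(1), random.getrandbits(1)) for _ in range(10000)]
print(mutual_information(rand_pairs))                          # ~0 bits

# High-entropy and perfectly correlated: a full bit in common.
same = [random.getrandbits(1) for _ in range(10000)]
print(mutual_information(list(zip(same, same))))               # ~1 bit
```

Here’s a plot of mutual information versus λ, again from (Langton 1992):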

[Figure 11 from (Langton 1992): mutual information versus λ]

This appears to have a maximum around λ = .25 instead of .5, which might be a problem. But I don’t think λ makes sense as our measurement, since it depends so much on the arbitrary choice of which state is the “off” state. Entropy would probably be a better measure, and using it might remove the discrepancy between the λ that gives maximum mutual information and the λ that gives maximum transient length.

My point is that we can choose some objective scheme for measuring the complexity in a score. For instance, go through the score three measures at a time. Call three measures in a row A, B, and C. You can estimate P(C|A,B) and P(C|A) for each set of three measures, and then compute how much information about measure C you get from measure B but not from measure A (the conditional mutual information I(C;B|A); see the sketch below). This will be small for compositions so predictable that measure B doesn’t add much information, and it will be small for compositions that are so random that neither B nor A helps you predict C.
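
A sketch of that scheme, assuming the score has already been reduced to a sequence of discrete symbols, one per measure (that quantization step is the hard part, and is waved away here):

```python
import math
import random
from collections import Counter

def conditional_entropy(pairs):
    """H(Y|X) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    marg = Counter(x for x, _ in pairs)
    return -sum((c / n) * math.log2(c / marg[x]) for (x, y), c in joint.items())

def excess_information(measures):
    """I(C;B|A) = H(C|A) - H(C|A,B): information about measure C that
    measure B supplies over and above what measure A already supplies."""
    triples = list(zip(measures, measures[1:], measures[2:]))
    h_c_given_a = conditional_entropy([(a, c) for a, b, c in triples])
    h_c_given_ab = conditional_entropy([((a, b), c) for a, b, c in triples])
    return h_c_given_a - h_c_given_ab

# Too predictable: A already determines C, so B adds nothing -> 0.
print(excess_information([0, 1] * 200))

# Too random: neither A nor B helps predict C -> near zero.
print(excess_information([random.randrange(4) for _ in range(400)]))

# In between: C depends on its immediate predecessor B -> clearly positive.
seq = [0]
for _ in range(399):
    seq.append((seq[-1] + random.choice([0, 1])) % 4)
print(excess_information(seq))
```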

We could argue about how to make the measurement, but we could actually make such measurements (if, say, you got an NEA grant to spend a few months on the problem). I believe that any reasonable measurement would prove that Ferneyhough’s compositions are less, not more, complex than Beethoven’s.

That wouldn’t mean everyone should start chasing complexity. I think the problems with modernism that I complained about can be summarized as “doing art according to a theory rather than according to what seems good”. Ideally, the result of proving this would be to incline people to trust their feelings more and their theories less.


Chris Langton (1992). Life at the edge of chaos. Artificial Life II.


2 thoughts on “Thoughts on listening to Mahler’s Fifth Symphony three times in a row”

  1. Joe

    Personally I dislike Ferneyhough’s pretentious unfounded alien-styled inhumane chaotic pseudo-random material, which exists only for its own sake and creates sensory responses that are not of the composer’s intention, but just happen to occur.
    Make no mistake: Ferneyhough is no real composer; and the fact that this has never been accordingly stated or criticized shows the times in which we live: Feed the people any rubbish, with just a hint of added intellectual superiority and they’ll believe it and worship your ‘message’.

    … Ferneyhough… the charlatan king of pretentious wishful implication

