The first part of this post is long, and explains how bad mathematics can lead to bad music. The second part is short, and uses the first part to explain something that I think Beethoven did right in his Moonlight Sonata, and that Faulkner and Momaday did poorly in As I Lay Dying and House Made of Dawn.
Music and the Wundt curve
In Thoughts on listening to Mahler’s Fifth Symphony three times in a row, I talked about how artists had come to praise complexity, failed to distinguish complexity from unpredictability, and so idolized randomness. I mean “idolize” (or “fetishize”) literally; they made a shoddy graven image of what they worshiped, and worshiped the image (randomness) instead of the original thing (complexity). I used Mahler’s Fifth Symphony as an example:
It seemed like a good rule to say that the less-predictable music became, the more complex and better it would be. And in fact, the commentaries on Mahler’s Fifth are full of references to the “complexity” and “interest” generated by its dissonances and irregularities.
But after some point, increasing unpredictability makes music less complex. Instead of complexity, we get mere noise. [Assuming other people use the word “complexity” the way I do.]
That’s what happened. Composers internalized the theoretical belief that unexpectedness made music more complex and interesting… They kept making things less and less predictable, even after passing the point where complexity was maximal.
Once they’d passed that point, unpredictability only made the music boring, not complex. Like Mahler’s Fifth.
A few months after posting that, I read the first chapter of Muses and Measures: Empirical research methods for the humanities (M&M). It talked about the Wundt curve. In its most basic form, the curve plots enjoyment against signal intensity. The data comes out like a bell curve:
This applies to things like the pleasantness of a particular temperature, or the appeal of music at a given volume (low to high). M&M said that this curve applied in general to human aesthetics, and also used Mahler’s symphonies as an example:
Suppose you ask people to listen to a simple song. Chances are high that most people will find it of medium to high hedonic value. Now have them listen to a Mahler symphony. In all likelihood, the ratings for hedonic value will be lower. The explanation for these different ratings lies in the different complexities of the two pieces of music. For various reasons, we must call the Mahler symphony more complex than the song: It is much longer, is executed by a much larger orchestra, containing more different instruments that build, moreover, ever–changing combinations, and its melodic patterns are more intricate and unusual (hence it is also more “novel”). …
Suppose we expose listeners to the song repetitively, and we do the same with the Symphony. What one will observe is that after several trials, the hedonic value ratings for the song will start falling, while those of the Symphony may start rising. The complexity of the sound texture of the Symphony makes it nearly impossible for most untrained listeners to be appreciative on a first or second hearing: its richness is simply not taken in. With repeated exposures, listeners may begin to grasp its variations of melodic and orchestral patterns, its structure of repetitions and contrasts, and its multilayered levels of tone and rhythm.
In this standard view, complexity equals unpredictability, which equals the opposite of novelty; and where the peak of the curve falls depends on how novel the stimulus is to the observer. In my view, peak appeal falls at the amount of unpredictability (or, equivalently, information) where complexity (relative to the listener) is maximal.
So far we just use different terms: the standard view uses the word “complex” to mean “unpredictable”, and says that people have some arbitrary level of unpredictability that they like best. I use the word “complex” to mean “aesthetic appeal as a function of unpredictability.”
The difference is nominal, but not trivial. I’m naming the amount of appeal some artwork has due just to its degree of unpredictability “complexity”, to make it a thing we can study. If we can reliably predict where its maximum will be, we will thereby know, if not understand, part of what makes good art good [1].
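To make the shape of the claim concrete, here is a minimal sketch in Python. The hump formula and the way familiarity shifts its peak are my own toy assumptions, chosen only to illustrate the argument; they are not taken from M&M or from Wundt:

```python
# A toy Wundt-style curve (illustrative formula, not the empirical one from
# M&M): appeal is a hump-shaped function of unpredictability, and the
# listener's familiarity with the idiom shifts the peak to the right.
import math

def appeal(unpredictability, familiarity=0.0, width=0.2):
    """Appeal peaks at an intermediate level of unpredictability (0..1 scale).

    Higher `familiarity` models repeated exposure: the piece becomes more
    predictable to the listener, so the subjective sweet spot moves toward
    objectively less predictable material.
    """
    peak = 0.4 + 0.4 * familiarity   # assumed location of maximal appeal
    return math.exp(-((unpredictability - peak) ** 2) / (2 * width ** 2))

for u in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"u={u:.1f}  first hearing={appeal(u):.2f}  after many hearings={appeal(u, familiarity=1.0):.2f}")
```

Under this toy model, appeal is low at both extremes, and repeated exposure moves the sweet spot toward less predictable material, which is the pattern van Peer describes for the Mahler symphony.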
In practice, the disagreement is worse. Instead of teasing out the relations between complexity/appeal, randomness, and novelty, people using the standard view usually just declare that they all mean the same thing [2], as van Peer absent-mindedly does in the paragraphs above, and as (Berlyne 1970) and (Heyduk 1975) do as well, as if the curve were simply appeal = randomness. (In which case it would not be a curve at all, but a straight line. The labelling of the Y-axis in the figure above makes it not an axis but a mysterious mixture of two components, all to preserve the absurd belief that “complexity” means “randomness”.)
This leads to bad music. It says that, if you start with something random enough, it starts out sounding ugly, but every time you play it, it becomes less novel, and thus more beautiful, until after some number of exposures it becomes…
…the most beautiful thing ever.
Becoming an art connoisseur then means training yourself to like more and more noise and randomness. No justification is ever offered for why we don’t stop liking the canonized noise-art after hearing or viewing it enough times.
It’s true that people come to like a Mahler symphony, or noise by Ferneyhough, better the more often they listen to it. But if we take this as proof of beauty, it would mean that anything we dislike at first is beautiful, while the things we used to call beautiful (songs, paintings by Dutch masters) are so ugly that they rate a zero on the objective scale of artistic merit. Mathematically speaking, the fraction of all possible songs or paintings that are no more random than those songs and paintings, and hence no more beautiful than they are, is zero.
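That measure-zero claim can be illustrated numerically. In the sketch below (my own toy setup: binary strings stand in for songs, and an arbitrary cutoff of 0.9 bits per symbol of empirical entropy stands in for “no more random than a traditional song”), the fraction of all possible strings that are even mildly non-random collapses toward zero as the strings get longer:

```python
# Fraction of binary strings of length n that are "non-random" in the crude
# sense that their 0/1 frequencies have empirical entropy below a threshold.
# The threshold (0.9 bits/symbol) and the whole setup are illustrative only.
from math import comb, log2

def binary_entropy(p):
    """Entropy in bits/symbol of a coin that comes up 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def low_entropy_fraction(n, threshold=0.9):
    """Fraction of all 2**n binary strings whose bit frequencies give
    empirical entropy below `threshold`."""
    count = sum(comb(n, k) for k in range(n + 1) if binary_entropy(k / n) < threshold)
    return count / 2 ** n

for n in (10, 20, 50, 100, 200):
    print(f"length {n:3d}: low-entropy fraction = {low_entropy_fraction(n):.2e}")
```

If randomness were the yardstick of merit, everything humans have traditionally composed would live in that vanishing corner of the space of possible works.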
Complexity vs. Structure
As I explained in that post, in the bad old days before 1992, “complexity” meant computational complexity, and “complexity” measures like entropy and Kolmogorov complexity are actually measures of information. They say that random sequences of numbers have the most bits of information (which they do), but it sounded like they were saying they have the highest possible complexity.
Mathematicians in the study of complex systems knew that was wrong. “Complexity”, if it means the adaptability or interestingness of a system, is maximal at a point between the realms of boring, dead stasis and random chaos.
Coming up with a definition of complexity that didn’t give random sequences high complexity wasn’t hard. The hard part is that there are lots of measures that do it, and it isn’t obvious that any one of them is more right than the others. (Feldman & Crutchfield 1997) concluded that we ought to stop using the term “complexity” and say more specifically what we’re trying to do and what we want to measure; they suggested using the term “structure” instead.
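A crude way to see the distinction is to use compressed length as a stand-in for Kolmogorov complexity. The choice of zlib and of the three test strings below is mine, purely for illustration; the point is only that an information-style measure ranks random noise highest, while whatever “structure” measure one prefers should not:

```python
# Compressed size approximates information content (Kolmogorov complexity):
# it is lowest for dead repetition and highest for pure noise, with the
# mildly structured string in between.
import random
import zlib

random.seed(0)
n = 10_000
constant   = "a" * n                                         # dead stasis
structured = "".join("abcabcabd"[i % 9] for i in range(n))   # repetition with a twist
noise      = "".join(random.choice("ab") for _ in range(n))  # coin flips

for name, s in [("constant", constant), ("structured", structured), ("noise", noise)]:
    print(f"{name:10s} -> {len(zlib.compress(s.encode())):5d} compressed bytes")
```

These numbers measure how many bits it takes to reproduce each string, not how much structure it has; that is exactly the confusion the word “complexity” invites.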
“Complexity” had always seemed intuitively clear to me before, but it derives from Latin “complex” (a collection of parts). It’s an interesting comment on how new our notions of structure and organization are, that we had to appropriate the word “complex” to mean a thing with an intricate causal structure and many behaviors, when in the Middle Ages it just meant a thing with many parts. “Complicated”, perhaps a related word, meant “things folded together”, which again does not have any notion of complex function or causality. “Structure” meant “to build”; “organism” and “organization” come from “organ”, which means “an instrument”. “Elaborate” is from the 16th century; “mechanism” from the 17th. Latin and Middle English had an abundance of synonyms for “complicated”, including “intricate” and “perplexing”, but neither medieval Latin nor Middle English seems to have had any word for productive complication, in which the number of behaviors, or the sophistication of behavior, grows faster than the number of parts. (I’m making a big deal of this because it’s another indication that the medievals didn’t understand creativity.) The closest they had to a word for describing complex organization was “hierarchy”, from “hierarch” (sacred ruler), meaning a top-down chain of command like that of the angels and heavenly beings.
This may be why, as I argued in Modernist and Medieval Art, one of the tenets of modern art and modernist writing is the rejection of structure in art. The purpose of structure, once we get past the Renaissance with its mystical principles of composition in paintings, is to combine parts in relationships that multiply rather than merely add their power. Examples include the structures and dynamic arcs built out of repetitions, inversions, and variations on the theme in a Bach fugue, or the interlocking plots, themes, and character arcs in a novel. If modernism is a reversion to medieval thought patterns, which focused on timeless, hierarchical relationships between abstract essences or types, rather than dynamic interactions between real individuals, then modernism will be similarly less interested in structuring components in space and time.
This prediction is confirmed by Schoenberg’s modernist 12-tone music. Its basic principle is to use all twelve tones of the chromatic scale before re-using any of them. This tends to maximize the entropy and randomness of the music. It’s as if he designed his theory specifically to make structure in music impossible.
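A quick check of that entropy claim, under simple assumptions: the row below is a random permutation of the twelve pitch classes standing in for a real Schoenberg row, the tonal comparison is the opening phrases of the “Ode to Joy” theme, and “entropy” here means only first-order pitch-class entropy:

```python
# First-order pitch-class entropy: a strict 12-tone row uses each pitch class
# exactly once, so its distribution is uniform and its entropy is the maximum
# possible, log2(12) ~ 3.58 bits. A tonal melody leans on a few scale degrees
# and scores much lower. The row is a random permutation used as a stand-in.
import random
from collections import Counter
from math import log2

def entropy(symbols):
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

random.seed(1)
tone_row = random.sample(range(12), 12)                  # each pitch class once
ode_to_joy = list("EEFGGFEDCCDEEDD" "EEFGGFEDCCDEDCC")   # Ode to Joy, first two phrases

print(f"12-tone row: {entropy(tone_row):.2f} bits (maximum is {log2(12):.2f})")
print(f"Ode to Joy:  {entropy(ode_to_joy):.2f} bits")
```

Any strict twelve-tone ordering hits the maximum by construction; that is the sense in which the technique pushes entropy as high as it can go.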
Maximal musical complexity: Already attained
Let’s say music can be complex in three ways: melody, rhythm, and harmony. Consider the first movement of Beethoven’s Moonlight Sonata:
Melodically and rhythmically, this music is dead simple. Why did Beethoven write such a boring, simple melody and rhythm?
Because harmonically, it’s crazy. Here’s the chord progression in the first 13 measures:
C#m C#m7
A D G#7 C#m G#7 Cdim C#m/E C#m
G#7 C#m F#m
E B7 E Em
G7 C Em F#7 Bm
I’ve never heard any other tune use the chord progression C#m F#m E B7 E Em G7 C Em F#7 Bm. Pick any equally-long pattern using C, F, G, Em, and Am, and you could find thousands of songs that used it.
The melody and rhythm are simple because otherwise you wouldn’t even be able to follow the chords. Since each chord is arpeggiated (played one note at a time), a complicated melody or rhythm on top would leave you unable to tell where one chord ends and the next begins.
Beethoven decided that the chord progression was so unpredictable that it used up all his allowable unpredictability. He simplified everything else so that people could perceive the chord progression.
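One rough way to put a number on that harmonic extravagance (a back-of-the-envelope measure of my own, using the chords transcribed from the measures above and a generic four-chord pop loop that I chose for contrast):

```python
# Count distinct chords and the entropy of the chord distribution for the
# Moonlight progression above versus a generic pop loop.
from collections import Counter
from math import log2

def chord_entropy(chords):
    counts = Counter(chords)
    n = len(chords)
    return -sum((c / n) * log2(c / n) for c in counts.values())

moonlight = ("C#m C#m7 A D G#7 C#m G#7 Cdim C#m/E C#m "
             "G#7 C#m F#m E B7 E Em G7 C Em F#7 Bm").split()
pop_loop = "C G Am F".split() * 6   # 24 chords, only 4 distinct

for name, prog in [("Moonlight, mm. 1-13", moonlight), ("C-G-Am-F loop", pop_loop)]:
    print(f"{name:20s} {len(set(prog)):2d} distinct chords, {chord_entropy(prog):.2f} bits per chord")
```

Fifteen distinct chords in thirteen measures, against four in the pop loop: that is the unpredictability the bare-bones melody and rhythm are paying for.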
Similarly, a Bach fugue will have great polyphonic novelty, with perhaps eight different voices at the same time, but based on a repetitive melody, with a constant rhythm and few key changes. Negro spirituals and ragtime usually have simple tonality and melody, but complex rhythm and timing. Dixieland and some other forms of jazz alternate between complex and simple stretches. Pushing the novelty envelope in one way to achieve a distinctive effect always requires scaling back the novelty somewhere else.
Faulkner and Momaday: Not Enough Structure
Which brings us to Faulkner’s As I Lay Dying and Momaday’s House Made of Dawn. Both of them had all of the following:
– strangely-styled, strangled sentences
– connections between events that were not revealed until much later
– chapters narrated by tertiary characters whose relevance to the story wasn’t revealed until later
I realized this was too much unpredictability when I was trying to figure out whether I bought the popular explanations of the second passage I quoted in my review of House Made of Dawn. In most novels, I’d have been able to guess whether such an interpretation was correct by how well it fit the events that led up to the passage, and what the old man did afterwards. But I couldn’t do that with House Made of Dawn, because there was no continuity in time between the scenes. One person does one thing at one time in one place; then some other person does something else in some other place at some other time. If the meaning were clear, I could use it to figure out the connections between the scenes. If the connections between the scenes were clear, I could use them to figure out the meaning of each scene. As nothing was clear, it was difficult to figure out anything.
I think that these modernist books, like Mahler’s Fifth, have objectively too little structure for contemporary American readers. They are so unpredictable that, though this high unpredictability means they convey many bits of information, they convey less meaning than they could with less unpredictability, where “meaning” is, as with “complexity” or “structure”, information minus randomness.
[1] Did you think it was odd that I said we might “know, if not understand” something? Later, I’m going to talk about the difference between rationality and empiricism. (It’s super-important, honest!) One of the non-obvious ways of distinguishing them is that rationalists believe you must understand things before you can know anything about them. This is epitomized by the obsession that Socrates and medieval scholastics had with defining terms. Empiricists, by contrast, believe you must begin with a collection of known facts, some of which might use unanalyzable terms you just made up, before you can hope to understand things. A classic example is gravity. Isaac Newton just made it up to name a force in his equations, with no understanding of how it worked. We know a lot about gravity, but we still don’t understand it. (This observation is from Popper 1966, vol. 2 chapter 11, “The Aristotelian Roots of Hegelianism”, part 2. Its truth is due to the scientific practice of operationalization, which means you create terms as shorthand for what you can measure rather than as shorthand for definitions of what you want to measure. When I say complexity is what the Wundt curve measures, I’m operationalizing complexity rather than defining it.)
[2] With novelty being 1 – randomness, or 1 / randomness, or some other inverse measure of randomness.
References
Berlyne, D. E. 1970. “Novelty, Complexity and Hedonic Value.” Perception & Psychophysics 8: 279-286.
Feldman, David, & James Crutchfield 1997. “Measures of Statistical Complexity: Why?”
Heyduk, Ronald 1975. “Rated preference for musical compositions as it relates to complexity and exposure frequency.” Perception & Psychophysics 17(1): 84-91.
Popper, Karl 1966. The Open Society and Its Enemies, 5th ed. Princeton University Press. (1st ed. 1945.)
van Peer, Willie, Jemeljan Hakemulder, & Sonia Zyngier 2007. Muses and Measures: Empirical Research Methods for the Humanities. Cambridge Scholars Publishing.