When Improvising, Music Theory Becomes a Verb

Relational structures are the practical choice for music improvisation

I knew I wanted to be a composer when I was in Junior High School. My parents found me a couple of teachers during my last two years of high school. They were both well-respected composers. The first teacher showed me Hindemith’s Theory of Interval Hierarchy at the first lesson. The second teacher waited until the second or third lesson, but then brought me a short treatise on Information Theory. Though I was totally baffled, these two events were actually a pretty good introduction to how many composers think about music.

Music composers and theorists have riddled music with a litany of systematic hierarchical structures. Tonality and key, harmony, meter and rhythm, form, and many more complex all-consuming theories are all systems of organized structures meant to impart logical meaning to emotional content. It is an idea borrowed from writing, and it is a visual concept. A large written work is organized into parts, chapters, sections, paragraphs, sentences, words, and finally letters. Each letter and word has its own function and meaning, and this becomes extended to sentences, paragraphs, etc., until every part has its own place and role. Most composers think of their music in the same way, with each note, phrase, or layer having a special function and meaning, and playing a part in a larger whole.

But it is impossible to retain this kind of detailed attention to multiple levels of musical structure while improvising. You can keep track of what is going on for a while, but at a certain point you reach the Too Much Information point and things fall apart. With no pre-planning, the information, levels, and functions add up until you are overwhelmed. At that point, you just give up and think about something else, because, after all, you are creating everything on the spot, not just the structure.

So when improvising, the amount of actual musical information you can pay attention to is limited. This is why many improvisers prefer to improvise on given themes or motives, because when doing that, the emphasis shifts to creative manipulation, at which improvisation excels. But all of traditional music theory treats music as information, so when freely improvising, you have three choices: 1) limit the amount of information you keep track of, 2) not care, or 3) keep track of your music in a different way. Choice No. 1 very quickly becomes a precondition. Limiting means making choices, usually in advance. This is how most of the world improvises, with limits of key or mode, tempo, meter, harmonic choices, etc. It narrows the playing field so that the improviser can concentrate on expression and creativity. Choice No. 2 pollutes the imagination and eventually becomes a choice in itself. I believe that a lot of Free Jazz or Improvisation has become like this, where making an organizational choice of any sort becomes taboo.

It is not the music itself but its relationship to the surrounding music that gives it meaning.

I found that, as a composer, the only choice open to me was Choice No. 3 because I didn’t want to make pre-conditions and I did care. So instead of treating bits of music as information, I started treating them as relative values on the sliding scales of several simultaneous musical “fields” or parameters. Every bit of music I improvise is louder or softer, higher or lower, faster or slower, darker or lighter, more or less resonant, more or less dissonant, denser or more open, flatter or sharper, etc. than the music which is sounding simultaneously or adjacent to it. It is not the music itself but its relationship to the surrounding music that gives it meaning. Because emotion is perceived as a specific oscillation of intensity and release, and oscillation in many forms is a fundamental component of music, these relationships make excellent expressive tools while improvising. Most importantly, because their focus is very specific, they become tools that are actually possible to use while improvising in an intelligent way.

Because music has multiple dimensions, and these dimensions can be creatively manipulated simultaneously, the creative possibilities are nearly endless. An improviser can also create his or her own expressive parameters just by juxtaposition (e.g., more temple block and less triangle). Thinking about oscillating parameters encourages an improviser to concentrate on what he or she is doing at any given moment rather than creating the mental distance it takes to keep track of specific information over a longer period of time. This is exactly the mental attitude it takes to improvise well!
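For readers who think in code, here is a minimal sketch of the idea in Python. The parameter names, the numeric scales, and the single “intensity” control are my own illustrative assumptions, not a literal transcription of how the music is made; the point is only that each moment is defined by its direction of change relative to the moment before, rather than by absolute information.

```python
from dataclasses import dataclass, field
import random

# Hypothetical expressive parameters (illustrative names, not a fixed vocabulary).
# Each sits on a sliding scale from 0.0 to 1.0, but only relative motion matters.
PARAMETERS = ["loudness", "register", "speed", "density", "dissonance"]


@dataclass
class Moment:
    """One 'bit' of improvised music, described by where it sits on each scale."""
    values: dict = field(default_factory=dict)


def next_moment(previous: Moment, intensity: float) -> Moment:
    """Derive the next moment from the previous one.

    `intensity` (-1.0 to 1.0) is the improviser's single decision here:
    push toward tension (positive) or toward release (negative).
    What is musically meaningful is the direction of change, not the number.
    """
    new_values = {}
    for name, value in previous.values.items():
        # Nudge each parameter relative to its current position, with a little
        # randomness so the oscillation never repeats exactly.
        nudge = intensity * random.uniform(0.0, 0.2)
        new_values[name] = min(1.0, max(0.0, value + nudge))
    return Moment(new_values)


if __name__ == "__main__":
    moment = Moment({name: 0.5 for name in PARAMETERS})
    # A simple oscillation of intensity and release over eight moments.
    for intensity in (0.8, 0.8, 0.4, -0.6, -0.9, 0.3, 0.9, -1.0):
        moment = next_moment(moment, intensity)
        print({name: round(value, 2) for name, value in moment.values.items()})
```

The sketch deliberately stores no themes, keys, or forms, only a handful of sliding scales and the tension-and-release curve that pushes them around; that is the whole of what needs to be kept in mind from one moment to the next.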

Of course, identifiable ideas do emerge. But instead of treating them as informational Legos to build things with, I treat those ideas as relational focal points that are altered and changed at every appearance. The information relates to itself. Large-scale structures do emerge too, but I don’t often think about them consciously. Sometimes I will deliberately bring back an idea from earlier in an improvisation if it is particularly prominent (and I remember it). Sometimes I will bring up an idea from a different improvisation or even another piece. But when I do, I seldom find that it has any real or special impact. I find I can use and trust my musical instincts to control large-scale shape, contrast, and function. In other words, I play it by ear. It usually works. If it doesn’t, oh well, I’ll try again next time.

Music and Emotion

How the research of Manfred Clynes inspired and refocused my musical career

I was helping a friend of my son’s with her music theory. She was a first year theory student and her assignment was in figured bass. She was a sharp girl and seemed to have no problem with the material, but was obviously distracted. Finally, she shut the book and sighed. “I don’t care a thing about figured bass,” she said, “what I want to know is why, when I play Tchaikovsky’s Romeo and Juliet, it gets me EVERY SINGLE TIME?”

Sometime in the late 1970s, I was in my doctor’s waiting room looking for something to read. I thumbed through a copy of Psychology Today (slim pickings) and discovered an article by Manfred Clynes. He had been doing research on emotion by asking subjects to think of situations that would cause them to feel love, joy, anger, etc., while recording their reactions as pressure on a finger sensor. I was struck by his intuitive knowledge that emotion was a timed phenomenon of tension and release. I later discovered that besides being a psychologist, neurologist, inventor, and computer whiz, he was also a concert pianist!

Subjects were able to generate emotions, at first by visualizing such situations and later on their own, and Clynes was able to identify specific waveforms for a number of different emotions. These waveforms were the same among all of the subjects. Clynes was then able to secure a grant to test subjects from completely different cultures (Central Mexico, Japan, and Bali). His results were still the same. His research led him to presume that these waveforms, or sentic forms as he called them, were innately human and a part of the central nervous system. (Here is an article he wrote on sentic forms, with some illustrations.)

Being also a musician, he decided to play recorded music for his subjects. He found that the listeners responded emotionally to the music all in basically the same way, at the same points in the music, across different cultures. OK, now he had my attention.

As musicians, we know that we respond to music emotionally. It is part of the natural camaraderie between musicians. But it was news to find out that everybody responds to music in the same way, with the same emotions!

When Clynes first started experimenting with his finger-pressure device (sentograph), he had musicians “conduct” (on his device) while imagining different pieces of music silently. His first subjects were Pablo Casals and Rudolf Serkin, so he was not fooling around! He soon found that a specific composer’s music generated a unique waveform that permeated all of his works. A different composer, however, would elicit a different waveform. It was almost like a fingerprint. For a composer, this is very interesting! For a performer, this helps explain why musicians can identify most composers after only a second or two of listening to their music. Later, while doing his emotional research, Clynes noted the interplay of the emotional waveforms with the previously noted composer waveforms and noticed some interesting results. In middle-period Beethoven, for instance, which is often angry, the emotional waveforms usually ran counter to the composer waveforms, while in Beethoven’s later works, which can be nothing short of transcendent, the waveforms tended to run concurrently.

Clynes found these sentic forms, being biological, to be exceptionally specific. An expression of an emotion in music that wasn’t quite precise would be perceived as less strong. If it was off a little more, the expression would be perceived as false or fake. Off even more, and the emotion wasn’t perceived at all. This speaks to the difference in “musicality” between performances. Musical expression turns out to be a very specific skill. Predictability also seemed to diminish the strength of the emotion. This speaks to the difference in skill among composers. Even emotional expression can become tedious! Tchaikovsky’s Romeo and Juliet, after probably 200 career performances, doesn’t get me “every single time” anymore, but it is still surprisingly affective even though I know exactly what is coming!

While working at the University of California at San Diego, Clynes developed a therapeutic discipline for emotionally disturbed patients that involved expressing a whole cycle of emotions, with the assistance of his sentograph, over a period of about thirty minutes. These sentic cycles are essentially both biofeedback and therapy. Learning to recognize, control, express, and develop intimate knowledge of these emotions, and being able to express and release them in a safe environment, had a significant effect on patients. (These therapies are now readily available on the Internet.) Though the patients seemed to be getting better, the research ran counter to other research at the institution funded by drug companies, and his funding was not renewed. He was then offered a position and lab in Sydney, Australia, where he concentrated more on music, emotion, and electronics.

At any rate, I was lucky that day in the doctor’s office that the doctor was running well behind schedule and I was able to finish quite a bit of the article. I was so excited that I stole the magazine! A few years later, Clynes’ book Sentics: The Touch of Emotions was published and I ordered it. But I kept the magazine for many years, through a number of moves, and I may still have it somewhere.

No, John, it’s not the sounds that are making love! Words and pictures don’t make love either, but they can break your heart!

As a professional performer, I have always known that it is emotion that makes music tick. Without emotion, there is no reason to listen to music at all. John Cage scoffed at the idea of emotion in music. He joked that some people thought the sounds were making love. No, John, it’s not the sounds that are making love! Words and pictures don’t make love either, but they can break your heart! Music is sound, but it is sound in motion, and it’s the motion that is important. It is the way those sounds change that triggers our biologically wired emotional impulses. It is the verb, not the noun, where the action is.

Clynes has done some composing, but his musical interests are primarily interpretive. His later work involves programs that analyze music for its emotional content and shape the intonation, vibrato, and metrics to conform to that content. He wrote a program called “Superconductor” that allows someone to conduct a piece and alter it according not only to the tempo but also to the emotional content contained in the conductor’s motions.

Though I can understand his excitement about how his work relates to musical interpretation, I am more interested in his theories from a creative standpoint. For me, he confirmed that emotion is the language of music. Emotional forms are very specific and unforgiving, but they are hard-wired into all of us, so we already know what they are! Creating those forms in music takes a little skill, but whether or not the music is expressive takes more intuition than knowledge. Considering that his research showed composers leaving an emotional record and a personal inner pulse within their music, it seemed to me that the most important characteristic for a composer to maintain should be honesty. At a time when music was being flooded with the importance of Ideas and Processes, it became clear to me that keeping an intimate knowledge of and identification with the music I was creating was the only way to ensure its emotional integrity.

I don’t think that the sound of the music is the only way music can have an emotional impact. Juxtaposition of style, texture, placement, social concerns, stark contrasts, and innumerable other techniques can all cause emotional involvement of a different sort by suggesting situations which trigger emotional memories, fears, or responses. But even by just manipulating the sound, I think there are still vast untapped resources for emotional expression.

As for that first year theory student, I was able to give her some hints about what was going on in the music, but mostly I just reassured her that she was on the right track. She had discovered the magic of music herself. Dr. Clynes has shown us the mechanism with which it gets us. It is up to the creative ingenuity of performers and composers to devise methods with which to deliver that magic to us all.