For the most part, I feel that the rise of recorded music has been positive. It preserves performances and non-written music, and gives everyone access to music from everywhere. But one of the main downsides is that it has turned most of the world into listeners instead of participants. Whereas most people used to sing or play if they felt the need for music in their lives, now they just push a button, turn a dial, or swipe. Some people now listen to music for most of their waking hours – at work, at home, driving, shopping, walking, everywhere – but they never actually participate in any of it.
This easy accessibility comes with a price. Aside from the missed social opportunity, there is a lack of any first-hand musical experience, of any understanding of what it actually feels like to make music. Though there are probably more people who consider themselves musicians now than ever before, for most people the consumption of music has become completely passive. Michael Tilson Thomas, music director of the San Francisco Symphony, composer, and new-music advocate, once said that because nobody sings anymore, people have been losing contact with the emotional significance of the notes. As a result, he thought, many people now hear music as a collection of sounds, turning all music into percussion music. This doesn’t mean the music is devoid of expression or nuance, but it does mean there is a general apathy toward tonal nuance.
This, I think, is most apparent in pop music, dance music, hip-hop, rock, and other commercial genres, but it is also apparent in Classical music. Minimalism, though no doubt chemically inspired at first, quickly became a serious discourse. Its popularity was also derived from its urban roots. John Cage talks about all the (urban) sounds around him, with a special love for the sound of traffic. This background hum of electrical, mechanical, and other activity seems to be the very essence of Minimalism. There is not much tonal nuance in traffic, though. In fact, the numbing, trance-like sameness of Minimalism is almost the antithesis of emotional nuance, though there is a cumulative aspect to the music that can be quite powerful. The pan-diatonicism and pan-metricism with which the style began soon developed into highly structured, multi-leveled hierarchies, as composers saw the style’s potential for organizing pure sound. Multiple layers of minimally changing sound ideas allowed them to organize on several metric hierarchical levels at once. John Adams, for instance, has used Minimalism to create huge, intricate, almost “maximal” compositional structures. Composers essentially turned meter and texture into the new tonality: specific sounds became structurally significant simply by when, where, and how prominently they were used.
But this kind of use of sounds, both musical and nonmusical, has a drawback: it demands meaning. A musical representation of an urban milieu is not reason enough to group random sounds together. There has to be a reason to choose which sounds are played together and which aren’t. Why start here? Why end there? Do the sounds clash? Do they blend? Composers can’t really write what used to be called “pure music” with sound. It demands justification; otherwise, it just “is.”
Because of this, many composers started to use certain sounds for their cultural significance, including borrowed styles and direct quotes from other composers. One of the first (and still one of the best) uses of this technique appears in the “Scherzo” movement of Berio’s Sinfonia (1968-69), which is a masterpiece. Recording artists, meanwhile, have been using “samples,” “mash-ups,” and “remixes” for over thirty years now. This may solve the problem of meaning (and can be really interesting), but it doesn’t provide the same kind of direct emotional involvement that people are used to enjoying from the music they listen to.
This quandary of meaning and expression is not a problem facing only Minimalism and other sound-based music; it is a problem with all music constructed in “layers.” Composers have always composed in layers to a certain extent, but I am talking about independent layers. Today, composing with layers of music has become the norm. It started with multi-track tape recorders; that methodology was carried wholesale into computer sequencing and digital recording, and has worked its way into written music as well. Composers have toyed with the idea of juxtaposing unrelated musical materials for hundreds of years. Take the simultaneous onstage dance bands in Mozart’s Don Giovanni, the converging marching bands in Ives’ Decoration Day, or the unrelated layers of his Unanswered Question. Of course, Mozart’s layers are perfectly integrated harmonically, even if they are in different meters, but Ives’ layers are unrelated in every way except meaning.
Stravinsky is another composer who explored the idea of compositional layers early in his career. The opening of The Rite of Spring (in fact, all of Part One) is a textbook example of compositional layers; Part Two is much more linear. It is also very effective, and some of my favorite music! I have always felt, however, that the reason Stravinsky abandoned this approach was not that he had to escape through a back window, and not World War I and the fact that he was broke, but that he saw the limitations of the approach, both structurally and expressively.
One of the most interesting parts of this phenomenon involves popular music. Popular music has always been rhythmic, but since the emergence of rock and the infusion of blues and other African influences, the music has taken on a different cultural outlook. Music in Western countries generally tends to be well integrated, with melody, harmony, and rhythm working together as a whole. Music in much of the rest of the world tends to be set up as a vehicle for individual expression against a rather static, unchanging background: a drone, a repeating rhythm or pattern, or a combination of the two. Popular genres seem to be adopting this model more and more. Vocal lines and solos are where all the expression is; the instrumental parts are the big, bad, unchanging world. This is a huge exaggeration, of course, but it seems to be one of the most common and successful answers to the problem of creating music with sound rather than notes. And the music is even more rhythmic today; hip-hop has become an art of sound collage. Of course, pop music always has words, so there is never any doubt what it is about.
The drawback is that instrumental music is increasingly being pushed into the background. On one hand, this has led to some very imaginative music for TV and film; on the other, it has led to some less-than-thrilling attempts at Classical music. New sound-based Classical music does well when it has an exciting soloist grabbing the audience’s attention, and it has had success with new opera, dance, and video, but it is struggling to get its audience to pay attention to sound-based instrumental music in its own right. I don’t have an answer for this. But it would be a good idea for a composer to remember that a sound collage is exactly what the audience hears every day when they step outside. To get an audience’s attention these days, composers must have something in the foreground. If not a soloist, or even a melody, then at least something front and center – electronics, whale songs, video, dance, or some other “hook!” Otherwise, to the audience, the music sounds just like their real world, and there is no reason to listen.