Procuring a reasonably good demo recording of the pieces I have written has always been a frustrating chore. Many good performances of my music have gone unrecorded or been badly recorded. The good recordings I have received I have treasured like gold, but many came with strings attached that did not allow me to publish them online.
Composed music began moving to the computer some thirty years ago, and notation programs started including some sort of playback feature a few years later. It was a breakthrough to be able to hear what you wrote immediately, and it changed composing forever. But the actual sound of those playback tools has always been, in a word, atrocious!
This is no longer the case. Sibelius, Finale, and now Dorico have been improving the tolerability of their sounds for a decade or more, but the release of NotePerformer, an independent artificial-intelligence sample library designed to be used with notation software, has finally crossed the line of believability. It is available for all three notation platforms. It is probably best suited for Sibelius, but I have been using it with Dorico, which is the software I have come to prefer.
NotePerformer uses multiple iterations of solo sounds to create the sound of a section, rather than relying on a separate section sample. It randomly offsets the individual sounds to give the “halo” effect so characteristic of strings, for instance. The AI is built into the interpretive/musicality side of the program, but if you don’t like what it does, you can add instructions (e.g., legato, détaché) to change its approach to the music. It reads articulations and most instructions (pizz., snap pizz., pont., marcato, etc.). Dorico does not read glissando/portamento indications as yet, though Sibelius does. Sibelius also maps several different brass mutes, while Dorico offers only a straight mute.
As an example, I thought I would use a work I wrote for nine solo strings (4 violins, 2 violas, 2 celli, and double bass). I wrote this piece in 1993 and performed it with some of my colleagues from the Phoenix Symphony, but there was something wrong with the original recording and I couldn’t use it. None of the other performances were recorded, so I have had to present the work using an old MIDI recording. I hated it. MIDI renderings of strings have always been the most cringe-worthy: strings have such a complicated sound, with such richness and variety, that reproducing them has been nearly hopeless. Not anymore. The NotePerformer performance below is maybe not the equivalent of a live performance, but it is very realistic.
The piece, A Wake At Night, is derived from a long improvised melodic line. All of the other accompanying material is derived from that melodic line and surrounds the part of the melody it was derived from. The title has multiple meanings, but the primary one refers to this structure. The melodic line is the “boat” and the surrounding material is the “wake” which has been generated by the boat. At night the wake shimmers with phosphorescence, and thus it is a metaphor for the swirling material surrounding the melody, which is often more prominent than the original.
G. Stallcop: A Wake At Night (1993) – for nine strings
Working around the confines of traditional music notation
When I was first drawn to solo piano improvisation, about forty years ago, it was because the kind of music I heard being improvised was not available in written form. It wasn’t the style that got my attention; it was the freshness and spontaneity and the way the music unfolded. Though I spent quite a few years trying to mimic improvisation in my composing, I was never entirely successful. I thought that if I recorded some improvisations and then transcribed them, I could get a result that approached the feeling of the original improvisation.
I started in the late 1970s with a reel-to-reel tape recorder, which made transcribing no mean feat! It took weeks to transcribe each improvisation, with some passages having to be played back at half speed over and over again. Getting all the notes was sometimes difficult, but deciding on the rhythm could be ridiculously hard. At least the notes were real; rhythm is an abstract concept. Deriving beats, determining the meter, and deciding where in the meter the music fell all became very difficult, especially when I was playing freely, which was increasingly the case. It’s harder to find the beat when the beat keeps changing.
When a performer does not conceive of his or her improvisation as following a particular tempo or meter, transcription becomes nearly impossible. I would try to derive a sense of strong or weak beats through groupings and emphasis, see if there was consistency, and count beats through the long notes. Even after deriving the logic of what was played, all I could do was show the groupings and write “freely” for the tempo. For solo piano this would be all right, but if I were to arrange it for even a small group, I would have to be much more definitive. I ended up making a lot of compositional decisions that didn’t relate to the original at all.
When I began to record using MIDI, things became somewhat easier because all the notes were there, but the rhythm was still a problem. People have asked me why I didn’t just let the computer transcribe my improvisations. The reason is that if you don’t play WITH the computer (i.e., to a click track), the computer doesn’t know what you are doing. It often doesn’t know anyway.
Music notation programs started becoming available about 30 years ago. They were rather basic at first, but after about 15-20 years they had it pretty well figured out. The programs can now notate and play just about anything you can write: they read and play slurs, articulations, dynamics, and a number of instructions such as pizz. and col legno. They can alternate between 20 or 30 different samples per line, and they sound great. They all have mixing boards, and notation software has become an instant recording studio. But transcription, apart from easy rhythms, is still a problem, and you still have to play with the computer. Though they can change tempo on a dime, to the hundredth of a metronome marking, they cannot follow what you are playing beyond rounding to the nearest eighth or sixteenth note. If you are playing freely, they are no help at all.
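The “rounding” at work here can be sketched in a few lines. This is a hypothetical illustration of my own, not the algorithm of any particular notation program: freely played onset times get snapped to the nearest sixteenth-note grid position at a fixed tempo, which is exactly what flattens a free rhythm into something choppy.

```python
# A minimal sketch (not from any actual notation program) of grid
# quantization: snap freely played onset times (in seconds) to the
# nearest 1/division of a beat at a fixed tempo.

def quantize(onsets_sec, tempo_bpm=60, division=4):
    """Snap onset times to the nearest 1/division of a beat.

    At 60 BPM with division=4, the grid step is a sixteenth note
    (0.25 seconds)."""
    beat = 60.0 / tempo_bpm      # seconds per quarter note
    step = beat / division       # grid step, e.g. a sixteenth
    return [round(t / step) * step for t in onsets_sec]

# Freely played notes drift just off the grid...
free = [0.00, 0.27, 0.46, 0.81, 1.08]
# ...and get forced onto even sixteenths, losing the rubato entirely.
print(quantize(free))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Every expressive deviation smaller than half a grid step simply vanishes, which is why a quantized transcription of free playing sounds mechanical, and a non-quantized one looks illegible.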
I was not able to get transcriptions of my improvisations to sound anything like my originals without a ridiculous amount of markings and tempo changes. I felt I needed a better way to transcribe them. At first, I thought maybe I should allow the performer more freedom and try to capture the feeling I had when I improvised them. I tried more abstract notation systems. I tried spatial notation. I tried graphic notation. I tried a more generalized form of regular notation without all the intricacies of my original. But all of these attempts had the same problem – you couldn’t practice them! All the pianists who tried to play them, including me, had to alter them to practice them. The pianist would end up deciding how they were going to play them and then change the notation to accommodate their decisions. That was not the intention.
I spent a long time, several years, on this problem. I mentioned it to Gina Genova at the American Composers Alliance (my publisher). She told me about some late piano pieces by Earle Brown where he just improvised freely on a keyboard and let the computer transcribe them. She had had a pianist clean them up and perform them, and thought they had worked out fine. She suggested that I try doing my transcriptions that way. I balked. A computer transcription of my music was not only illegible, it wasn’t very accurate. The computer would “round” the note values off to the nearest sixteenth, or whatever you chose, and make the music sound really choppy.
A computer transcription was not a good choice, but what the computer was trying to do was transcribe the music against a steady pulse instead of conveying the imagined pulse of the music. That concept became increasingly intriguing. It would be like drawing a grid of squares across a photograph and reproducing it square by square in a painting, much like the procedure for painting billboards. Using triplets, quintuplets, and syncopation to convey the differences in meter and tempo would smooth things out, and it could be made to work if I were careful about it. Transcribing against a grid would capture much more of the original improvisation than instructions and tempo changes would. The more I thought about it, the more I was tempted to try it.
My recording program displays the MIDI information on a graph that looks like a piano roll for a player piano, but it does so against the background grid of a chosen tempo. In my case, that tempo is rather arbitrary because I don’t use it to keep a beat. So I tried transcribing a few improvisations against this rhythmic grid to get a feel for what was involved. I discovered I had to be careful not to make the transcription too complicated. If the original was consistently just a little off from the grid, I would find a way to align it better. I was happy, at that point, to have the notation software play the transcription back faithfully for me, because I could compare it with the sound of the original. I discovered that some rhythmic subtleties are difficult to determine visually on the screen but much easier to hear in playback. Generally, it worked out much more smoothly than I would have guessed. I ran into a tough measure or passage every once in a while, but I was able to work through them and get it done.
To test the result, I practiced and learned to play the pieces I transcribed, and the results were very interesting. The transcription was different from the rhythms I had imagined when I listened to the improvisation, but as I practiced the music, my conception of the music changed! I remember this happening with a number of other pieces I had played, where I had heard a recording first and imagined the rhythm as different from what the composer had written (usually the composer was Stravinsky). But I just re-conceived the piece once I saw it notated. Once I saw the rhythm, I was OK (usually).
Notating the music “irrationally” was not only much truer to the original, it actually brought out relationships that I didn’t realize were there. Though the process was not like clicking a button and letting the computer do it, it was not really that difficult. I could make good progress and finish a transcription of a five-to-seven-minute improvisation in a few days, which was generally faster than most methods I had tried. There certainly are some tricks to it, but the process gets easier the more I do it.
The end result is that I am more than happy doing my piano transcriptions this way. I think the clincher came when I realized that the concentration level I used when performing the transcriptions was close to the level I needed to create them in the first place. Of course, I was concentrating on completely different things, but the feeling was very much the same. A performer needs to concentrate on enough detail to give a properly involved performance, and that level of concentration has a lot to do with how much the performer enjoys the experience. And a happy performer makes for a happy composer!
So I am now in the process of transcribing some “suites” from my albums. The transcriptions are actually true enough that I can use the original recordings as examples. Having a written version of the music available is of no consequence to the casual listener, but if you play piano, it is always considerably more enjoyable to play through the music yourself. Being an improviser has been very rewarding musically, but it is a little lonely. People either like or don’t like what you do, but it is all on a surface level. When your music is written down, musicians get to know it better and thereby get to know you better. It is more rewarding for everyone.