INART 55

History of Electroacoustic Music

Interactive Composition


Joel Chadabe's CEMS System, an assembly of sequencers and random processes, pushed the envelope of synthesizer composition by creating pieces as they were played.



In the 1970s, Salvatore Martirano (1927-1995), an instrument designer and performance artist who worked with the San Francisco Tape Music Center in the early 1960s, created the SalMar Construction at the University of Illinois. Its performance panel of 291 switches provided digital routing of signals -- larger sequences controlled smaller sequences. The panel was connected to a set of analog filters, oscillators, and amplifiers, and the output was sent to 24 speakers. Four programs ran on the instrument simultaneously, and the performer (Martirano) could change all four or just one. A "zoom" capability allowed him to jump to different levels of control. Like Chadabe's CEMS System, the SalMar was an interactive system in which the performer created the piece along with the machine. The piece Look at the Back of My Head for a While is a recording of one of these improvisations.

In 1977, Joel Chadabe purchased a Synclavier system specially configured so that custom software and performance interfaces could be added to it. In 1978, his piece Solo used this system with a program Chadabe had written that composed melody notes within predefined parameters of harmony and melody. Melodies were created for eight melody lines played by the Synclavier: two clarinets, two flutes, and four vibraphones. In performance, Chadabe controlled the composition with two modified theremins built by Robert Moog. The two theremin antennae were placed on the stage, about three feet apart. Moving a hand toward or away from one of them controlled the tempo by changing the durations of the notes; moving a hand in the proximity of the other controlled the relative volume of the instruments (and thus the overall timbre produced). Chadabe described performing the piece as being like "a conversation with a clever friend." He could cue the clarinets to play slowly, for example, but he did not know what notes they would play; the notes they produced informed his decision about what to do next. In 1981, Chadabe coined the term interactive composition to refer to a process in which a performer shares control of the music with a computer.
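Chadabe's actual software is not reproduced here, but the theremin control mapping can be sketched abstractly. In the illustrative Python fragment below, the sensor readings, value ranges, function names, and the two-way instrument grouping are all assumptions made for the example, not details of Solo:

```python
# Hypothetical proximity readings in [0.0, 1.0]: 0.0 = hand far, 1.0 = hand close.

def duration_from_sensor(reading, min_dur=0.1, max_dur=1.0):
    """Map one antenna's proximity to note duration in seconds:
    the closer the hand, the shorter the notes (faster tempo)."""
    return max_dur - reading * (max_dur - min_dur)

def mix_from_sensor(reading):
    """Map the other antenna's proximity to a relative balance
    between two assumed instrument groups (crossfade)."""
    return {"winds": 1.0 - reading, "vibraphones": reading}

print(duration_from_sensor(0.8))  # about 0.28 s per note: hand close, fast tempo
print(mix_from_sensor(0.25))      # mostly winds, a little vibraphone
```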

Chadabe's piece Follow Me Softly (1984) was an improvisation for Chadabe, on the Synclavier system, and a percussionist. Sitting at the alphanumeric keyboard, Chadabe could trigger notes, change timbres, or produce melodic figures.

While Chadabe was developing his pieces involving the Synclavier, Morton Subotnick had taken a step beyond the sequencing techniques featured on his early albums. His "ghost" box featured a pitch and envelope follower for live audio input. Changes in pitch and amplitude were converted to voltage changes, which could then be sent to an amplifier, a frequency shifter, and a ring modulator. The parameters of these processors could be controlled by moving a finger across a touch-sensitive plate, so that "invisible" gestures controlled the performance -- hence the name "ghost" box. These manual gestures could also be recorded onto tape and played back at each performance.
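The ghost box was an analog device, but two of its central ideas -- following a signal's amplitude to derive a control value, and ring-modulating the signal -- translate directly into a short digital sketch. The Python/NumPy fragment below is a minimal illustration under assumed parameter values, not a model of Subotnick's circuitry:

```python
import numpy as np

def envelope_follower(audio, sample_rate, attack_ms=5.0, release_ms=50.0):
    """Track the amplitude envelope of a signal sample by sample,
    rising quickly (attack) and falling slowly (release)."""
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros_like(audio)
    level = 0.0
    for i, x in enumerate(np.abs(audio)):
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

def ring_modulate(audio, sample_rate, carrier_hz=440.0):
    """Multiply the signal by a sine carrier, producing sum and difference tones."""
    t = np.arange(len(audio)) / sample_rate
    return audio * np.sin(2 * np.pi * carrier_hz * t)

# A one-second 220 Hz test tone stands in for the live input.
sr = 44100
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
env = envelope_follower(tone, sr)         # a "control voltage" derived from amplitude
processed = ring_modulate(tone, sr, 300)  # one of the ghost box's processors
```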

Subotnick worked with the "ghost" box until the mid-1980s. His pieces that used it include Two Life Histories (1977), Liquid Strata (1977), Parallel Lines (1978), The Wild Beasts (1978), The Last Dream of the Beast (1979), A Fluttering of Wings (1981), and Axolotl (1981).

Robert Rowe (1954-), who trained at the Institute of Sonology and at IRCAM between 1978 and 1987, and who received the first music Ph.D. from the MIT Media Lab in 1991, has made a systematic study of interactive computer music systems. He considers such systems to be an intersection of the fields of music theory, music cognition, and artificial intelligence, and thus has taken Mathews' idea of the intelligent machine to new levels. His work examines how computers may be programmed to "sense" salient musical features such as individual phrases, tempo, density, loudness, register, and harmonic progression. Based on what they "sense," computers may then process incoming music and be programmed to "respond" to the input of a human musician. His work thus explores new ways in which computers may be made to interact with human musicians.

Rowe classifies interactive music systems along three dimensions:

  • Score-driven vs. performance-driven
    • Score-driven programs compare incoming music to stored collections of events or musical fragments, and typically operate with traditional time-based categories of beat, meter, and tempo.
    • Performance-driven systems do not match incoming music against stored patterns, and typically operate not according to traditional time-based categories but according to more general categories such as regularity, loudness, or density.
  • Response methods
    • Transformative methods take incoming musical material and create variations on it -- inverting all intervals, playing a phrase backwards, transposing the material, etc. (a sketch follows this list).
    • Generative algorithms operate at a higher level of abstraction, generating output from elementary source material such as a certain scale or set of durations.
    • Sequenced responses consist of stored musical passages that are played back in response to some type of input -- a certain pitch being held for a long duration, a trigger from a device such as a pedal, and so on. Variations on the stored material during playback may include changing the tempo or dynamic shape.
  • Paradigms
    • Instrument paradigm systems extend the player's instrument, so that a solo performance is augmented by effects, pitch doubling, and so on.
    • Player paradigm systems attempt to create another player, so that the system plays a type of duet with a musician, adding another personality to the performance.
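As a concrete illustration of the transformative category referenced above, the short Python sketch below applies three classic transformations to a phrase of MIDI note numbers; the phrase itself is invented for the example:

```python
# A hypothetical input phrase as MIDI note numbers (60 = Middle C).
phrase = [60, 62, 64, 67, 65]

def transpose(notes, semitones):
    """Shift every pitch by a fixed interval."""
    return [n + semitones for n in notes]

def invert(notes, axis=60):
    """Mirror each pitch around an axis note, inverting every interval."""
    return [2 * axis - n for n in notes]

def retrograde(notes):
    """Play the phrase backwards."""
    return list(reversed(notes))

print(transpose(phrase, 5))  # [65, 67, 69, 72, 70]
print(invert(phrase))        # [60, 58, 56, 53, 55]
print(retrograde(phrase))    # [65, 67, 64, 62, 60]
```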

Rowe's own software program Cypher receives incoming music in the form of MIDI data and outputs its responses as MIDI data. A Listener analyzes six types of input features, plotting each MIDI event onto a six-dimensional feature space with axes representing speed, density, dynamic, register, harmony, and rhythm. These six feature streams are tracked over time for regularity or irregularity. A Player is configured by a composer to output certain features in response to certain input features. Working with a graphical user interface, the composer can specify what timbres should be played and what transformations should be applied: extra accents added to the output when the input is played in a low register, for example, or an output pitch that is an inversion of the input pitch about a note (such as Middle C), or the regularity or irregularity of a certain feature affecting the phrase length of the output material. A series of these connections is stored as "states" that advance over the course of a piece. A Critic tracks the Player's output. In the absence of input from a musician, the Critic has the Player continue to generate material based on features that the musician has recently played.
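Cypher's actual code is not shown in this text, but the Listener/Player division can be caricatured in a few lines. The toy Python sketch below is a deliberate simplification with invented names: it reduces the six-dimensional feature space to three features and hard-codes one composer-specified rule (accent the response when the input is in a low register, and invert the pitch about Middle C):

```python
from dataclasses import dataclass

@dataclass
class Features:
    """One analyzed MIDI event in a simplified feature space (names hypothetical)."""
    pitch: int     # MIDI note number
    velocity: int  # MIDI velocity, 0-127
    register: str  # "low", "mid", or "high"

def listen(pitch, velocity):
    """A toy Listener: classify a MIDI event into the feature space."""
    register = "low" if pitch < 48 else "high" if pitch > 72 else "mid"
    return Features(pitch, velocity, register)

def play(features, axis=60):
    """A toy Player: respond to input features with transformed output.
    If the input sits in a low register, accent the response (boost velocity);
    always invert the pitch about the axis (Middle C by default)."""
    out_pitch = 2 * axis - features.pitch
    out_velocity = min(features.velocity + 20, 127) if features.register == "low" else features.velocity
    return out_pitch, out_velocity

# A low C (MIDI 36) at velocity 80 comes in; the response is inverted and accented.
event = listen(36, 80)
print(play(event))  # (84, 100)
```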