INART 55

History of Electroacoustic Music

Bell Labs


During World War II, Bell Labs evolved from a communications research firm to a more general think tank. They recruited scientists from many universities to work on radar. To make the offer attractive to these innovators, the company gave recruits the freedom to pursue whatever else interested them at the same time. The facility in Murray Hill, New Jersey, was near a nature preserve, and scientists were encouraged to walk in the woods and think. Artists were also recruited, as it was felt that an aesthetic side was necessary to bring meaning to new technologies, and the scientists were encouraged to pursue any creative interests that they had. Researchers engaged in a variety of pursuits often had adjoining offices, and there was a great deal of creative cross-pollination as a result of neighbors poking their heads into other offices to see what was going on. The open-ended nature of Bell continued through the 1960s, as the race to develop space-based technologies provided continuing impetus for innovation.


Photo courtesy of Max Mathews

Max Mathews (1926-2011), often called the "father of computer music," recalled the IBM 704 computer as a "big room full of refrigerator-shaped objects." Mathews joined Bell Labs after receiving his doctorate in electrical engineering from MIT in 1955, and later became the Director of Acoustic and Behavioral Research. His job was to create computer equipment to model telephony: to study the audio transmission of telephones and work to improve it. Mathews created some of the first digital audio equipment to digitize sound (the analog to digital converter, or ADC) and to create sound from digital information (the digital to analog converter, or DAC). This was a revisiting of pulse code modulation (PCM), which Bell Labs had implemented during World War II.

PCM represented a "connect the dots" method of storing and playing audio. Natural sound waves are continuous, meaning that between any two points on the wave there is an infinite number of further points. Computers can only operate on discrete values. Thus, in order for a computer to work with audio, the sound pressure wave had to be represented as a series of samples, each capturing an instantaneous amplitude value. The process is analogous to cinematic representation of motion, which is not continuous but rather consists of a series of sampled images that are projected fast enough that motion is perceived by the eye. On playback, a digital to analog converter turns the stored numerical samples into voltage changes, the voltage changes are filtered to smooth the transitions from point to point, and the resulting current drives a loudspeaker to create audio.
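The sampling idea can be sketched in a few lines of code. The following Python fragment is illustrative only; the signal, sample rate, and 16-bit quantization are arbitrary modern choices, not values used at Bell Labs.

# A minimal sketch of the PCM idea: sample a continuous wave at discrete
# instants, then "connect the dots" on playback. All values here are
# illustrative assumptions.
import math

SAMPLE_RATE = 8000      # samples per second
FREQ = 440.0            # a continuous sine wave to be sampled
DURATION = 0.01         # seconds

# ADC stage: capture instantaneous amplitude values at regular intervals.
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(int(SAMPLE_RATE * DURATION))]

# Quantize each sample to a discrete 16-bit integer, as PCM storage would.
quantized = [round(s * 32767) for s in samples]

# DAC stage (conceptually): each integer becomes a voltage step; a smoothing
# (lowpass) filter would then reconstruct the continuous wave between steps.
reconstructed = [q / 32767 for q in quantized]
print(reconstructed[:8])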


John Pierce
Having made this conversion process possible, Mathews became interested in using the computer to create musical audio. The story goes that he and John Pierce (1910-2002), the executive director of Bell's communication sciences division, attended a piano concert in 1956. Feeling cocky, and perhaps dissatisfied with the performance, one of them remarked to the other, "The computer can do better than this!" Mathews then began work creating a new type of software known as an acoustic compiler that could produce musical sounds.

The first such compiler, Music I, was created in 1957. It could generate a single triangle wave (a simple and easily computed wave shape) and offered control of pitch, loudness, and duration. Psychologist Newman Guttman made a short composition called The Silver Scale, an experiment contrasting equal temperament and just intonation. While the piece did not win praise for its compositional merits, this first step in synthesizing audio represented a fascinating beginning. Guttman's second piece, Pitch Variations, used a waveform consisting of a quick series of pulses. The result was a variation on amplitude modulation: the extremely fast volume changes produced sum and difference sidebands in addition to the wave's own frequency, so that multiple frequencies sounded at once.
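The sideband effect can be verified numerically. The following Python sketch (its frequencies are invented for illustration, not taken from Pitch Variations) multiplies a carrier by a fast amplitude variation and finds energy at the sum and difference frequencies as well as at the carrier itself.

# A sketch of why fast amplitude changes create sidebands, following the
# identity cos(a)*cos(b) = 1/2[cos(a+b) + cos(a-b)]. Values are assumptions.
import numpy as np

sr = 8000                                 # sample rate in Hz
t = np.arange(sr) / sr                    # one second of time points
carrier = np.cos(2 * np.pi * 1000 * t)    # 1000 Hz wave
modulator = np.cos(2 * np.pi * 200 * t)   # amplitude varies 200 times/sec

modulated = (1 + 0.5 * modulator) * carrier
spectrum = np.abs(np.fft.rfft(modulated))

# The three strongest frequency bins: difference, carrier, and sum.
print(sorted(int(k) for k in np.argsort(spectrum)[-3:]))   # [800, 1000, 1200]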

These works stood in contrast to the work of Lejaren Hiller, who had used the ILLIAC computer to generate numbers that were then notated and played by humans. In Mathews' implementation, the computer itself produced the audio. While computers had produced a variety of beeps and buzzes for alerts, this was the first time audio had been produced in a controlled manner to realize musical ideas. Music I was the first in a series of seminal software synthesis programs, known collectively as Music N.


Images courtesy of Max Mathews
The program was written for an IBM 704 computer, located not at Bell Labs but at IBM World Headquarters in New York City, where Bell Labs rented time at $200 an hour. (Bell Labs subsidized Mathews' extracurricular work developing music software.) Music I used the computer in conjunction with an IBM digital tape recorder, whose buffer system allowed large numbers of audio samples to be written to large tape storage reels. A separate machine at Bell Labs could play the sounds back through the digital to analog converter.

In 1958 the next version, Music II, was written for the IBM 7094, which was constructed with transistors instead of vacuum tubes. This program was faster, and thus capable of more interesting synthesis algorithms. It could produce four voices, with 16 possible wave shapes (timbres), via the creation of a wavetable oscillator. The wavetable remains the basis of computer music.

A wavetable is a description of an arbitrary wave shape, represented as a series of samples. This allowed new types of waveforms to be synthesized, not just the types of waves produced by analog oscillators. The wavetable capitalizes on the fact that pitched sounds are created from periodic waves, waves that repeat regularly. Musical material therefore contains a high degree of redundancy (Claude Shannon might say that music has low entropy). Because of this redundancy, it is not necessary to store samples representing the entire duration of a piece; one description of the wave is all that is needed.

A table is made up of a number of address locations, each of which contains an amplitude value for the wave. An incrementer reads through the table and sends sample values to a digital to analog converter. The oscillator keeps track of a current address in the table: it reads the value stored there, then skips ahead a number of addresses. When the end of the table is reached, the incrementer "wraps around" to the beginning of the table, so that addresses are read in a circular fashion (think of the values on a clock -- it's the same idea). The sampling increment, the number of addresses skipped from output to output, determines how quickly the table is iterated and, correspondingly, which pitch is produced. The sampling rate determines how many samples per second are sent to the digital to analog converter.
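A minimal wavetable oscillator along the lines just described might be sketched in Python as follows; the table size, sample rate, and stored wave shape are arbitrary choices for illustration.

# A minimal wavetable oscillator sketch: one stored cycle of an arbitrary
# wave, read circularly with a sampling increment that sets the pitch.
import math

SAMPLE_RATE = 44100
TABLE_SIZE = 512

# One cycle of a wave stored as amplitude values (here a sine, but any
# shape could be stored -- that is the point of the wavetable).
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(freq, num_samples):
    """Read through the table circularly, skipping 'increment' addresses
    per output sample; a larger increment iterates the table faster and
    so produces a higher pitch."""
    increment = freq * TABLE_SIZE / SAMPLE_RATE
    phase = 0.0
    out = []
    for _ in range(num_samples):
        out.append(table[int(phase) % TABLE_SIZE])  # wrap around circularly
        phase += increment
    return out

samples = wavetable_osc(440.0, 1000)   # 440 Hz for roughly 23 ms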

The stage was set for computers to become a force in the world of music, and Mathews and Pierce began searching for musicians to work with. In 1959, Mathews and Guttman gave a talk titled Generation of Music by a Digital Computer at the third International Congress on Acoustics in Stuttgart, Germany, where Pitch Variations was played. Among those in the audience was Iannis Xenakis. That same year, they went to the Audio Engineering Society convention in New York City, where they met Babbitt, Ussachevsky, and Luening, and where Pierce gave a talk titled The Computer as a Musical Instrument.

Mathews and Guttman worked at the Columbia-Princeton Electronic Music Center with Varèse to produce a new version of the soundtrack to Déserts. Part of the work involved a visit to Bell Labs, where they made a digital recording of the sound of a buzz saw. Varèse thus took an interest in computer music, and arranged for Guttman's Pitch Variations to be played at a 1959 concert in New York City featuring his own work and that of John Cage. This was the first public performance of computer music. Bell Labs produced a recording called Music from Mathematics and sent copies to Leonard Bernstein and Aaron Copland. They received a note of thanks from Bernstein's secretary, and a personal reply from Copland, who wrote, "The implications are dizzying and, if I were 20, I would be really concerned at the variety of possibilities suggested."

Music III, created in 1960, established what became foundational principles of computer music. It allowed complex instruments and musical scores to be created.

The instrument stage incorporated a modular design of unit generators. These are discrete software units that can output a waveform (e.g., an oscillator or noise unit generator), process a waveform (e.g., a filter unit generator), or do math (e.g., an adder unit generator can combine two signals by summing their samples). Unit generators have one or more inputs and one output, and the output of one unit generator may be directed to the input of another. Complex instruments could be created by patching together unit generators. A flowchart illustration shows the virtual interconnections of unit generators and the mathematical operations that may be performed on their outputs, with signal flow starting at the top and moving down to the final output. In the illustration, a sawtooth wave oscillator with a frequency of 220 Hz and an amplitude of 0.3 is sent to a lowpass filter. The cutoff frequency of the filter is made to oscillate by a sine wave oscillator that goes through one period every two seconds. The sine oscillator's amplitude value of 1800 causes it to output values between -1800 and 1800, and an adder offsets these values by 2200, causing the cutoff frequency to oscillate between 400 and 4000 Hz.
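The patch just described can be approximated in code. In this Python sketch, each function stands in for a unit generator; the one-pole lowpass filter is an assumption for illustration, not the filter design Music III actually used.

# Unit-generator patch sketch: sawtooth -> lowpass filter, with the cutoff
# swept by a slow sine oscillator plus an adder's offset.
import math

SR = 44100  # sample rate

def saw_osc(freq, amp, n):
    # Sawtooth unit generator: ramps from -amp to +amp once per period.
    return [amp * (2.0 * ((freq * i / SR) % 1.0) - 1.0) for i in range(n)]

def sine_osc(freq, amp, n):
    # Sine unit generator, used here as a slow control signal (0.5 Hz).
    return [amp * math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def lowpass(signal, cutoffs):
    # One-pole lowpass unit generator with a per-sample cutoff input.
    out, y = [], 0.0
    for x, fc in zip(signal, cutoffs):
        a = 1.0 - math.exp(-2 * math.pi * fc / SR)  # smoothing coefficient
        y += a * (x - y)
        out.append(y)
    return out

n = SR * 2                          # two seconds: one full control sweep
source = saw_osc(220.0, 0.3, n)     # 220 Hz sawtooth, amplitude 0.3
lfo = sine_osc(0.5, 1800.0, n)      # one period every two seconds
cutoff = [2200.0 + v for v in lfo]  # adder: cutoff sweeps 400..4000 Hz
output = lowpass(source, cutoff)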

The score stage was a list of musical events, each with a time value. Events could be listed in any order, as the computer re-sorted them chronologically when the audio was created. Each event was assigned to an instrument and consisted of a series of values for the unit generators' various parameters (frequency, amplitude, duration, cutoff frequency, etc.). Each unit generator and each note event was entered on a separate punch card. While punch cards are no longer used to enter information into a computer, the concept of complex instruments made up of interconnected unit generators remains fundamental to music software and commercial digital instruments.
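The score concept survives in modern terms as a sortable list of timed events. The following Python sketch uses invented field names and values; it does not reproduce the actual Music III punch card format.

# Score sketch: note events carry a start time plus parameter values for an
# instrument, may appear in any order, and are sorted before synthesis.
events = [
    # (start time s, instrument, frequency Hz, amplitude, duration s)
    (2.0, "saw_instrument", 330.0, 0.3, 1.0),
    (0.0, "saw_instrument", 220.0, 0.3, 2.0),   # out of order is fine
    (1.0, "saw_instrument", 440.0, 0.2, 0.5),
]

# Re-sort chronologically, as the compiler did before computing audio.
for start, instrument, freq, amp, dur in sorted(events):
    print(f"t={start}s: {instrument} freq={freq} amp={amp} dur={dur}")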

This incarnation of the software allowed values to change gradually over specified amounts of time. Mathews' Numerology (1960) displays a variety of such computer effects: extremely fast vibratos, single notes that undergo large frequency changes, simulated volume changes produced by varying vibrato frequency and amplitude, and note onsets that accelerate to a rate unplayable by any human performer. This ability to change parameters gradually became one of the distinctive features of computer audio, in contrast to tape splicing, which lent itself to quick juxtapositions of unrelated elements. Composer/historian Kyle Gann observes that many computer composers consequently wrote pieces that unfold more slowly, experimenting with gradually changing parameters (most often timbre).
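A gradual parameter change of this kind can be modeled with a simple line-segment generator, sketched below in Python with invented values.

# Line-segment sketch: interpolate a parameter (here, frequency) between
# two values over a specified time span, one sample at a time.
def line(start, end, seconds, sample_rate=44100):
    n = int(seconds * sample_rate)
    return [start + (end - start) * i / (n - 1) for i in range(n)]

# A single note whose frequency glides from 220 Hz to 880 Hz over 5 seconds.
freq_curve = line(220.0, 880.0, 5.0)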

SOURCES:
Paul D. Lehrman, "Recalling a Legendary Playpen: A Revisit with Bell Labs," Insider Audio, March 1999.
Max Mathews, keynote address, International Computer Music Conference, 2006.