Madness to the Method - Gain Structure, By Mark Amundson
In this dissertation I am going to do a little Q&A, à la the old Audio Cyclopedias. You may have wondered about these questions, but always just did what you were told. I am going to throw down a generous helping of electronics history as a way of answering them, to remind us all where all this technology came from.
Question #1: Why is 0dBu the reference signal level?
Answer:

The question could easily be re-phrased: why is 0.775 volts RMS (0dBu) the standard and not some other convenient number like 0.1, 1, or 10 volts?

The answer goes way back to Alexander Graham Bell’s era, when there was no such thing as radio or broadcasting. As the Bell System and American Telephone and Telegraph (AT&T) monopolized the phone service industry, Western Electric Company was formed as a subsidiary of the Bell System to design and produce telephone gear for the whole country. After much trial and error, a standard two-wire pair transmission line was developed with 600-ohm source and load impedances to transfer carbon microphone signals down the wires with maximum power. With the right construction materials, voice signals (about –20dBu) could travel five miles with a passable loss of signal amplitude.

When Lee DeForest invented the vacuum tube triode for signal amplification, his “killer app” re-boosted feeble telephone signals, creating long-distance phone service in the second decade of the last century. Western Electric still had a lock on the electronics industry in the 1920s as broadcast radio was just emerging, so naturally it had the highest technology suitable to fulfill civilian and military requests for standard “Public Address” apparatus. By the early 1930s, Western Electric had the first quality dynamic microphone (requiring no DC power, unlike carbon mics) and combined vacuum tube amplification with the first efficient “loud-speaking apparatus” that we now know as horn-loaded drivers.

As broadcast radio became widespread, and specialized companies like Electro-Voice, Magnavox, and Shure Brothers came to supply (with Western Electric) the needs of public address and broadcast gear, the 600-ohm line cabling still held as the lowest-loss method of distributing and processing audio signals. From that era, a one-milliwatt reference level into 600 ohms became the reference level: 0dBm (zero decibels referenced to one milliwatt). Zero dBm into 600 ohms works out to 0.775 volts RMS, but as technology marched on and audio electronics moved from matched-impedance power transfer to high-impedance “bridging” inputs, the same 0.775 volts without any specified load impedance came to be described as 0dBu (zero decibels unloaded).
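The 0.775-volt figure falls straight out of the power equation P = V²/R. Here is a minimal Python sketch of that arithmetic; the function names are mine, invented for illustration:

```python
import math

def dbm_reference_voltage(impedance_ohms: float = 600.0,
                          power_watts: float = 1e-3) -> float:
    """Voltage that dissipates the reference power in the load: V = sqrt(P * R)."""
    return math.sqrt(power_watts * impedance_ohms)

V_REF = dbm_reference_voltage()  # ~0.775 V RMS, the 0dBm/0dBu reference

def volts_to_dbu(v_rms: float) -> float:
    """dBu: decibels relative to 0.775 V RMS, with no load impedance implied."""
    return 20.0 * math.log10(v_rms / V_REF)

print(f"0 dBm into 600 ohms = {V_REF:.4f} V RMS")   # 0.7746
print(f"1.0 V RMS = {volts_to_dbu(1.0):+.2f} dBu")  # about +2.2 dBu
```

Run the numbers and you can see why 0.775 V survived the move to bridging inputs: the voltage, not the milliwatt, is what the gear actually cares about.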

To answer the lingering question of what became of Western Electric: government anti-monopoly policies in the 1930s forced the breakup of AT&T (the first time) into RCA for broadcast, Bell Labs for telephony, and All-Technical Products (Altec) for public address. Altec slowly became Altec Lansing, then split into Altec and James B. Lansing Inc., which became JBL.

Question #2: Why gain (trim) to line levels and then mix afterwards?
Answer:

This question is more math than history, but we still thank the early broadcast pioneers of the 1930s for the first work on defining signal-to-noise ratio and characterizing noise sources. The question could also be phrased as: what is the best method to minimize hiss in the mixing console?

The answer comes from the invention of radio, and the techniques used to maximize signal-to-noise ratio, and thus transmission distance. As a signal is created, processed, and sent to its final destination, its signal-to-noise ratio (SNR) degrades. As each stage, or processing block, passes on the signal, the noise eventually encroaches on the signal level. The dB drop in SNR contributed by a stage is defined as its noise figure (the same quantity expressed as a linear ratio is the noise factor). A noise figure of 6dB or less per amplification (gain) stage is considered a low-noise design for a preamp.

To better visualize this idea, let’s put some example numbers to work. If a typical dynamic mic and voice put out –50dBu signal peaks, and the console’s input-referred noise is –128dBu, you have a 78dB SNR, which is respectable in live sound applications. As the signal proceeds through the channel mic preamp, EQ section, channel fader or VCA, summing amps, master fader, and balanced line driver, there is a noise figure penalty to be paid at each stage. The good news is that when two gain stages are cascaded, the noise figure of the first stage dominates, with the second stage’s noise contribution effectively divided by the gain of the first. What this means is that cheaper electronics can be used after the mic preamp, with a high-gain preamp covering for the sins of the rest of the console’s electronics.
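That “first stage dominates” behavior is the Friis cascade formula from radio engineering. A minimal sketch in Python (the stage gains and noise figures below are illustrative, not measurements of any particular console):

```python
import math

def cascade_noise_figure_db(stages):
    """Friis formula: overall noise figure of cascaded stages.

    stages: (gain_dB, noise_figure_dB) tuples in signal-flow order.
    Each stage's noise contribution is divided by all the gain ahead of it.
    """
    total_noise_factor = 1.0   # start from an ideal, noiseless chain
    gain_ahead = 1.0           # linear power gain preceding the current stage
    for gain_db, nf_db in stages:
        noise_factor = 10.0 ** (nf_db / 10.0)
        total_noise_factor += (noise_factor - 1.0) / gain_ahead
        gain_ahead *= 10.0 ** (gain_db / 10.0)
    return 10.0 * math.log10(total_noise_factor)

# A quiet 50 dB mic preamp followed by a much noisier line-level stage:
chain = [(50.0, 3.0), (0.0, 20.0)]
print(f"{cascade_noise_figure_db(chain):.2f} dB")  # barely above the preamp's 3 dB
```

With 50 dB of clean gain up front, even a 20 dB noise figure in the following stage barely moves the total: exactly the “cheap electronics after the preamp” argument above.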

One other item to be shared is that attenuation circuits (EQ filters, faders, pots, VCAs, etc.) can generally be assumed to be direct losses in SNR, with every dB in attenuation a corresponding dB increase in noise figure. So the theoretical perfect (low noise) mixing console setup would be faders maxed, EQ flat, and amplifier gains a perfect match between mic level and power amp full power sensitivity.

But no realistic scenario exists on a mixer without faders to “mix” with. So the next best answer is to take your desired loudest channel in the mix, set its preamp gain control (gain, trim, etc.) for about a 0dBu average level, and keep the channel, group, and master faders reasonably high while preserving some headroom for the occasional “louder” demand. This minor compromise yields the best SNR while still giving mix flexibility. The same practice applies to the gain of all the other signal source channels, but with the obvious difference that their faders will be more attenuated.
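The arithmetic behind that recommendation is simple enough to sketch. The –50dBu mic level and +22dBu clip point are the figures used elsewhere in this article; the function names are mine:

```python
CLIP_DBU = 22.0   # typical op-amp console clip point (see Question #3)

def preamp_gain_db(source_dbu: float, target_dbu: float = 0.0) -> float:
    """Trim needed to bring a source's average level up to the target level."""
    return target_dbu - source_dbu

def headroom_db(average_dbu: float, clip_dbu: float = CLIP_DBU) -> float:
    """Room left above the average level before the console clips."""
    return clip_dbu - average_dbu

trim = preamp_gain_db(-50.0)   # 50 dB of gain for a -50 dBu dynamic mic
room = headroom_db(0.0)        # 22 dB left for the occasional "louder" demand
print(trim, room)
```

In other words, dialing the trim for a 0dBu average buys you the full preamp gain for noise purposes while still leaving 22 dB of peak headroom.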

Question #3: Why is +22dBu the common maximum level?
Answer:

This answer also comes from electronics history, but only a half-century back. The first mass-produced transistors had typical maximum voltage ratings of 30 to 40 volts. Many of these early transistors were targeted at industrial controls and “analog” computers for military and aerospace use. The most common analog computer building block was the “operational amplifier,” or “op-amp.” These op-amp sections were designed to be near-perfect mathematical gain stages with both positive and negative voltage swing capability. Because of the limitations of the transistors, plus the need for a bipolar (plus and minus) power supply, the standard of +/-15 volt supply levels was instituted, and it is still used today.

As transistors got grouped on one silicon die, integrated circuits (ICs) were born, with the first standard products being IC op-amps. As IC prices dropped in the late 1960s and early 1970s, more IC op-amps found their way into audio equipment, still requiring their +/-15 volt power supplies. Today’s pro-audio signal processing and mixing gear is largely composed of IC op-amps and a few application-specific ICs, plus just a few necessary discrete transistors. The legacy of supplying them with +/-15 volt rails persists, with op-amps capable of nearly +/-14 volt audio signal swings. That swing translates to about 10 volts RMS, or +22dBu, at which point the circuits clip the signal. Some math trickery may also lurk in maximum output specifications: you can gain another 6dB in level by quoting the output as balanced, since each balanced output leg swings in opposite polarity, doubling the voltage between them.
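The +/-14 volt swing to +22dBu conversion is just peak-to-RMS conversion plus the dBu reference. A quick check in Python (the constant and function names are mine):

```python
import math

DBU_REF_VOLTS = 0.775  # the 0 dBu reference, in volts RMS

def max_level_dbu(peak_swing_volts: float) -> float:
    """Largest unclipped sine level, in dBu, for a given peak voltage swing."""
    v_rms = peak_swing_volts / math.sqrt(2.0)
    return 20.0 * math.log10(v_rms / DBU_REF_VOLTS)

single_ended = max_level_dbu(14.0)   # about +22 dBu from a +/-14 V swing
balanced = single_ended + 6.0        # about +28 dBu quoted across both legs
print(f"{single_ended:.1f} dBu single-ended, {balanced:.1f} dBu balanced")
```

So a spec sheet boasting +28dBu is usually describing the same +/-14 volt op-amp, measured leg to leg.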

Given the above explanations, we should set our levels close to 0dBu and stay away from the clip levels around +22dBu; that leaves plenty of headroom for peak signals, and enough level to drive the processing chain (EQs, crossovers, and power amps) to full output. Most power amp manufacturers set their sensitivity values between zero and +9dBu for full unclipped speaker drive. Keeping the post-preamp levels at or below the power amp sensitivity value mostly assures a clipping-free production. Of course, that relies on you keeping the power amp input attenuator controls full up. You do not want to be “low-noise” all the way to the amp, and then throw away all that SNR at the last attenuator, do you?

Final Words

Our latest generation of audio production personnel deserves to be educated on how we use our gain structure procedures, and why these methods came about. Some may argue about exact levels and forming up mixes, but I am coming at this from an electrical engineering view and attempting to shine light on what the design engineers consider optimum use, rather than operator tactics “that seem to work for me.” We need to appreciate that live sound borrowed heavily from the telephone and radio broadcast pioneering work, and understand how electronics achievements have impacted our practices. The 20th century could be termed the “electronics” century, and it looks like the 21st will be the “photonics” century, with fiber optics promising nearly unlimited bandwidth for passing everything from baseband audio to radio-frequency and optical-frequency signals, analog or digital.

The above article was published by Front of House (FOH) Magazine.
[ www.FOHonline.com ]