REVERBERATION
Real rooms have walls that bounce sound back to us. Recording studios don’t, because the walls have been
treated to reduce reflections. Lavaliere mics, even in real rooms, tend to ignore reflections from the walls
because they’re so much closer to the speakers’ mouths. In both cases, the reverb-free sound can seem
artificial.
This may be a good thing, if a spokesperson is talking directly to camera or a narrator lives in the non-space
of a voice-over. But reverb is essential if you want dramatic dialog to seem like it’s taking place in a real
room. It can be the real reverb of the room, picked up by a boom mic along with the voices. Or it can be
artificially generated, to make lavs and studio ADR sessions feel like they’re taking place in the same room
we see on the screen.
Today’s digital reverberators use memory to delay part of the sound and send it back a fraction of a second
later. There may be dozens of delays, each representing a different reflecting surface in an imaginary room.
Since real surfaces reflect highs and lows differently—a thin windowpane may reflect highs and absorb
lows, while a bookcase does the opposite—equalizers are included for the delays. To adjust the sound of a
reverb, you have to control these delays and equalizers.
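That idea can be sketched in a few lines of Python. Each tap's delay and gain stands in for one reflecting surface; the per-tap equalizers a real unit would include are left out, and all the numbers are invented for illustration.

```python
def multitap_reverb(signal, sample_rate, taps):
    """Mix delayed, attenuated copies of a signal back into it.
    Each (delay_seconds, gain) pair stands in for one reflecting
    surface in an imaginary room."""
    longest = max(delay for delay, _ in taps)
    out = list(signal) + [0.0] * int(longest * sample_rate)
    for delay, gain in taps:
        offset = int(delay * sample_rate)
        for i, sample in enumerate(signal):
            out[i + offset] += sample * gain
    return out

# A one-sample click through three imaginary reflections:
sr = 1000
click = [1.0] + [0.0] * 9
wet = multitap_reverb(click, sr, [(0.002, 0.5), (0.005, 0.3), (0.008, 0.2)])
```

Feed it a single click and you get the click back plus three progressively quieter copies, one per tap.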
Real Reflections
The speed of sound might be impressive for an airplane, but in terms of human perception—or even of
video frames—it’s pretty slow stuff. Sound travels about 1,100 feet per second, depending on factors like air
temperature. That means it takes sound about 1/30th of a video frame to move one foot. If you’ve got a large
concert hall, it can take a couple of frames for a drumbeat to reach the back by the most direct path. But if
you’re sitting in that hall you also hear the drum over other paths, each taking a different length of time—
that’s reverb.
Figure 16.16 Reverb paths in a concert hall. It’s the longer (dashed) paths that give the room its rich sound.
Figure 16.16 shows how it works. Imagine a concert hall roughly 45 feet wide × 70 feet deep. You’re about 20
feet from the drum. There’s no way you can hear that drum sooner than 20 ms after it’s played (two-thirds of
a frame) because it takes that long for the first sound to reach you. I drew that direct 20-foot path as a heavy
black line between the drum and you.
But sound also bounces off the side walls and then to your ears. I’ve drawn two of those possible paths as
heavy dashed lines. The lower one is about 40 feet long, and the upper one is almost 60 feet. You hear a
second drumbeat at 40 ms, and a third at 60 ms. If these were the only paths, you’d hear a drum and two
distinct echoes—it could sound like a giant stadium. A well-designed concert hall has plenty of other
relatively direct paths, so these early echoes are mixed with others and you hear a richer sound instead.
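The arithmetic behind those numbers is simple enough to check. Assuming the rough figures from the text (1,100 feet per second, 30 frames per second), a few lines of Python convert path length to delay; the chapter rounds the results to 20, 40, and 60 ms.

```python
SPEED_OF_SOUND = 1100.0   # feet per second, roughly; varies with temperature
FRAME = 1.0 / 30.0        # one video frame in seconds

def path_delay_ms(feet):
    """Time for sound to travel a path of the given length, in milliseconds."""
    return feet / SPEED_OF_SOUND * 1000.0

for path in (20, 40, 60):   # the three paths in Figure 16.16
    ms = path_delay_ms(path)
    print(f"{path} ft arrives after {ms:.0f} ms ({ms / 1000.0 / FRAME:.2f} frames)")
```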
Sound fills the room. Reflections bounce off the back wall, off the side to the back and to the ceiling, from
one side to another, and so on. All of these paths (dashed thinner lines) eventually reach your ears, but you
hear so many of them that they merge into a barrage of late reflections. The longer the path, the softer the
sound gets; in a well-designed hall, it can take many seconds to die out.
Of course, there are other factors. A concert hall, a gymnasium, and a hotel lobby can all have the same size
and shape but sound completely different. That’s because multiple surfaces determine how far apart the
reflections are when they reach you. Also, the shape and the material of each surface affects the tone of the
reverb. Hard surfaces reflect more highs. Complex shapes focus the highs in different directions.
A good reverb device lets you simulate all these factors. You can adjust:
• The timing and number of early reflections (up to about 1/10th of a second)
• The density of late reflections and how quickly they build
• The relative bass and treble of late reflections
• How long it takes late reflections to die out
• The relative levels of initial sound and early and late reflections
By controlling those factors, you can simulate just about any acoustic space.
Convolution Reverbs
Artificial reverb is a compromise. The first studio units used springs or vibrating metal plates as a substitute
for large physical spaces. Early digital devices took their own shortcuts, limiting the bandwidth and only
approximating the later reverberations. It’s a tribute to the cleverness of their inventors that early units could
sound good while using less computational power than today’s pocket calculators. Computers got more
powerful over the years, and they could support better and better simulations. Modern reverb software does
a perfectly fine job, and can be appropriate for many parts of a mix. But it’s still a simulation. The
mathematical tricks fool us into thinking we’re hearing what acoustics actually accomplish in a real room.
Within the past few years, a totally different approach has become possible. Rather than simulate the
reflections mathematically, convolution or sampling reverbs use actual recordings of physical spaces reacting
to specific test sounds. They analyze how the spaces respond, and then duplicate that response on your
tracks. It takes a lot of math, but if you give them good enough samples and a powerful enough computer,
the better programs can help a well-recorded ADR line or Foley effect sound like it belongs to the original
production track.
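The principle can be sketched briefly. A convolution reverb treats the room's recorded response to an impulse as a list of weights; every output sample is the sum of past input samples multiplied by those weights. The toy impulse response below is invented for illustration; real ones run to several seconds of audio, which is why these plug-ins use FFT-based convolution instead of this direct loop.

```python
def convolve(dry, impulse_response):
    """Direct convolution: each input sample triggers a scaled copy
    of the room's entire recorded response."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += d * h
    return out

# Toy impulse response: direct sound plus two decaying reflections.
ir = [1.0, 0.0, 0.4, 0.0, 0.15]
wet = convolve([1.0, 0.5], ir)
```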
Evaluating Reverbs
Accurately recreating the echoes from every one of the thousands of different surfaces in a real room
requires too much processing to be practical. Fortunately, it isn’t necessary in a film soundtrack; we need
just enough reflections to think that a space is real. This is as much art as science and is limited by how much
processing power is available, which is why different reverb hardware or effects plug-ins can have totally
different sounds.
Practical Reverb
The first place to control reverb is at the original recording. A good boom operator will strive for just the
minimum natural reverb to convey a sense of space, knowing that we can add more reverb at the mix. ADR
and Foley recordings, and some tracks picked up by lavs, will have very little reverb at all.
To adjust a reverb plug-in for normal dialog situations, turn its mix control to about 75% reverb. That’s far
more than will sound natural, but is usually necessary to hear the effect for fine-tuning. If you have a choice
of algorithms, choose Small Room. Then start previewing. Adjust the reverb time for something fairly short
that doesn’t interfere with intelligibility. Set diffusion fairly high for most rooms, but a large totally empty
room, such as a new house with no furniture, will have lower diffusion. Set the equalizer or brightness toward
the middle, with more highs in a room with plaster walls and fewer in a room with lots of curtains or
wooden walls. Then lower the mix until it seems right for the room and matches a boom sound; usually this
will be around 20% reverb.
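The mix control itself is just a crossfade between the dry and processed signals. If your tool lacks one, the blend is trivial to do by hand; the 0.75 and 0.20 values below are the tune-then-back-off settings suggested above.

```python
def wet_dry_mix(dry, wet, amount):
    """Crossfade between the unprocessed (dry) and reverberant (wet)
    signals. amount=0.75 exaggerates the reverb for fine-tuning;
    around 0.20 is a typical final setting for matching a boom track."""
    return [d * (1.0 - amount) + w * amount for d, w in zip(dry, wet)]

tuning = wet_dry_mix([1.0, 0.0], [0.0, 1.0], 0.75)   # reverb-heavy preview
final = wet_dry_mix([1.0, 0.0], [0.0, 1.0], 0.20)    # natural-sounding blend
```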
Many people believe reverb doesn’t belong in voice-overs. Reverb implies a room around the voice, and a
voice-over exists in limbo where it can talk intimately to us. That’s an aesthetic judgment, of course (it’s one I
agree with), and you can certainly try other treatments for voice-over.
Be careful when applying reverb to individual clips in an NLE or DAW. With most software, it can last only
as long as the edited clip, which means the effect could cut out abruptly after the last syllable. Add some
silence at the end of the clip so late reflections have time to die out. This may require exporting a new audio
element that includes the silence, and then replacing the clip with the new version. If your program lets you
apply reverb to an entire track instead of just to clips, this won’t be a problem. Most audio software lets you
send a track to a reverb effect, and then bring the unprocessed track up on one fader and just the reverb on a
separate one. This is the most flexible way to work.
Be aware of the sound field when applying reverb. If you’re applying it to a mono clip, or patching it into a
mono track, the reverb will be one-dimensional and in the center of the screen. This is appropriate when
matching ADR to mono dialog, since the production mic picked up the real room’s reverb in mono. But it
might not be what you want if you’re using reverb to make an environment bigger or smooth out a piece of
music; those jobs normally call for stereo reverb. With most audio programs, you can either insert a mono
reverb on a mono track, or send the mono track to a stereo reverb.
Reverbs in an audio program often have separate controls for the early reflections. Set them longer and
farther apart to simulate very large spaces.
Beyond Reverb
Reverb isn’t just for simulating rooms. Add it to sound effects and synthesized elements to make them
richer. If you have to cut the end of an effect or piece of music to eliminate other sounds, add some echo to
let it end gracefully. Reverb can also help smooth over awkward edits within a piece of music.
One classic studio production trick is “preverb”: echoes that come before an element instead of after it. Add
some silence before a clip, and then reverse the whole thing so it plays backwards. Apply a nice, rich reverb.
Then reverse the reverberated clip. The echoes will build up magically into the desired sound, which gives
voices an eerie quality and can be interesting on percussive effects.
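The whole trick is three list operations, assuming you already have some reverb in hand. The toy one_echo function below stands in for a rich reverb; the padding leaves room for the tail that will end up in front.

```python
def one_echo(signal, offset=2, gain=0.5):
    """Stand-in for a real reverb: adds one delayed, quieter copy."""
    out = list(signal) + [0.0] * offset
    for i, sample in enumerate(signal):
        out[i + offset] += sample * gain
    return out

def preverb(clip, reverb, pad):
    """Reverse, reverberate, reverse again: the tail lands *before*
    the sound. pad is the silence (in samples) added so the reversed
    tail has room to die out."""
    padded = [0.0] * pad + list(clip)
    return reverb(padded[::-1])[::-1]

swoosh = preverb([1.0], one_echo, pad=4)
```

On a single click, the echo now arrives two samples before the click instead of two samples after it.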
Always think about why the reverb is there. Early Hollywood talkies added reverb to exteriors to make
them more “outdoorsy.” But if a sound doesn’t have something hard to bounce off, there can’t be an echo. In
this case, filmmakers used speakers and microphones in a sealed room to generate the reverb: the exact
opposite of what you’d find outdoors. The tracks felt artificial, and Hollywood quickly abandoned the
practice.
Similarly, don’t expect a reverb to necessarily make something sound bigger. In the real world, we associate
reverb with distance, not size. Sure, we usually stand farther away from very big objects; a large machine in
the background ambience of a room would probably have reverb. But adding reverb doesn’t make a voice-
over sound bigger. It just makes it sound farther away, destroying intimacy.
A single longer delay, its delay time slowly swept back and forth, can be combined with the original signal.
As the delay changes, different frequencies are canceled or reinforced. This flange effect imparts a
moving whooshing character to a sound, and is often heard in pop music production. It can add a sense of
motion to steady sound effects, increase the speed of automobile passbys, and is a mainstay of science fiction
sound design. Chorus and flange don’t need as much processing as a true reverb, so even low-cost units and
simple software plug-ins can be useful.
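A flanger can be sketched as a delay line whose read position is swept by a slow sine wave (the LFO). The sweep rate, depth, and maximum delay below are plausible starting values, not settings from any particular unit.

```python
import math

def flange(signal, sample_rate, max_delay_ms=3.0, rate_hz=0.5, depth=0.7):
    """Mix the signal with a copy whose delay time sweeps slowly
    between zero and max_delay_ms, canceling and reinforcing
    different frequencies as it moves."""
    max_d = max_delay_ms / 1000.0 * sample_rate   # sweep range in samples
    out = []
    for n, sample in enumerate(signal):
        lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * n / sample_rate))
        t = n - max_d * lfo                       # fractional read position
        if t < 0:
            delayed = 0.0
        else:
            i = int(t)
            frac = t - i
            nxt = signal[i + 1] if i + 1 < len(signal) else 0.0
            delayed = signal[i] * (1.0 - frac) + nxt * frac  # interpolate
        out.append(sample + depth * delayed)
    return out

swept = flange([1.0] * 100, 8000)
```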
Pitch shifters use the same memory function as a delay, but read the audio data out at a different rate than it
was written. This raises or lowers the pitch by changing the timing of each sound wave, just like changing
the speed on a tape playback. Unlike varispeed tape, however, the long-term timing isn’t affected. Instead,
many times a second, the software repeats or eliminates waves to compensate for the speed variation.
Depending on how well the software is written, it may do a very good job of figuring out which waves can
be manipulated without our noticing.
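Here's the idea in miniature, with all the hard parts left out. resample reads the data out at a new rate, which changes pitch and length together; pitch_shift then repeats or drops short grains to restore the original duration. A real pitch shifter picks its splice points at waveform boundaries and crossfades them; this sketch just cuts, so it would click audibly on real audio.

```python
def resample(signal, ratio):
    """Read samples out at a different rate. ratio > 1 raises pitch
    and, by itself, shortens the clip (like speeding up tape)."""
    out = []
    t = 0.0
    while t < len(signal) - 1:
        i = int(t)
        frac = t - i
        out.append(signal[i] * (1.0 - frac) + signal[i + 1] * frac)
        t += ratio
    return out

def pitch_shift(signal, ratio, grain=64):
    """Resample, then repeat (or drop) short grains many times a
    second so the overall duration comes back out roughly the same."""
    sped = resample(signal, ratio)
    step = grain * len(sped) / len(signal)   # grains overlap if ratio > 1
    out, pos = [], 0.0
    while len(out) < len(signal):
        i = int(pos)
        if i >= len(sped):
            break
        out.extend(sped[i:i + grain])
        pos += step
    return out[:len(signal)]
```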
Comb Filtering
Dimmer noise and other power-line junk can be very hard to eliminate from a track because it has so many
harmonics. But one delay technique, comb filtering, is surprisingly effective. You need to know the precise
frequency of the noise’s fundamental. Divide one by that frequency to get the period of the wave (if the noise
is based on 60 Hz, the period is 0.01666 seconds, or 16.666 ms). Then combine the signal with a delayed version of
itself, exactly half that period later (for 60 Hz, that would be 8.333 ms). The delay will form nulls in the
signal, very deep notches that eliminate many of the harmonics. The nulls are so deep and regularly spaced
that, if you drew a frequency response graph, they’d look like the teeth of a comb.
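As a sketch, assuming a 48 kHz sample rate and 60 Hz hum: half the period is exactly 400 samples, so the delayed copy arrives inverted and the fundamental, along with its odd harmonics, cancels. Even harmonics are reinforced, which is part of why the result can sound metallic.

```python
import math

def comb_filter(signal, sample_rate, hum_hz=60.0):
    """Add the signal to a copy delayed by half the hum's period.
    The hum's fundamental and odd harmonics arrive inverted in the
    delayed copy and cancel; the 0.5 keeps the level constant."""
    delay = int(round(sample_rate / hum_hz / 2.0))   # half-period in samples
    out = []
    for n, sample in enumerate(signal):
        delayed = signal[n - delay] if n >= delay else 0.0
        out.append(0.5 * (sample + delayed))
    return out

sr = 48000
hum = [math.sin(2.0 * math.pi * 60.0 * n / sr) for n in range(sr)]
cleaned = comb_filter(hum, sr)
```

Once the delay line has filled (after the first 400 samples), the 60 Hz tone vanishes almost completely.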
The numbers in the previous paragraph are actually repeating decimals (such as 0.0166666666 . . .); I cut
them off to save space on the page. Very few audio programs outside the lab let you set a delay this
precisely. But you can simulate the delay in an audio program that supports multiple tracks, by putting a
copy of the original clip on a second track and sliding it the right amount. For the most accurate setting,
locate the actual noise waveform during a pause in dialog, measure its length, and slide the copy to the
halfway point. Oftentimes, there’ll be an inverted noise spike halfway across the wave. This visual
adjustment can be better than trying to calculate the delay, since small sample-rate differences between
camera and NLE can change the noise frequency from exactly 60 Hz.
The delay necessary for comb filtering can also make a track sound metallic. Reduce this effect by lowering
the level of the delayed signal by around 3 dB.
Vocal Elimination
In pop music production, the vocalist and most instruments are usually recorded with individual mics,
rather than in stereo, and then distributed from left to right electronically. The main vocal is put in the center
of the stereo image, meaning its sound is identical on both the left and right channels.
If you invert the polarity of just one channel of the stereo pair, the vocal energy of this centered solo will
produce a negative voltage on one side while producing a positive one on the other. If you then combine the
channels equally into mono, positive and negative will cancel each other and eliminate the vocalist. Radio
stations often use this technique when creating song parodies. It can also be used to lower the melody line in
pre-produced music so dialog can show through.
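The trick reduces to one subtraction per sample: inverting one channel's polarity and summing is the same as subtracting one channel from the other, which is what this sketch does. The signals here are invented to show a centered "vocal" vanishing while a hard-left "guitar" survives.

```python
def eliminate_center(left, right):
    """Subtract one channel from the other. Anything identical in
    both channels (the centered vocal) cancels; panned material
    survives in the mono result at reduced level."""
    return [(l - r) * 0.5 for l, r in zip(left, right)]

vocal = [0.5, -0.5, 0.5]    # panned dead center: identical in L and R
guitar = [0.3, 0.3, 0.3]    # panned hard left
left = [v + g for v, g in zip(vocal, guitar)]
right = list(vocal)
karaoke = eliminate_center(left, right)
```

The result contains only the guitar at half its original level; the vocal is gone.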
This technique has limitations. It’s useless on acoustic recordings, such as live performances, or if studio
tricks like double-tracking have been used. It doesn’t eliminate a soloist’s reverb, which is usually in stereo,
so the result can be a ghostlike echo of a voice that isn’t there. And it also eliminates anything else that was
centered in a studio recording, which often includes critical parts of the drum kit and the bass line. Still, it’s
an amazingly effective trick when used with the right material.