Sam George - Become A Competent Music Producer in 365 Days-Focal Press (2023)
“I love how this book brings effortless simplicity to the art of music production. I fully recommend it.”
Damian Keyes, Educator, Founder of BIMM and DK-MBA
Sam George
Designed cover image: © Ninoon via Getty Images
First published 2023
by Routledge
4 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
605 Third Avenue, New York, NY 10158
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2023 Sam George
The right of Sam George to be identified as author
of this work has been asserted in accordance with
sections 77 and 78 of the Copyright, Designs and
Patents Act 1988.
All rights reserved. No part of this book may be
reprinted or reproduced or utilised in any form or
by any electronic, mechanical, or other means, now
known or hereafter invented, including photocopying
and recording, or in any information storage or
retrieval system, without permission in writing from
the publishers.
Trademark notice: Product or corporate names may
be trademarks or registered trademarks, and are
used only for identification and explanation without
intent to infringe.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from
the British Library
Library of Congress Cataloging-in-Publication Data
Names: George, Sam (Music producer), author.
Title: Become a competent music producer in 365 days / Sam George.
Description: New York : Routledge, 2023. | Includes bibliographical
references and index. | Identifiers: LCCN 2022057003 (print) |
LCCN 2022057004 (ebook) | ISBN 9781032446110 (paperback) |
ISBN 9781032446141 (hardback) | ISBN 9781003373049 (ebook)
Subjects: LCSH: Sound recordings—Production and direction. |
Popular music—Production and direction.
Classification: LCC ML3790 .G46 2023 (print) | LCC ML3790 (ebook) |
DDC 781.49—dc23/eng/20221202
LC record available at https://lccn.loc.gov/2022057003
LC ebook record available at https://lccn.loc.gov/2022057004
Typeset in Goudy
by Apex CoVantage, LLC
Access the Support Material: www.theproducertutor.com
Contents
Preface
Acknowledgements
1 Balancing a mix
Day 1 – Welcome
Day 2 – Balancing a mix
Day 3 – Why is balancing a mix so overlooked?
Day 4 – SPL vs. perceived loudness
Day 5 – 24-bit
Day 6 – Reference tracks
Day 7 – What is gain staging?
Day 8 – Normalising region gain
Day 9 – Pre-fader metering
Day 10 – Phase cancellation
Day 11 – Bottom-up mixing
Day 12 – Top-down mixing
Day 13 – The clockface technique
Day 14 – Mono compatibility
Day 15 – The VU meter trick
Day 16 – Balancing vocals
Day 17 – The mixing hierarchy
Day 18 – A singular focal point
Day 19 – The rule of four
Day 20 – The pink noise technique
Day 21 – Loudest first
Day 22 – Set a timer
Day 23 – Leave your busses
Day 24 – Watch your meters
Day 25 – Split them up
Day 26 – FFT metering
2 Panning a mix
3 EQ
Day 1 – Applying EQ
Day 2 – Fundamentals and overtones
4 Compression
5 Reverb
7 Automation
8 Vocals
9 Synthesis
10 Mastering
Index
Preface
‘Music producer’ is a term that has evolved over time. In the traditional sense, music producers assist artists to make records. They help bring artists’ visions to life. This can involve technically and creatively guiding them, and may include anything from coaching the artists through their performances to organising meetings, scheduling, and budgeting. However, it tends to mean something a little different when referred to in the modern day. When people call themselves music producers nowadays, they simply mean people who create music. Literally, they produce music. This may involve artist liaison, organising sessions, and all those other things, but often the producers may themselves be the artists. This is the context within which I operate throughout this book. I’m referring to the music producer, a self-sufficient entity that creates music for themself or for others.
I have spent a lot of time watching online educational content over the years. As a music producer, I am entirely self-taught. My journey began when I started writing songs at 13. I started in the traditional sense with pen and paper, but by the time I was 16, I was making sketchy demos using Cubase. By the time I got to university, my demos were acceptable at best. My degree was based primarily on songwriting and contained little production time. But this is where my interest in the subject really sparked. I spent hours upon hours making demos and sending them back and forth between my band members. Over time they improved but were nowhere near professional in quality. Bear in mind that I started university in 2005. This is roughly the time Facebook launched. At that time, you needed to be affiliated with a university (or college) to join. YouTube wasn’t on the scene at all. As online content exploded over the following ten years, I began consuming as much of it as possible.
Fast forward to now, and I’ve watched just about every so-called educational content creator there is. Out of them all, perhaps only a handful know what they’re talking about. The vast majority are either factually incorrect, only partially correct, or take so long to give you the information you need that you’ll have moved on to something else by the time they get there.
• www.theproducertutor.com
• www.youtube.com/@theproducertutor.com
• www.tiktok.com/@theproducertutor
• www.instagram.com/theproducertutor
Acknowledgements
My thanks must begin with Mum and Dad. Their unwavering support truly knows no limit. They have guided and supported me practically and financially through every challenge, of which I have posed them many. With lesser parents this book wouldn’t exist. Perhaps I wouldn’t either. Mum and Dad, words can’t describe my gratitude for everything.
My wife, Estrella. You have shown complete faith in me, even when my
own wavered. You’ve supported me in my risk-taking and have picked me up
every time I’ve fallen. I owe you everything.
My secondary school music teacher, David Leveridge. You tried your best
to understand me and support my creativity, even when it went against the
grain. I was a challenging student, sometimes for the right reasons, sometimes
not. But you backed me. Good education begins and ends with the teacher,
and you were one of the best.
My brothers in crime, Tim Talbot, Jay Armstrong, John Atkins, and Harry Armstrong, the Armstrong boys. The days, months, and years on the road with you in sticky-floored clubs and rural recording studios moulded my popular music education. I learned how to write good songs with you, learned how to record music with you, learned how to tour with you. You’ll always be my first true love.
The two best role models any teacher could ask for – Liz Penney and
Alice Gamm, The BRIT Queens. Liz, you gave me my shot, took a punt on
me when you already had a safe option. You let me get my foot in the door at
BRIT. Alice, you showed faith in me, trusted me, gave me responsibility, and
let me run with it. The two of you are truly wonderful educators. I learned so
much from you both in such a short time. Thank you.
Dec Cunningham and Mat Martin. It’s hard to quantify how much knowledge I’ve robbed from you both. Daily I’d be in one of your ears for something or another. Never has knowledge been given up so willingly. So much of what I know now I gleaned from you two. You’re both legends.
If the Armstrong boys were my first true love, Nathan Lilley, you were my second. A wonderful friend, a sickeningly talented musician, the most passionate educator with a fierce commitment to the students, and a fountain of knowledge in all things music. Thank you for your friendship, and for answering my WhatsApps.
And finally, my oldest friend, Luke Fox. We message daily, almost always about music. Your counsel and interest in exploring the subject from every angle continually inspires me to review and reassess. Thank you for challenging me.
Unit 1
Balancing a mix
Day 1 – Welcome
Are you tired of trawling through YouTube looking for thorough, detailed information on a music-production topic? Are you even sure that the information you’re finding is of good quality?
I’m Sam, the Producer Tutor – your musical PT. I’m here to bring you
my 365-day course that’ll take you from enthusiastic amateur to competent
producer.
When I was learning music-production, the available content was sparse and generally not that engaging. Nowadays, the issue is compounded by every other home producer filling the internet with tutorials on creating different sounds and effects. But nobody’s talking about the fundamentals. The basics. The foundations of music-production.

That’s what I promise to give you. A structured, thorough approach to learning music-production properly. I’ll teach you from the ground up, covering all the fundamental topics you’ll need to digest to solidify your production skills.
What makes me qualified to teach you? I spent six years teaching songwriting and music-production at The BRIT School in South London. Many of my students have achieved success nationally and internationally. Some have even gone on to sell platinum records.
So, strap yourself in, and have your notepad at the ready because, within
365 days, I’ll have taught you just about everything you need to know to
produce music competently.
DOI: 10.4324/9781003373049-1
You may wonder how there’s possibly enough within this topic to warrant spending a whole month on it. But I promise you that you will feel so much better equipped and informed when you get to the end of this unit than you do now.
For me, there are four fundamental skills that are the building blocks of producing music. These are balancing a mix, panning a mix, applying EQ, and applying compression. If you can master these four elements, you’re 90% of the way towards a great-sounding track. Everyone talks about EQ and compression, sometimes panning, but mix balancing gets very little airtime. And it’s the most significant part.
TASK – Find a resource online where you can download free multitrack stems. You’ll be given a few different options if you Google ‘Free Multitrack Stems’. You’ll want various options to use as practice in the coming units.
TASK – Make a bullet point list of how you would typically begin balancing a mix. I want you to compare with my guidance at the end of this unit to see where you can improve your current flow.
Day 5 – 24-bit
But before you even reach for a fader, there are two vital things you need to do first. The first thing is to make sure you’re working in 24-bit. When digital audio began to take over from analogue, loads of old practices got carried over. One of the biggest was the idea of recording as loudly and cleanly as possible. Why? To keep the signal above the noise floor. What is the noise floor? Well, every electronic device produces noise. These devices include your mikes, cables, and audio interfaces. The noise floor refers to the amount of noise your equipment makes. The noise floor was undoubtedly an issue when working on tape. Using a lot of tracks in 16-bit audio can begin to gather a noticeable noise floor. But 24-bit solves that. The 24-bit noise floor is so low that you can give yourself loads of room (15 – 20dB) between your peaks and 0dBFS without worrying about noise or loss of resolution.
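As a rough sanity check on those numbers, the theoretical dynamic range of linear PCM works out to about 6.02 dB per bit plus a small constant, so you can see exactly where the quantisation noise floor sits for any bit depth. This little Python sketch is my own illustration, not something from the book:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal linear PCM quantiser.

    The standard approximation is 6.02 * bits + 1.76 dB
    (20 * log10(2) per bit, plus a constant for a full-scale sine).
    """
    return 20 * math.log10(2) * bits + 1.76

# 16-bit leaves the noise floor roughly 98 dB below full scale;
# 24-bit pushes it down to roughly 146 dB, far below anything audible,
# which is why 15-20 dB of headroom costs you nothing in 24-bit.
print(f"16-bit: {dynamic_range_db(16):.1f} dB")
print(f"24-bit: {dynamic_range_db(24):.1f} dB")
```

The gap between the two figures (about 48 dB) is the practical reason the noise-floor worry disappears at 24-bit.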
If you don’t have Reference, don’t fret. You can achieve the same result, with just a little more work. In this case, route everything in your mix to a new buss. You can use this as your mix buss instead of your stereo out. Drag your reference tracks into your project on new tracks but route these to your stereo out, not your new mix buss. Then, place a metering plugin on your stereo out. Use the metering plugin to match the levels of your project and your reference tracks. Focus on matching the loudness of the tracks using your eyes and your ears. Use the loudest sections of the songs for this. The peak levels will be quite different as your reference tracks will already have lots of compression and limiting on them.
Now is an excellent time to identify a helpful rule I like to follow. Good plugins fall into one of two categories. Either they make an aspect of producing markedly quicker or easier by reducing your workload (Reference falls into this category), or they allow you to generate sounds that you can’t recreate in any other way. Most plugins claim to fall into this second category: plugin manufacturers love to convince you that their latest product is the only one that will allow you to create a specific sound. This is very rarely the case. Now, I’m not against third-party plugins at all. I own a lot of them. But before spending any of your hard-earned cash, do your research. Read some blogs. Watch some YouTube videos. Ensure what you’re looking to buy is a good spend and that you’re not just being won over by an effective marketing campaign.
Having said all that, you’re now ready to start reaching for faders, which
is precisely what you’ll be doing tomorrow.
Gain staging is the process of ensuring that the level on each of your tracks is healthy, not too hot, and not too quiet. There are several benefits to doing this.

You make your working signal on each channel more consistent by gain staging things. Think about it. The level at which you record each part – your vocals, guitars, drums, etc. – will likely all have been trimmed in at slightly different levels. Especially if you’ve been working with virtual instruments, by default these will probably have been quite hot in the instrument. Or, if you’ve tracked parts over multiple recording sessions or days, your trim levels will likely not be precisely the same. By gain staging, you can level the playing field across all your tracks. This process has the knock-on effect of ensuring that you don’t need to start making significant moves with your faders to compensate for disparities between channels.
However, the most significant benefit comes when sending your signal through plugins. If your level is too low, you won’t take advantage of the digital 24-bit system’s full resolution. Your DAW will fill the unused headroom with nothingness, and when your digital signal is returned to analogue as it comes out of your speakers, it will sound unclear and limp. That sucks. But in fact, it’s the opposite end of the spectrum that sounds worse. When you work too hot, it barks and distorts in nasty ways, which tend to attack you in short bursts. You avoid both the limp, lifeless mix and the crunchy, distorted one by gain staging.
So, what’s the correct level to gain stage to? Generally speaking, the optimum level at which to send a signal through a plugin is -18dB. Manufacturers design and build plugins to work best with this signal level coming through them. But there are multiple ways of measuring your signal, so what’s the best way? Well, your channel meters are what’s known as full-scale peak meters. These work super fast and display the highest peak level of the signal on each track. But actually, these meters work much faster than the human ear does, so in terms of gain staging, they aren’t the most helpful representation.
What’s much more helpful is a VU meter. These have been around since
the ’30s and work much slower, much more like how the human ear works.
Their response time is typically around 300ms. You can calibrate your VU
meter to target your chosen dB level. So, if you calibrate your VU meter to
target -18dBFS, this will mean that when you’re hitting 0dBVU on the meter,
you have 18dB of headroom. This method is the best way to gain stage to
optimise your signal for processing. This isn’t an exact science, so you’ll need
to proceed with some caution.
Now, you still need to keep your peak meter in mind. You don’t want to
exceed -10 to -6dBFS at the most, so you must watch that too. But the VU
meter method is the best way to gain stage most effectively, and importantly,
humanly.
Bear in mind also that sounds with a wide dynamic range will be trickier to gain stage using the VU meter than more dynamically consistent sounds. Use your judgment, and avoid cranking the gain on a hi-hat just to get it metering where you want it on your VU meter.
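To make the calibration arithmetic concrete, here is a small Python sketch of my own (the function names are invented for illustration) showing how an averaged RMS reading maps to dBVU once you calibrate 0dBVU to -18dBFS. A real VU meter also applies roughly 300ms ballistics; this simply averages whatever window of samples you hand it:

```python
import math

CAL_DBFS = -18.0  # calibrate 0 dBVU to -18 dBFS, as suggested above

def rms_dbfs(samples):
    """RMS level of float samples (full scale = 1.0), in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def dbvu(samples, calibration=CAL_DBFS):
    """VU-style reading: an averaged (RMS-like) level, offset by the
    calibration point. A hardware VU meter adds ~300 ms ballistics;
    this sketch just averages the window it is given."""
    return rms_dbfs(samples) - calibration

# A steady full-scale sine averages about -3 dBFS RMS,
# which reads about +15 dBVU with this calibration:
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(f"{dbvu(sine):.1f} dBVU")
```

The useful takeaway is that the meter reading is just the measured dBFS level minus the calibration point, so hitting 0dBVU here always means 18dB of headroom to 0dBFS.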
TASK – Find and practise normalising region gain (or your DAW’s
equivalent).
TASK – Make sure you know where and how to toggle between post-
and pre-fader metering in your DAW.
If you have a snare bottom or other snare sounds, bring these up to taste around your main snare sound.
Then bring your kick tracks up so that they’re almost as loud as your snare.
The low end should feel full and robust without interfering with the snare
drum’s bottom end. Next, you’ll bring in your toms. If used sparingly, they
can be almost as loud as your snare. But if they’re used a lot, they should sit a
little lower in the mix.
Now bring in any cymbals, overheads, and room mikes you may have as
required. How you do this will vary depending on the genre you are produc-
ing, so pay attention to your reference tracks to identify a reasonable level.
They should support your spot mikes rather than overpowering them, so pay
careful attention to each element of your kit.
It’s also worth mentioning that the instrument you bring up first is genre-dependent. For most genres that use ‘real’ instruments, starting with the snare is the right choice. The snare is not as important as the kick in many electronic genres. Use your judgment to decide which element to build your drum balance around.
I’ll say this many times throughout this course but use your reference tracks. How you mix your drums and many other aspects of your mix will depend on the style of music you’re trying to make. To get a genre-appropriate sound, comparing what you do side by side with professionally produced and released tracks is essential. This isn’t cheating, and it doesn’t make you less competent. All the pros do it! It keeps your ears honest and trustworthy and puts your mix decisions in context. Think about it like this: If I gave you a blank canvas and beautiful paints and brushes, you’d have everything you needed to make a beautiful painting. But if I blindfolded you, you couldn’t see what you were doing! Mixing without reference tracks is the same. Using reference tracks is your way of seeing that the mix decisions you’ve made are good ones, rather than doing it blindly and hoping for the best.
Figure 1.1 An aerial view of a drum kit with clockface overlaid to illustrate
the clockface technique.
Source: Image by Sam George.
But how much should you pan things? I like to use the clockface method. In this method, you think of your kick and snare as 12 o’clock, which is dead centre in your mix. In Figure 1.1 you can see how I’ve identified where I’ll place everything in this drum kit. I’ve assumed that 8 o’clock and 4 o’clock are my widest positions left and right. These are effectively 100% left and right. Then, everything else in the mix is panned according to where it is on the clock face. For example, the rack tom is at 11.30 on the clock face, which is roughly 12.5% left. The second floor tom is around 2.45 on the clock face, about 65% to the right. You can work out these percentages as a pan position relative to your DAW’s system. Logic Pro goes from 0 – 64 left and right. Other DAWs go from 0 – 100.
Every drum kit will be different as all drummers set up their kit differently. I like this approach so much because it means that your instruments will appear in the same position as they do in your overheads. Consider it this way. Our second floor tom in this example will appear in this 65%-to-right position in our overheads. If we don’t pan the spot mike in the same place – say we go only 30% to the right or too far at 85% to the right – it will sound like two different toms in our mix rather than one.
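If you’d like to automate the clockface arithmetic, here’s a sketch of my own (the function names are invented) that turns a clock position into a pan percentage and then into a DAW’s pan units. Note that the strict formula puts the 2.45 floor tom position at exactly 68.75%, which the text above rounds to about 65%:

```python
def clock_to_pan(hours: float) -> float:
    """Clockface position -> pan percentage.

    12 o'clock is centre (0). 8 o'clock is 100% left and 4 o'clock is
    100% right, so each hour away from 12 covers 25% of the field.
    Pass hours as a decimal: 11.5 for 11:30, 2.75 for 2:45.
    Negative result = left, positive = right.
    """
    if hours >= 8:              # 8:00-12:00 is the left side
        return -(12 - hours) * 25
    return hours * 25           # 12:00-4:00 is the right side

def to_daw_units(pan_percent: float, full_scale: int = 64) -> int:
    """Scale a pan percentage to a DAW's range, e.g. Logic's +/-64."""
    return round(pan_percent / 100 * full_scale)

print(clock_to_pan(11.5))                # rack tom: -12.5 (12.5% left)
print(clock_to_pan(2.75))                # second floor tom: 68.75 (right)
print(to_daw_units(clock_to_pan(2.75)))  # 44 on Logic's 0-64 scale
```

Swap `full_scale=64` for `100` if your DAW pans from 0 – 100.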
TASK – Insert a correlation meter on your mix buss and check your
drums for mono compatibility.
Make sure that you check your reference mixes at this point. The amount
of bass that can be heard in a mix changes a lot from genre to genre. Don’t be
afraid to adjust other levels in your mix as you add more parts. Every time you
bring a new element in, it’s likely to interfere with something else slightly.
Nothing is set in stone. You can make adjustments at any point.
I’ve got a great trick to help you get the balance between your kick and
bass just right, though. Solo out your kick drum. Then adjust the trim on your
VU meter until single hits are metering at -3dBVU. Then add your bass in.
Trim your bass on its channel so that when the kick and bass strike together
they hit 0dBVU. Nine times out of ten this will give you an outstanding balance between your kick and bass. Just don’t forget to return the trim on the
VU meter afterwards.
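The arithmetic behind this trick is worth seeing once. Assuming the kick and bass sum roughly as uncorrelated signals (their powers add, which real kick-and-bass hits only approximate), this Python sketch of mine shows why a kick at -3dBVU plus a combined reading of 0dBVU implies the bass lands at about -3dBVU too, i.e. equal weight with the kick:

```python
import math

def combined_dbvu(levels_dbvu):
    """Sum of uncorrelated signals: add their powers, then back to dB."""
    total_power = sum(10 ** (lv / 10) for lv in levels_dbvu)
    return 10 * math.log10(total_power)

def bass_level_for_target(kick_dbvu=-3.0, target_dbvu=0.0):
    """Level the bass must sit at so kick + bass meter at the target."""
    residual = 10 ** (target_dbvu / 10) - 10 ** (kick_dbvu / 10)
    return 10 * math.log10(residual)

bass = bass_level_for_target()
print(f"bass alone: {bass:.1f} dBVU")   # ~-3.0: kick and bass end up
                                        # at roughly equal level
print(f"together: {combined_dbvu([-3.0, bass]):.6f} dBVU from the 0dBVU target")
```

So the trick quietly encodes a kick/bass balance of near-equal averaged level, which is why it works so often.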
TASK – Study the level of the lead vocal in a range of genres. You will find that it is more prominent in some styles than in others. How will this inform where you place your vocals?
your ear, and training yourself to make positive, informed mix decisions based upon a clearly defined goal.
The idea is to play pink noise through your monitors at a reference loudness (let’s stick with 0dBVU). You then go through and solo each track one at a time and adjust your trim until it’s just audible above the pink noise. In theory, once you’re finished, you’ll have achieved roughly the same thing: All tracks should feel nicely balanced, with no one sound jumping out too much above the rest. This should also provide you with a solid starting point and make it easier to preserve your headroom.

Every DAW will have a pink noise generator built in somewhere, so seek it out. It may be within the utility plugins, or at the very least, some synths have a pink noise generator within them.4
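If you’re curious what’s under the hood of a pink noise generator, here’s a minimal Python sketch of the Voss-McCartney algorithm, one common way to approximate pink noise’s equal-energy-per-octave character. It’s purely illustrative; in practice, use your DAW’s generator:

```python
import random

def pink_noise(n_samples, n_rows=16, seed=0):
    """Voss-McCartney pink noise: sum several white-noise rows, each
    updated half as often as the last, which approximates 1/f power,
    i.e. equal energy per octave (why pink noise suits mix balancing)."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1, 1) for _ in range(n_rows)]
    out = []
    for i in range(n_samples):
        for bit in range(n_rows):
            if i % (1 << bit) == 0:     # row `bit` updates every 2^bit samples
                rows[bit] = rng.uniform(-1, 1)
        out.append(sum(rows) / n_rows)  # normalise so output stays in [-1, 1]
    return out

noise = pink_noise(48000)               # one second at 48 kHz
print(min(noise) >= -1 and max(noise) <= 1)  # True
```

White noise has equal energy per hertz, so it sounds much brighter; the halving update rates here are what tilt the spectrum down towards pink.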
TASK – Set up a new static mix using the pink noise technique. How
does this compare to a static mix made in the traditional way?
TASK – Go back to a previous static mix. Pull all the faders down and
rebalance it, focusing on the loudest section of the song. How does this
static mix compare with the previous one you had?
TASK – Get a timer ready! Set it to ten minutes, and then see if you can make your static mix in that time. Practise working efficiently until you can reduce this time to five minutes.
TASK – Review your busses in a mix. Are they at unity? If not, adjust
things so they are!
We’ve already mentioned our peak level, but let’s remind ourselves. On individual channels, you don’t want your peak level going above, say, -6dB at the very most. For me, this is too hot. I set -10dB as my max because I don’t want to exceed -6dB on my mix buss.
Peak meters are one thing, but something else entirely is LUFS. LUFS stands for Loudness Units relative to Full Scale and is what matters when it comes to competing with professionally produced records. It can be measured in three ways: Momentary, short term, and integrated. Momentary is how loud your track is at that specific moment in time, short term is over a period of a few seconds, and integrated is over the length of everything you play through it. At this stage, keep an eye on your short term. I would recommend aiming not to exceed -18dB LUFS short term at this point.
Most DAWs will have a loudness meter built in, and if they don’t, there’s
a great free one you can grab from YouLean. You can adjust your target level
within it on some, which is very handy!
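To illustrate what ‘short term’ means in practice, here’s a rough Python sketch of my own of a sliding-window level reading. It uses plain RMS in dBFS; a true LUFS meter (ITU-R BS.1770) also applies K-weighting filters and gating, so a real meter will read somewhat differently:

```python
import math

def short_term_level_dbfs(samples, sample_rate=48000, window_s=3.0):
    """Rough stand-in for short-term loudness: the loudest RMS reading
    over a sliding ~3-second window, in dBFS. True LUFS additionally
    K-weights the signal, so treat this as an illustration of the
    windowing idea only, not a compliant meter."""
    win = min(int(sample_rate * window_s), len(samples))
    step = max(1, win // 4)            # hop a quarter-window at a time
    worst = float("-inf")
    for start in range(0, len(samples) - win + 1, step):
        chunk = samples[start:start + win]
        rms = math.sqrt(sum(s * s for s in chunk) / win)
        if rms > 0:
            worst = max(worst, 20 * math.log10(rms))
    return worst

# A sine with -18 dBFS RMS should read about -18 here:
amp = 10 ** (-18 / 20) * math.sqrt(2)  # peak amplitude for -18 dBFS RMS
sine = [amp * math.sin(2 * math.pi * 100 * n / 8000) for n in range(24000)]
print(f"{short_term_level_dbfs(sine, sample_rate=8000):.1f} dBFS")
```

The point of the sliding window is that short-term readings ignore brief transients but still track section-to-section loudness, which is exactly why they’re the useful number to watch while balancing.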
TASK – Set up a loudness meter on your mix buss. You can use your
DAW’s stock meter if it has one or get a third-party option. Moni-
tor your short-term loudness during the loudest section of your track.
What numbers are you metering?
TASK – Revisit a mix. Are there places where you could split channels up? The likely candidates are vocals and guitars, but you may find opportunities on any instrument. It will depend on the arrangement.
TASK – Set up an FFT meter on your mix buss. You can use your DAW’s
stock meter if it has one or get a third-party option.
TASK – Run through your metering checklist. If your DAW has the
option to save a channel strip as a preset, then save your mix buss
setting. This will allow you to recall all your required meters quickly
and easily.
That’s a lot of ground to have covered. I promised you I’d be thorough, and
I hope I haven’t disappointed you. See you in a couple of days when we’ll be
looking at panning.
Checklist
• Make sure you’re working from a clean mix
• Check your loudness at your mix position
• Check your DAW is set up to use 24-bit
• Set up your reference tracks
• Gain stage with a VU meter
• Toggle pre-fader metering
• Bottom-up or top-down drums
• Pan your drums
• Check mono compatibility
• Bring in other mix elements according to your mix hierarchy
• Do you know the rule of four?
• Have you tried the pink noise technique?
• Set a timer
• Split tracks up where necessary
• Set up your meters (correlation, loudness, FFT)
Further reading
1 Triggs, R. (2021). What you think you know about bit-depth is probably wrong. [online] soundguys.com. Available at www.soundguys.com/audio-bit-depth-explained-23706/ [Accessed 9 Nov. 2022].
2 Asher, J. (2022). Why use pre-fader metering in Logic Pro X? [online] macprovideo.com. Available at www.macprovideo.com/article/audio-software/why-use-pre-fader-metering-in-logic-pro-x [Accessed 9 Nov. 2022].
3 Hobbs, J. (2021). What is phase cancellation? Understand and eliminate it in your audio. [online] ledgernote.com. Available at https://ledgernote.com/columns/mixing-mastering/phase-cancellation/ [Accessed 9 Nov. 2022].
4 Bazil, E. (2014). Mixing to a pink noise reference. [online] soundonsound.com. Available at www.soundonsound.com/techniques/mixing-pink-noise-reference [Accessed 9 Nov. 2022].
5 NTI Audio. (2014). Fast Fourier Transformation FFT – basics. [online] Nti-audio.com. Available at www.nti-audio.com/en/support/know-how/fast-fourier-transform-fft [Accessed 9 Nov. 2022].
Unit 2
Panning a mix
DOI: 10.4324/9781003373049-2
However, for a stereo sound source, this isn’t so helpful. Stereo tracks have different information on the left and right channels. Using a balance knob on a stereo track will only adjust the left and right channel’s level rather than changing the whole sound source’s position to the left or right. So, if you pan a stereo track hard to the left with a balance knob, you will effectively just hear the left channel of the sound and none of the right.

Instead, you want to use a stereo pan pot on stereo tracks. This type of control will change the stereo channel’s absolute position between your monitors, not just adjust the left and right channel’s levels. This means you’ll still hear your entire stereo instrument, only in the stereo field position you want.
As a general rule, on mono tracks, use the default pan knob, a balance
knob; for stereo tracks, use a stereo pan knob.
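The difference between the two controls can be sketched in a few lines of code. Here’s a Python illustration of my own (the function names are invented): a constant-power pan law for a mono source, next to a balance control that only attenuates one side of an existing stereo pair:

```python
import math

def pan_mono(sample, position):
    """Constant-power pan of a mono sample.

    position: -1.0 (hard left) .. 0.0 (centre) .. +1.0 (hard right).
    Uses the common sin/cos law, so combined power stays constant as
    you sweep (centre sits about 3 dB down per side, not 6).
    """
    angle = (position + 1) * math.pi / 4    # maps to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

def balance_stereo(left, right, position):
    """A balance control only attenuates one existing channel, which is
    why hard-panning a stereo track with it simply discards one side."""
    if position >= 0:
        return left * (1 - position), right
    return left, right * (1 + position)

l, r = pan_mono(1.0, 0.0)
print(round(l, 3), round(r, 3))         # 0.707 0.707 (centre, -3 dB each)
print(balance_stereo(0.5, 0.8, 1.0))    # (0.0, 0.8): left channel gone
```

Notice that `pan_mono` redistributes the same signal between both outputs, while `balance_stereo` can only throw information away, which is the whole argument for using the right control on the right kind of track.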
The third option is a binaural pan control, which gets pretty tricky, and I’m not going to be covering it in this unit. In a nutshell, it’s a method of emulating human hearing by allowing you to position a sound source in front, behind, above, below, or to the left or right of the listening position.
TASK – Explore the pan pots in your DAW. Ensure you know what
pan pots you have available to you and how you change between the
different types.
TASK – Pay attention to the kick and snare in various records. Listen
to some acoustic tracks, some electronic, some old, some new. What
do you notice?
Day 6 – Hi-hats
Panning hi-hats is a hot topic of debate that people argue over for hours.
My opinion on it is simple. If you’re working with a real drum kit played by a human, then the hi-hat will be panned to whatever position it was on the drum kit. Often this will be around 60–70% to one side – whichever side it was on the kit. However, don’t feel that you must use it. You may find that you have enough hi-hat present in your overheads and that an additional hi-hat channel is overkill. It will depend on the part, the performance, and the recording. So, use your ears and producer’s intuition.
In electronic music, however, the rulebook is far less precise. Sometimes the hi-hats will stay very close to, if not straight down, the centre of your mix. Sometimes you’ll be working with a stereo hi-hat sample. From time to time, you’ll want to hard-pan your hi-hat. Frequently, you’ll be using multiple hi-hat samples in the same project. Occasionally you’ll use auto-panning so that the hats move around the stereo field. So, to put it simply, you can do what you want with your hi-hats in electronic music. In this case, I would advise using the guidance I just gave you: Think about having narrower verses and wider choruses to create movement in your mix. Keep your verse hats narrow and close to the centre and widen them out in your hooks with more movement and panning.
Day 7 – Toms
Panning toms on a live drum kit is reasonably straightforward. You want to place
them in the same position in the drum kit as they appear in your overheads.
Think back to the clock face technique from Unit 1. That’s your best starting
point for live drums. As a general rule, I wouldn’t go wider than about 75% to the
left or right for toms. But this is dependent on the size of the kit. How you pan
Travis Barker’s toms will be very different from what you do with Mike Portnoy’s!
For electronic toms, as with your hi-hats, there is no specific set of rules to follow. In the case of electronic music, do what is best for the song. If you want a crazy super-wide tom fill, go for it. But always do what is best for the music. Generally, you want each mix element to have its own space in your stereo field. So, don’t place your mad tom fill directly on top of your hats, shakers, or congas. Position each element in its unique spot so that it has a place to poke through the mix without interfering with another aspect.
Day 8 – Bass
Panning bass is simple: Don’t pan it! Your electric bass, along with your kick and snare, is what will root your track, so keep it centred. However, sometimes you may wish to add some subtle roomy width to create depth to your bass and make it feel a little more 3D. You can do this by creating a reverb send, but keep it tastefully low in the mix. Generally, I will let this come forward in sections of my track where the texture is thinner – most likely the verses, where there is more space in the arrangement for the bass room to be heard. But in the fuller sections of the track, you can back this off, allowing the bass to entirely focus back down the centre of your mix whilst the song is at its busiest.
Synth bass is a slightly different animal. For thick, subby basses with loads
of low-end and very little high-frequency content, you’ll want to keep these
up the centre of your mix. For a gritty, biting bass line with a lot more melody
that is used more like a riff or a bass lead, you’ll often want this to have more
width. Especially if you’re using an instrument in stereo, you’ll want to keep
it this way. In this instance, I suggest you keep everything below 100–120Hz
in mono, and anything above this can be in stereo. You can achieve this
in a couple of ways. Either use a mid/side EQ, low-pass everything in the
mid-channel below 100–120Hz, and high-pass everything in the side channel
above 100–120Hz. This method can feel a bit crude. Alternatively, duplicate
your track, so you have the same content twice. Low-pass one at 100–120Hz
and use a stereo imager to make it mono. High-pass the other at 100–120Hz
for your stereo content. You can then process both elements of your bass
sound independently and buss them back together to give you maximum con-
trol over your sound.
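To make the duplicate-track idea concrete, here is a minimal Python sketch (my own illustration, not from the book): a simple first-order low-pass stands in for your DAW’s filters, and the 110Hz cutoff sits within the 100–120Hz range suggested above. Function names are hypothetical.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """First-order (6 dB/oct) low-pass, applied sample by sample."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def split_bass(left, right, cutoff_hz=110.0, sample_rate=48000):
    """Duplicate-track method: return (mono_lows, stereo_highs)."""
    mono = [(l + r) * 0.5 for l, r in zip(left, right)]    # sum copy 1 to mono
    lows = one_pole_lowpass(mono, cutoff_hz, sample_rate)  # keep only the low end
    # Copy 2 is high-passed: the original minus its own low-passed version
    l_lows = one_pole_lowpass(left, cutoff_hz, sample_rate)
    r_lows = one_pole_lowpass(right, cutoff_hz, sample_rate)
    highs = ([l - ll for l, ll in zip(left, l_lows)],
             [r - rl for r, rl in zip(right, r_lows)])
    return lows, highs
```

You can then process the two halves independently, exactly as the text describes, before bussing them back together.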
Day 9 – Guitars
Let’s talk about panning guitars. The first vital thing to note is that guitars and voices share a lot of similar frequency content. So, if you put them in the same position in your stereo field, they will end up fighting each other. The loser will be your mix. Therefore, I recommend you aim to put your guitars in a different position than any vocal in your mix.
Unit 2: Panning a mix 29
For stereo tracks such as piano, organ, and pads, if you find that they get
a little boring when they are placed in one static position in your mix, you
can try utilising an auto-panner to add some movement to them. CableGuys’
Pancake is a great free plugin. Or Soundtoys’ Panman is a brilliant premium
option.
TASK – Review the keys and synths in some of your previous tracks.
Assess whether some of these parts may have been better off narrower
in the mix or entirely mono.
TASK – Study some pictures of full orchestras. Identify where the dif-
ferent sections are located. Create your own panning template based
on this.
Day 12 – Vocals
Always keep your lead vocal centred in your mix. I pretty much never
stray from this. If I have a single lead vocal in a track, it’ll be kept straight
up the middle. I may stray from this if I have a double-tracked lead vo-
cal, for example. Then I may pan each one slightly off centre. Similarly,
I may pan ad-libs off centre. But they’ll still be very close to the centre
of the mix.
The rulebook for backing vocals is much less easily defined. Modern productions often have layers and layers of backing vocals to help thicken the texture and give a lush and rich production. I love to find pairs of backing
vocals that are similarly weighted and pan them hard left and right to create
a super-wide feel to the BVs that won’t interfere with the lead vocal. Again,
you can alter the positioning between verse and chorus to make your mix
open out into the chorus or drop.
If you have fewer BVs that can’t so easily be hard-panned and balanced in this way, then again, you should look for a unique spot in your stereo field to position them. You can place a BV anywhere in your stereo field. Keep
in mind the main tips we’ve covered already: Don’t put it directly on top of
something else in your mix and keep things evenly balanced between your
left and right sides.
excellent for getting your mix up and running in a short time as it forces you
to make decisions quickly, assessing their impact immediately. Think of it like
this: When painting, instead of deciding if you want to paint something navy,
baby, or sky blue, determine if you wish to paint it blue at all or if, in fact, it
would be better off red. This is the LCR panning method in a nutshell.3
TASK – Using the multitrack stems for a new project, set it up using the LCR technique. How quickly can you balance your stereo field?
TASK – Go back over some of your previous mixes. Focus the low end
of your kick and bass channels below 120Hz into the centre of your
mix. How does this affect how the song feels?
elements in your arrangement change, how you balance things in your stereo field will likely need to adapt too. It’s improbable that all the tracks will be
playing in your song all the way through. As parts come in and out, you’ll
need to adjust your stereo balance to stop your mix from toppling to the left
or the right.
Working out how to keep your stereo field balanced at this stage, immediately after you have set up your static mix, will stand you in excellent stead moving forward. Separation in the stereo field is a massively overlooked component of a mix and is such a helpful tool when planned out carefully and strategically from the outset. You should use a balance meter, which will indicate whether your mix is nicely balanced or not.
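A balance meter of the kind described here is easy to sketch. The following Python snippet (an illustrative sketch with hypothetical function names, not a real metering plugin) computes both a left/right balance reading and the phase correlation that a correlation meter shows:

```python
import math

def stereo_balance(left, right):
    """Balance meter: -1.0 = all left, 0.0 = centred, +1.0 = all right (RMS-based)."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    l, r = rms(left), rms(right)
    return 0.0 if l + r == 0 else (r - l) / (r + l)

def correlation(left, right):
    """Phase correlation: +1 = mono-compatible, 0 = unrelated, -1 = out of phase."""
    dot = sum(l * r for l, r in zip(left, right))
    norm = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return 0.0 if norm == 0 else dot / norm
```

A reading that drifts away from zero balance as parts enter and exit is exactly the “toppling to the left or the right” described above.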
TASK – Review the panning in some of your previous mixes. Are they
evenly weighted throughout? Are things appropriately paired off and
balanced in the left and right sides?
TASK – Revisit an old mix. Find one or two elements within that mix whose position you can subtly alter to add some interest.
TASK – Make sure you have both a balance and a correlation meter that you like. Your DAW may have them as stock, or you may prefer to find a third-party option.
Day 22 – Crosstalk
Check in headphones to ensure your mix doesn’t sound too disjointed or un-
balanced. Lots of your listeners will listen on headphones at some point. And
you want your music to sound great across all listening experiences. Listen-
ing on headphones and through monitors is a very different listening experi-
ence. This is because headphones don’t have any crosstalk. Crosstalk happens
when information from the right monitor reaches the left ear and vice versa.
When listening through headphones, this doesn’t happen, so you hear just
the right side in your right ear and vice versa. This makes for reasonably con-
trasting listening experiences. Something that sounds great in headphones
may not sound so effective through monitors.
Conversely, lots of home producers may produce almost exclusively on
headphones. You should try to avoid this for precisely this reason. You may
TASK – Make sure you have more than one listening device in your
setup. As a minimum, you should have one pair of monitors and one
pair of headphones.
TASK – Review a previous mix. Pay close attention to any parts that
are playing rhythmic patterns. Are they balanced evenly, or are they
spread too wide? Adjust your mix accordingly.
of the Motown sound? Maybe you’re keen to play around with mono reverbs and their placement in the stereo field? There are so many variables to creating a vintage sound through your panning that it would be impossible to sum them up in one short chapter.
The best thing to do here is to study the records you’re trying to nod
towards. There’s no set of rules, as many engineers have experimented with
many different techniques over the years. Listen carefully to your reference
tracks and extract the parts you like.
TASK – Find a vintage record that you like which utilises some unu-
sual panning. Attempt to replicate the positioning in your mix.
As you’ve discovered by now, if you didn’t know already, panning is way more
involved than most people think. It is a massive part of making a mix work,
ensuring you can hear everything in your mix clearly, and, when used intel-
ligently, can make your mix shine. When used poorly, it can pretty much ruin
your mix! So, it’s well worth spending time practising this skill and getting
good at it.
In the next unit, we’re moving on to number three of the Big Four: EQ. I
know this is a topic that many of you will want to explore in a lot of detail, so
make sure your pencil is sharp.
Checklist
• Make sure you have the right types of pan pot set up
• Ensure you’ve decided on your mixing perspective
• Check your drum positions correlate with the overheads and rooms
• Focus your low-frequency content
• Employ complementary panning. Are your left and right sides balanced in
terms of numbers? Amplitude? Timbre? Rhythmic content?
Further reading
1 DRUM! (2021). What is Glyn Johns technique? [online] drummagazine.com. Avail-
able at https://drummagazine.com/glyn-johns-technique/ [Accessed 9 Nov. 2022].
2 Shaw Roberts, M. (2019). Why are orchestras arranged the way they are? [online]
classicfm.com. Available at www.classicfm.com/discover-music/orchestra-layout-
explained/ [Accessed 9 Nov. 2022].
3 Houghton, M. (2021). LCR panning pros and cons. [online] soundonsound.com.
Available at www.soundonsound.com/techniques/lcr-panning-pros-and-cons [Ac-
cessed 9 Nov. 2022].
4 Westlund, M. (2018). Essential tips for orchestral positioning and mix panning. [online] flypaper.soundfly.com. Available at https://flypaper.soundfly.com/produce/orchestral-positioning-mix-panning/ [Accessed 9 Nov. 2022].
Unit 3
EQ
Day 1 – Applying EQ
We’ve arrived at one of the big ones: EQ. Before we get into what EQ is, why
it’s so helpful, and how to use it, it’s essential to clarify that you shouldn’t
jump straight in at this point. If you skipped through the first two units to get to the meaty stuff, go back and do those first. EQ is a beautiful tool and
will help shape a lot of your mix’s clarity and character. The way to get the
most out of your EQ is to balance and pan your mix thoughtfully frst. If your
mix isn’t well balanced, then any EQ alterations you make will be from an
uninformed position. So, you won’t be able to gauge properly whether the EQ
changes you’re making are genuinely making a positive contribution to your
mix or not.
Similarly, if you EQ before you pan, you may end up EQ’ing things that don’t need to be touched. You may create the tonal separation you need between instruments by positioning them differently in your stereo field. If you EQ before panning, you may try to separate things too aggressively when in reality a bit of panning might get you most of the way there. So, don’t cut corners in getting to this point. Set yourself up for success by doing the first two stages in your mix process: Balance your mix carefully and pan it intelligently.
What are we going to cover in this unit? We’ll start by ensuring we under-
stand precisely what EQ is and how it works. We’ll talk about different types
of equalisers and their strengths and weaknesses. We’ll cover a range of vary-
ing EQ techniques and applications and look at common mistakes made with
EQ, and I’ll share some top tips for EQ’ing in an informed way.
DOI: 10.4324/9781003373049-3
fundamental is, if you like, the actual pitch of the sound. It will be the low-
est part of the sound source, the initial vibration. Looking at the graphic
display in Figure 3.1, you can see the fundamental sticking out. Overtones
are subsequent vibrations that have their origin in the fundamental. They
are generated based on something called the Harmonic Overtone Series. The
first overtone will be an octave higher than the fundamental. The second will be a fifth above this, the third a fourth above this, with each overtone mathematically becoming closer to the previous one.
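The arithmetic behind this is simple: the nth overtone is the (n+1)th harmonic, i.e. (n+1) times the fundamental. A tiny Python sketch of the series (my own illustration, not from the book):

```python
def overtone_series(fundamental_hz, overtones=8):
    """Return the fundamental followed by its first `overtones` overtones.
    The nth overtone is the (n+1)th harmonic: (n+1) * fundamental."""
    return [fundamental_hz * n for n in range(1, overtones + 2)]
```

For a 110Hz fundamental this gives 110, 220, 330, 440, . . . : the first overtone is an octave up (ratio 2), the second a fifth above that (ratio 3:2), the third a fourth above that (ratio 4:3), with each interval narrower than the last.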
‘Why is this important to know about?’ I hear you ask. . . . Well, to put it
simply, as the Harmonic Overtone Series1 continues and gets higher in pitch,
some of these overtones will begin to clash with the fundamental. Depend-
ent upon the sound source’s timbre, the key of your song, and the note being
played, you may wish to reduce or enhance certain aspects of a sound source
to bring out or reduce certain characteristic features that you like or dislike.
Understanding this concept of targeting specific elements of a sound source and why you may wish to do so will transform how you approach EQ. Rather than just fiddling until you decide you like the sound of something, you can train yourself to listen for things you like and dislike and target these specific areas.
[Table: frequency characteristics. Columns: the frequency bands, from sub-bass across to extremely high end. Rows: too much, balanced, and not enough.]
third-party plugins. Learn to EQ with the stock plugins that come with your DAW first.
With that said, let’s figure out what these different types of equalisers look like. The parametric EQ is the most common sort you’ll come across. It offers you a lot of control, allowing you to adjust the sound’s frequency content in a wide range of different ways. Commonly, they’ll have a visual representation of the soundwave within the plugin, allowing you to see how you affect the sound you are EQ’ing. Graphic EQs, in contrast to parametric EQs, have fixed frequency bands. You can only boost or cut the frequencies at the specified points on the EQ rather than adjusting the precise target frequency, as you can on a parametric EQ. A shelving EQ allows you to boost or cut frequencies in the high or low end above or below a specified target frequency. So, everything above or below the target frequency will be affected, rather than being able to home in on a specific target frequency, as with a parametric or graphic EQ. A dynamic EQ allows you to target a frequency and set a threshold for that band. When that threshold is exceeded, you can instruct the EQ to either compress or expand the targeted frequency.
By learning how each type of equaliser behaves, you will be able to select the right tool for the job. Each equaliser has its strengths and weaknesses. A parametric EQ is excellent for finding and attenuating a specific frequency but is not so good at shaping a sound’s character. A graphic EQ is the opposite: Great for shaping character but not very good for targeting a specific frequency point. Shelving EQs are great for making space in your mix’s high or low end but not much else. Dynamic EQs are handy for taming a specific frequency or making space in your mix at certain times without permanently losing that frequency from the sound. Learn the tools you have available so you can use them most effectively. You wouldn’t use a screwdriver to hammer in a nail, would you?
TASK – Look at all the stock EQs that come with your DAW. Identify
what type of equaliser they are.
Day 5 – Anatomy of an EQ 1
Whatever sort of equaliser you’re using, you’ll find the same anatomy across all of them. When working with a new EQ plugin, the first thing you should do is study it to understand precisely what controls you have to work with. The first control you’re likely to come across is the frequency band. Sometimes these will be moveable, like on most parametric EQs, and sometimes they’ll be fixed, as on graphic EQs. The frequency band allows you to select, or at least know, what frequency that band will be affecting. By knowing this and referring to our chart from before, you’ll quickly assess whether a cut or boost at that frequency will provide the characteristic you’re after.
The second ubiquitous control you’ll come across is a filter. Lots of EQs will have high-pass and low-pass filters built in. These are self-explanatory: They allow everything above or below a specified frequency to pass through, hence the name high- or low-pass. These can be presented in several ways depending upon the EQ’s style, so make sure you learn where they are. You’ll use them all the time to tidy up sounds and remove undesirable frequencies in the high and low end.
Day 6 – Anatomy of an EQ 2
Along with the filter, there are two other types of EQ band you’ll come across: These are the shelf and the bell. A shelf is similar to a filter in that it affects everything above or below a specified point. Unlike a filter, though, you can boost a shelf as well as cut it, and you don’t have to cut or boost all the way as a filter does; you can just shelve a bit.
There are three other vital controls you’ll come across on your equaliser. On your filters, you may have a slope control. The slope designates how aggressively the frequencies are filtered above or below your specified cut-off frequency. This is typically stated in dB/octave. Sometimes you’ll want a steep slope to cut most of the sound, and sometimes a gentler slope may be appropriate where you wish to tidy gently and non-aggressively.
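As a rough rule of thumb, each octave past the cut-off adds another slope’s worth of attenuation. A quick Python sketch of that back-of-envelope arithmetic (my own illustration; real filter responses curve gradually near the cut-off rather than following a straight line):

```python
import math

def stopband_attenuation_db(freq_hz, cutoff_hz, slope_db_per_octave):
    """Approximate attenuation of a high-pass filter below its cut-off:
    every octave below the cut-off adds another `slope_db_per_octave`."""
    octaves_below = math.log2(cutoff_hz / freq_hz)
    return max(0.0, octaves_below * slope_db_per_octave)
```

So with a 100Hz high-pass at 12dB/octave, content at 50Hz (one octave below) is down roughly 12dB, and at 24dB/octave it would be down roughly 24dB: the steeper the slope, the more aggressive the cut.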
You’ll also come across the Q or bandwidth control. The Q control applies
to the width of a frequency band. Sometimes you’ll want to target a precise
frequency and will therefore require a narrow Q that only impacts on a small
frequency range. Other times you may want to be gentler, most likely when
boosting a frequency range, so a broader Q would be appropriate. You won’t find a Q control on every EQ, though. Many EQs have bands with fixed Qs.
The other noticeable thing you’ll encounter is the gain control. The gain
control simply allows you to cut or boost your targeted frequency range by
your specifed amount. Once these controls are locked in your mind, you’re
50% of the way there. As with selecting the appropriate equaliser for the job, you also want to choose the most appropriate EQ type (a filter, shelf, or bell) to do the job most efficiently.
TASK – Explore your stock EQs further. Do any of them have shelves? If they have filters, do they have slopes? Do your bands all have Q/bandwidth control? Have you found the gain control for each band?
Day 7 – Subtractive/corrective EQ
There are only two methods of EQ’ing. The first is subtractive or corrective EQ. This applies to any cut you make. Almost all recorded sounds will have
parts of their sounds that are less pleasant than others or are entirely un-
necessary. Recordings pick up undesirable frequencies for all sorts of reasons,
whether it’s the microphone being used, the room you’re recording in, or
something physically within the sound source being recorded. Even synthe-
sised sounds can have unwanted aspects to them. Identifying and reducing
these undesirable or unnecessary frequencies will give you a cleaner, more
pleasant sound to work with. Learning to subtract unpleasant frequencies
sensitively is key to retaining the character of a sound whilst creating the
space in your mix that you desire. Applying subtractive EQ too aggressively
can result in removing frequencies that you would rather keep in your sound
source and often means you end up sucking the life out of your sound, making
it feel thin, shallow, or lacking character.
Your subtractive EQ’ing will often be the first thing you do to a sound in your signal chain. You’ll want to remove unpleasant or undesirable frequencies initially before applying any other processes to a sound source. That’s not to say that you can’t use further subtractive EQ later in your signal chain, but it’s a good idea to do it first and foremost to ensure you’re working with the cleanest, best version of your sound source from the beginning.
It’s important to note at this stage that any subtractive EQ move will
result in a quieter sound. Reducing or cutting a frequency from a sound is
literally taking that part of the sound away, so you must gain stage after apply-
ing subtractive EQ to ensure that your moves are good ones and are not just
making your sound source quieter.
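Matching levels after a cut can be reduced to simple arithmetic: compare the RMS level before and after the EQ and apply the difference as makeup gain, so you judge the cut on tone rather than loudness. A Python sketch (hypothetical helper names, my own illustration):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def makeup_gain_db(pre_eq, post_eq):
    """Gain (in dB) needed to bring the EQ'd signal back up to the
    pre-EQ level, so the comparison isn't skewed by loudness."""
    return 20.0 * math.log10(rms(pre_eq) / rms(post_eq))
```

If a cut halves the RMS level, this returns roughly +6dB of makeup, which matches the familiar rule that a doubling of amplitude is about 6dB.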
Day 8 – Additive/creative EQ
The second method of EQ’ing is additive or creative EQ. This applies to any
frequency that you wish to boost. There will often be parts of a sound that are
very pleasant that you’ll want to hear more of. There’s nothing wrong with
boosting something that you like. However, I would suggest you proceed with
this word of caution in mind: Boost with a wide Q. Keeping the Q wide when
boosting will sound much more natural than using a narrow one. Narrow
boosts will almost certainly sound unnatural and will end up poking through
your mix in unpleasant ways. Boosting broadly means the effect is gentler
and, therefore, more natural sounding.
TASK – Read the manual(s) that came with your microphone(s). You
should know about its frequency response, capsule location, maximum
SPL, impedance, and polar pattern.
TASK – Every time you reach for an EQ over the next few days, ask
yourself, ‘What am I trying to achieve?’ Don’t EQ anything without
having an intention!
tinny, or dull. It’s also about joining the dots between these characteristics and where they lie in the frequency spectrum. By knowing by heart what frequency characteristics are likely to be problematic and where to find them in the spectrum, you will save yourself an enormous amount of time and energy. So, print out your frequency chart, stick it on the fridge, and read it three times a day until it’s committed to memory. Once you have it engrained in your mind, you’ll wonder why you didn’t learn it sooner.
offending frequency and tame it without affecting everything else around it.
By removing the nasty stuff from a sound, you will be making more space in
your mix for other things to come through.
The first technique for doing this is known as the sweep EQ technique. Create a narrow band and boost it slightly, then sweep it slowly across the frequency spectrum until you find a pokey frequency. By pokey, I mean that, as you sweep your narrow band, some points will jump up in volume compared to other points. These are the pokey frequencies, which generally denote that too much of that frequency range is present in the sound. These pokey bits are the bits you’ll want to attenuate. Go gently when you cut, though. Don’t feel you need to pull them out completely. Go with a 2–10dB reduction and see how you go. Some producers don’t like this technique because it’s easy to overdo it if you’re heavy-handed. But if you proceed with caution and aim to keep things sounding natural, you’ll be fine.
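Under the hood, sweeping for pokey frequencies amounts to measuring how much energy sits in each narrow band and flagging the ones that jump out. A crude Python sketch of that idea (a single-bin DFT, my own illustration; your EQ’s spectrum analyser does this far more efficiently):

```python
import math

def band_magnitude(samples, freq_hz, sample_rate):
    """Crude single-bin DFT: how much of `freq_hz` is present in the signal."""
    w = 2.0 * math.pi * freq_hz / sample_rate
    re = sum(x * math.cos(w * i) for i, x in enumerate(samples))
    im = sum(x * math.sin(w * i) for i, x in enumerate(samples))
    return math.hypot(re, im) / len(samples)

def find_pokey_frequency(samples, candidate_hz, sample_rate=48000):
    """'Sweep' a set of candidate bands and return the one that jumps out."""
    return max(candidate_hz, key=lambda f: band_magnitude(samples, f, sample_rate))
```

The band that measures loudest relative to its neighbours is the one you would reach for with a narrow cut.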
Removing the nasty stuff also includes high-pass filtering. You’ll often find a lot of buildup of low frequencies in sounds that is entirely unnecessary and just clogs up your mix. Looking at a parametric EQ, you’ll be able to see the fundamental frequency and any low-end nonsense below it that isn’t part of the sound you want. You can see this back in Figure 3.1. Look to remove this with a high-pass filter to keep your frequency spectrum tidy and allow your kick and bass maximum room to shine through. As a rule, gently roll up your high-pass filter until you notice it starting to affect the sound, and then back it off again until you can no longer hear it making a difference. This will ensure you’re going gently enough and not removing any of the frequency content you want to keep.
sound they are affecting. This is why producers will often select an analogue
model for tone-shaping EQ moves.
In terms of the moves that you make here, you’ll most likely be looking
to make more of the parts of sounds that you already like. So, if you want to
make more of a sound’s warmth, presence, or air, now is the time to do it.
Wider Qs are the way to go here. You already discovered, when you swept a narrow boost during your subtractive EQ, that a narrow boost can quickly become pokey. Therefore, boosting broadly is the way to go.
Most often, this is the time to use shelves in the high end too. Rather than adding a bell and boosting at 15k to add air, for example, you may be better served to select a shelf and roll it up gently from 10k. This will make the overall high-end boost smoother and less noticeable. For this reason, it can be excellent practice to perform tone-shaping EQ with an EQ that doesn’t have a spectral analyser. This will encourage you to EQ with your ears rather than your eyes, which is a good thing. This is another benefit of using an analogue model, as these will almost certainly not have any graphic read-out of the frequency spectrum.
you can match the exact peak level using your channel meter. There are also a few EQ plugins that have automatic gain compensation built in. FabFilter’s Pro-Q 3 is one of these. But the TDR VOS SlickEQ is a great free option that automatically adjusts your signal’s gain to match the input and output level.
TASK – Practise gain staging your equalisers. Can you match your
input and output signals precisely? Does this help in making better
assessments of the quality of your EQ moves?
TASK – Explore low cuts and high boosts with shelving EQs. Com-
pare this with cutting and boosting at the same frequency point with a
bell. Can you hear the difference?
stay in control of your mix’s overall level. If you shape your mix’s sound with
additive EQ, your level will creep up and up, resulting in a loss of headroom,
ultimately making it hard for you to get a good result out of your master.
Whilst it’s impractical to suggest shaping a mix entirely with subtractive
EQ, I would advise you to try and make the bulk of your tone-shaping de-
cisions with subtractive EQ and use additive EQ to shape the character of
sounds. By focusing on subtractive EQ, you are more likely to spend time
creating space in your mix for things to shine through, giving you better bal-
ance overall.
high-pass even my bass, but just gently around 30Hz, removing frequencies that are almost imperceptible to the human ear. As instruments get higher, I’ll high-pass more aggressively. So, on a lead line with a fundamental around 600Hz, I may use a slope of 18dB/octave or more to ensure it’s spotless. Experiment with the slopes on your filters. The steeper the slope, the more aggressive the cutoff will become. The slope amount can also shift the perceived cutoff frequency.
Another prevalent issue is the build-up of the ‘muddy’ frequency range.
We already defned this frequency range earlier, but the most common culprits
will lie between 250–350Hz. To keep this section of your frequency range in
check, consider applying a small cut of 2–3dB somewhere in this range in
instruments that have this content present. Don’t be too heavy-handed with
this, or you’ll end up sucking the guts out of your track, and it’ll lack body.
TASK – Find a mix that you feel lacks clarity. Try to create some clar-
ity by cutting some of the muddy frequencies between 250–350Hz.
Does this help, or is the problem elsewhere?
Day 25 – EQ in mono
I have a few more tips for you in this unit. First, try to apply your EQ moves
in mono. Or at least check all your EQ moves in mono. Bear in mind that
some people will ultimately listen to your music in mono through no fault
of their own. So, your mix needs to translate. By checking all of your EQ
moves in mono, you are forced to create space and separation in your mix.
You shouldn’t just rely on panning to create separation. Yes, I recommend do-
ing your panning before EQ, as I feel this gives you a better impression of your
mix and allows you to have a better balance from the outset. This shouldn’t
be done instead of careful frequency allocation, but in addition to it.
One of the dangers of applying EQ exclusively in stereo is that you can end up with phasing issues. For example, you may think that your guitar panned halfway left and your keyboard panned halfway right sound brilliant in stereo. And they probably do. But when summed to mono, they may conflict with each other. You truly need to check every element of your mix in mono to guarantee that your mix will translate wherever it is played.
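One way to quantify this is to measure how much level disappears when you sum to mono. A small Python sketch (my own illustration, not a real metering tool):

```python
import math

def mono_loss_db(left, right):
    """Level lost (in dB) when a stereo mix is summed to mono. Large
    numbers indicate phase conflicts that stereo playback was hiding."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    stereo_level = (rms(left) + rms(right)) / 2.0
    mono_level = rms([(l + r) * 0.5 for l, r in zip(left, right)])
    return 20.0 * math.log10(stereo_level / max(mono_level, 1e-9))
```

Identical in-phase channels lose nothing in mono; fully out-of-phase content cancels almost completely, which is exactly the conflict described above.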
Day 26 – Dynamic EQ
Dynamic EQ should be your best friend. Take this example: Your kick sounds excellent on its own, and so does your bass, but together they lack definition. The bass seems to be masking the kick and sucking the punch out of it. What do you do? Do you reach straight for a bell EQ and cut the frequency you like in the kick from the bass to make room for it? You certainly could do this, but then you won’t have that frequency in the bass at all. And the kick isn’t playing all the time, as the bass is. The solution here is to use a dynamic EQ band with the sidechain input set to the kick. In this way, you can select the same frequency point where you want the kick to poke through the bass, but rather than losing it forever, you can use the kick to trigger the dynamic EQ, only pulling this frequency out when the kick strikes and leaving it in the rest of the time.
This is a superb technique that can be used across your mix in many differ-
ent ways. You could use it to sidechain duck a specifc frequency in your lead
guitar or synth from your lead vocal to give your vocal space to come through
more when it performs. Or you could use it to sidechain duck a frequency
in your vocal reverb so that it gets out of the way of the lead vocal whilst it
performs but blooms when the vocal exits.
Once you get this technique down, you will not look back. I guarantee it!
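The kick/bass example can be reduced to a simple gain rule for the sidechained band. A Python sketch of that behaviour (my own illustration; the threshold and cut amounts are arbitrary, and a real dynamic EQ also smooths the gain with attack and release times):

```python
def duck_gain_db(kick_level_db, threshold_db, max_cut_db=6.0):
    """Gain of the bass's dynamic EQ band at the kick's frequency: flat
    while the sidechained kick sits below the threshold, cut while it
    strikes, up to a maximum cut."""
    if kick_level_db <= threshold_db:
        return 0.0                       # kick quiet: bass keeps the frequency
    return -min(max_cut_db, kick_level_db - threshold_db)
```

Between kick hits the band returns to 0dB, so the bass only gives up that frequency for the moments the kick actually needs it.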
adding warmth and shine to kicks, guitars, and even your mix buss. Secondly,
an SSL E or G channel strip. These sound great on electric guitars, vocals,
and drums. Next, get a Neve 1073 or 1084. They’re often described as warm,
fat, and full. Try them on drums, bass, and vocals. Fourth, an API 550 and
560. They’re excellent on kick, snare, bass, and guitars.
This is more than enough to get you started. You can probably fnd six to
eight different manufacturers offering emulations of each of these models.
A good option is Waves, who will often do reasonably priced bundles and
discounts. I’ll stress this again for emphasis: Get one of each and learn it. Use
it lots and explore its characteristics until you really understand what tonal
qualities you can get out of it. There is no right or wrong EQ to select.
TASK – This piece of work will take some time. Get one of the ana-
logue emulations mentioned previously and learn it thoroughly. Read
up on it, watch videos about it, and learn its history and best applica-
tions. Be comfortable using it. Do all this before moving on to another
model.
That is an awful lot of content to have covered. Don’t expect all of this to
have stuck in just 28 days. You should expect to revisit these notes regularly
until these principles are engrained in your practice.
Checklist
• Are you comfortable with the concept of fundamentals and overtones?
Can you easily identify a sound’s fundamental?
• Have you learned the different frequency bands and their characteristics by heart? Do you feel confident identifying these characteristics in sounds?
• Do you know all the different types of equalisers? Do you understand their
anatomies?
• Have you established a routine of intent when EQ’ing? Are you always
focusing on achieving one of the big four?
• Do you gain stage accurately post-EQ?
• Are you prioritising cuts? Are you primarily EQ’ing in context? Are you
checking your EQ moves in mono?
• Have you befriended dynamic EQ?
• Have you explored the different characteristics of some analogue EQ
models?
You should move on to the following chapter only once you can answer yes
to all these questions.
Further reading
1 Saus, W. (2022). The harmonic series. [online] oberton.org. Available at www.oberton.org/en/overtone-singing/harmonic-series/ [Accessed 9 Nov. 2022].
Unit 4
Compression
DOI: 10.4324/9781003373049-4
TASK – Explore all the stock compressors that come with your DAW.
Identify where the ratio control is on each. Look at some third-party
options, too, if you have them. Do they all have ratio controls? Some
compressors don’t!
your threshold set to zero. No part of your signal will exceed zero (unless it’s
clipping, deliberately or otherwise). As you reduce the threshold, it will eventually drop below the highest peaks in your signal. Once there is some signal above the threshold, the compressor will begin to work. But the compressor only acts on those parts of the signal that exceed the threshold, not on the whole signal.
It’s a bit like your tax-free allowance on your income (in the UK, at least). You’re allowed to earn X amount each year, on which you don’t have to pay any income tax. But once you exceed that amount, you must pay income tax on everything above it, whilst the tax-free amount remains untouchable. Moving the threshold changes the amount of tax-free income you have, or, in audio terms, the amount of your signal that will avoid being compressed. The further you reduce the threshold, the more of your signal will exceed it and be compressed.
To bring this full circle with the income tax metaphor, the ratio is effec-
tively the tax rate you pay on everything above the threshold. A 2:1 ratio is a
tax of 50%, meaning you give up 50% of your signal to the compressor gods.
Simple, right?
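The threshold-and-ratio arithmetic can be written out directly. A Python sketch of the static compression curve described above (my own illustration):

```python
def compressed_level_db(input_db, threshold_db, ratio):
    """Static compressor curve: level below the threshold is 'tax-free';
    everything above it is scaled down by the ratio."""
    if input_db <= threshold_db:
        return input_db                  # untouched, below the allowance
    return threshold_db + (input_db - threshold_db) / ratio
```

So a -6dB peak into a compressor with an -18dB threshold and a 2:1 ratio comes out at -12dB: the 12dB of overshoot is taxed at 50%, leaving 6dB of it.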
transients through but act more on the subsequent signal of a sound after the
initial strike, use a slower attack time. Faster attack times sound less natural,
and slower attack times more so.
The release is the time it takes the compressor to return to its neutral posi-
tion once the signal has fallen below the threshold. Understanding this state-
ment is quite simple, but what it means in practice is less obvious. How long
do you want your compressor to take to return to normal? How long is a piece
of string? My general rule is this: On anything that is a fast, transient-heavy
sound (such as a drum), I look for the gain reduction meter to just return to its
neutral position before the next strike happens. In effect, you are looking to
make your needle bounce in time with the track. This means you allow your
compressor to return to being off before the next bit of sound comes through it,
so the new sound can be processed in the same way as what came before it. For
less transient-heavy tracks, such as vocals or strings, I again look for a release
that breathes with the part’s phrasing. So, the release time will be dependent
upon the track’s tempo and the rhythm of the part you are compressing.
The footnote here is that lots of compressors have automatic release con-
trols. These can be useful if you're looking to speed up your workflow and
are generally reasonably reliable.
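Because the release should breathe with the track, a tempo-based note length is a sensible starting point for dialling it in. A quick sketch (my own helper, not any plugin's API):

```python
def note_length_ms(bpm: float, fraction: float = 0.25) -> float:
    """Length of a note division in milliseconds at a given tempo.
    fraction: 0.25 = quarter note, 0.125 = eighth note, and so on."""
    quarter_ms = 60000.0 / bpm  # 60,000 ms per minute, one beat per quarter
    return quarter_ms * (fraction / 0.25)

# At 120 BPM, an eighth-note release lands around 250 ms, letting the
# gain reduction needle bounce back before the next hit.
print(note_length_ms(120, 0.125))  # → 250.0
```

Treat the result as a starting point only; as the text says, the part's rhythm and phrasing decide the final value.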
TASK – Now, find the attack and release controls on your compressors.
Do they all have them? Again, some don’t.
TASK – Identify these final elements on your compressors (if they are
there). Now you can begin to explore dialling in some compression
settings. Experiment with different ratios with a fixed threshold. Ex-
periment with different attack and release times with fixed threshold
and ratio. How do these parameters affect the sound?
with a variable rate. For example, there’s often a slight delay before the attack
kicks in. The harder you hit an optical compressor, the faster its initial release
time will be. The return to being off will be sloped, with the release getting
slower and slower as it falls.
The result of all this is that optical compressors are very smooth and there-
fore musical. They don’t jump or jolt but rather they glide. Consequently,
they’re great on vocals and other melodic elements that require ‘rounding
out’. The classic optical models are the LA-2A and 3A. Note how they lack
attack and release controls.2
The first consideration is the input level. For a preset to be of value, your
input signal would need to be identical for the preset you’ve stored to work in
the same way as it did previously. Sure, you can just go back to the input level
and adjust or alter the threshold – no big deal. But then you need to consider
the attack and release time. Unless the actual shape of the sound coming in
is also identical, your attack and release settings will need to be tweaked too.
And by the time you’ve done all that, you might as well have started from
scratch. That’s my view, anyway.
As for applying compression in solo, here’s my thinking: Compression is
used to assist in allowing an element to sit better in the context of your mix.
If you were going to listen to something in isolation, you would not need to
compress it as it would have nothing else to compete with. Therefore, the
compression only matters in the context of the mix, not out of it. The ques-
tion you should ask yourself is, ‘Am I getting the consistency in dynamics
required to make this sit in my mix better?’ You can only answer this question
if you are listening in the context of your mix. With that being said, I under-
stand that to hear the subtle nuances of the attack and release, it is beneficial
to check in solo. But try to train yourself to make the final decision on your
compressor settings in the context of the mix and not in solo.
bring them back in line with the rest of your signal. Then follow this with an
optical compressor such as an LA-2A. With this compressor, you can gently
round out the sound in a musical way. You can benefit from the familiar char-
acteristics of different compressor architectures in this way whilst gradually
achieving a more desirable result.
more of your signal to exceed the threshold and trigger the compressor, giving
you more signal to hear the impact that your attack and release settings are
having. You can dial in your attack and release times from here. Then, once
you’re happy, back your threshold to a reasonable level where you achieve the
gain reduction you desire.
This technique is advantageous when trying to learn a new compressor’s
characteristics.
TASK – Spend some time practising getting your attack time right on
drums. Focus on ensuring you retain the punch and impact of the drum.
have a load of harmonies too. These come in and out at various points in the
track. So, you create a vocal buss and send both your lead vocal buss and your
harmonies buss to this. Again, you compress gently once more, this time with
a Variable Mu. Here you’re looking to join them all up, warming them nicely
with the valve emulation and making them feel like one performance rather
than multiple individual tracks. You only want 1–2dB of gain reduction here,
but the additional valve emulation in the Variable Mu provides the extra
harmonic richness that beautifully ties all the vocals together.
The objective with all these small moves is to gradually work towards an
end goal that is well controlled. 2dB here and 1dB there won't have much of
an impact on its own, but cumulatively it all contributes positively.
• FET – super-fast, impart desirable colour, great for guitars, drums, and ag-
gressive vocals.
• Variable Mu – valve sound is rich, performs slowly and is super smooth,
great for groups and master busses.
TASK – Spend some more time focusing on your gain staging. It’s such
an essential skill that it’s worth spending the extra time.
more elements into it, something that sounded great before may not sound
so good later. As you know from our discussions on frequency allocation,
as frequency areas build up and become congested with different sounds, it
becomes more difficult for things to cut through the mix and be heard clearly.
In particular, you will often find that, as your mix grows, the front end or
attack of sounds becomes less defned, becoming a little blurred.
You may think that reaching back for the attack dial on your compressor is
the way to go, and it could be. But you have probably already invested some
time in dialling that in to shape the sound in a way that is pleasing to you. So
perhaps adjusting that again now isn’t the way forward.
Instead, you can reach for a transient shaper. Transient shapers are a se-
cret weapon for producers. Their job is to shape the transient response of a
sound without affecting the overall level. Unlike compressors, transient shap-
ers transparently sculpt the shape of the sound from attack to sustain. They
traditionally just have two controls on them: Attack and sustain. So, if you
find that you've suddenly lost the punch of your kick or the thwack of your
snare, grab a transient shaper and try dialling in the attack to help it poke
back through your mix.
Day 25 – Sidechain EQ
The world of sidechaining doesn’t end there, though. Some compressors will
also come with a sidechain EQ function. In the same way that sidechain
compression allows you to use a specific (external) source to trigger the com-
pressor, sidechain EQ enables you to use a particular frequency or frequency
range to affect how the compressor behaves. There are two main types of
sidechain EQ.
Firstly, you may come across a high-pass filter on the sidechain. This
filter allows you to roll off the bottom end of a signal, meaning that the low
frequencies will no longer trigger the compressor. Note that this high-pass
filter isn't filtering the sound you're hearing, only the sound the compressor
is responding to. This technique is often used when compressing busses or
whole mixes to prevent undesirable pumping produced by the kick and bass
triggering the compressor.
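The crucial point, that the filter shapes only what the detector hears while the audio path stays untouched, can be sketched with a simplified static model (made-up levels, my own function name):

```python
def compress_with_sidechain(audio_db: float, detector_db: float,
                            threshold_db: float, ratio: float) -> float:
    """Gain reduction is computed from the detector signal (which may be
    filtered), but applied to the unfiltered audio path."""
    overshoot = max(0.0, detector_db - threshold_db)
    gain_reduction_db = overshoot - overshoot / ratio
    return audio_db - gain_reduction_db

# A bass-heavy moment on a mix buss: the full-band level is -6dB, but
# after high-passing the sidechain the detector only 'sees' -20dB.
print(compress_with_sidechain(-6, -6, -10, 4.0))   # → -9.0 (pumping)
print(compress_with_sidechain(-6, -20, -10, 4.0))  # → -6.0 (no pumping)
```

With the low end filtered out of the detector, the kick never pushes it over the threshold, so the compressor stops pumping on every downbeat.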
Secondly, you may come across a dedicated sidechain EQ section in your
compressor. This will allow you to boost or attenuate specific frequencies
feeding your compressor to make it respond more to particular frequency
areas. This technique is often used to de-ess vocals. By raising the sibilant
frequencies in the compressor’s sidechain EQ, the compressor will then act
more on these sibilant frequencies. Another example is on drum room mikes.
You can boost the upper frequencies in the sidechain EQ, so the compressor
works more on the cymbals, but reduce the low frequencies being fed in, so
the shells are not acted upon as much.
As you can see, sidechain EQ is a powerful device for shaping how your
compressor responds to your sound. Don’t expect to nail this technique overnight.
It’s advanced stuff!
TASK – A/B the same compression settings with a stock digital compressor
and with an analogue emulation. Can you hear the difference between the
two? Does one sound better to your ear?
This unit has been especially technical. I expect you’ll want to go back and
reread some chapters to consolidate what you’ve learned.
But what’s next? The last 10%, obviously! And this is where the exciting
stuff happens, where you get to be creative rather than technical. Next up,
we’ll look at creating depth in your mix with time-based effects.
Checklist
• Are you comfortable with what compression does and the anatomy of a
compressor?
• Are you familiar with the four analogue architectures? Can you select the
most appropriate one for the job?
• Are you confident in identifying where best to compress in your signal chain?
• Have you explored parallel compression and understood its benefits?
• Have you practised gain staging post-compression extensively?
• Do you understand the benefit of serial compression?
• Have you experimented with transient design to complement your
compression?
• Have you conquered the challenging subjects of sidechain compression
and sidechain EQ?
• Have you begun to explore the benefits of multiband compression vs
broadband compression?
You should move on to the following chapter only once you can answer yes
to all these questions.
Further reading
1 Allen, E. (2016). Vintage King’s guide to VCA compressors. [online] vintageking.com.
Available at https://vintageking.com/blog/2016/03/vca-compressors/ [Accessed 9 Nov.
2022].
2 Sibthorpe, M. (2022). Optical compressors explained. [online] fortemastering.com.
Available at https://fortemastering.com/optical-compressors-explained/ [Accessed 9
Nov. 2022].
3 Fuston, L. (2022). UA’s classic 1176 compressor: A history. [online] uaudio.com.
Available at www.uaudio.com/blog/analog-obsession-1176-history/ [Accessed 9
Nov. 2022].
4 Fox, A. (2022). What is a Variable-Mu (tube) compressor and how does it work? [on-
line] mynewmicrophone.com. Available at https://mynewmicrophone.com/
what-is-a-variable-mu-tube-compressor-how-does-it-work/#:~:text=What%20is%20
a%20variable%2Dmu%20compressor%3F,reduction%20in%20the%20overall%20le-
vel [Accessed 9 Nov. 2022].
5 Haroosh, I. (2021). What is saturation? And how to use it [online] wealthysound.
com. Available at https://wealthysound.com/posts/what-is-saturation-how-to-use-
it [Accessed 9 Nov. 2022].
Unit 5
Reverb
DOI: 10.4324/9781003373049-5
86 Unit 5: Reverb
Figure 5.1 Illustrates the differences in time between direct and indirect sound.
Source: Image by Sam George.
that things sound unnatural, hollow, and thin without it. So, the simple an-
swer to the question is that you need reverb for something to sound natural.
So why do recording studios work so hard to treat their spaces to remove as
much reverb as possible? If our brains desire reverb, surely we should just leave
all our recording spaces untreated. This would be a much cheaper solution!
In some cases, this is precisely what happens. As a child, I was lucky enough
to be taken to the amphitheatre at Epidaurus. My Dad told me to stand right at
the top of the theatre in the seats furthest away from the playing area, and he
stood down in the centre. He then spoke to me at barely more than a whisper.
And to my surprise, I could hear every word he said with perfect clarity. It
turns out the Ancient Greeks understood acoustics pretty well, as they delib-
erately built their theatres to capture as much sound as possible to allow every
patron to hear the players, however cheap their seats were.1
But of course, the Ancient Greeks had no concept of recording this sound.
Their goal was simply to ensure everyone could hear what was going on. The
issue with untreated acoustic spaces is that they will only ever sound like
themselves. If you’re AIR Studios in London or the Royal Albert Hall, this
isn’t such a bad thing. Having the characteristic of those spaces baked into
your recording is highly desirable. But the issue is that once the natural rever-
beration of a recording venue is committed ‘to tape’, it cannot be removed.
It’s there forever. So, it will only ever sound like that space. The reason peo-
ple desire clean recordings with no noticeable reverb is so that they can add
reverb later. In this way, they can add whatever reverb they want and make it
sound like it was recorded in the Sistine Chapel, the Sydney Opera House, or
Ronnie Scott’s, if they so wish.
TASK – Examine the reverb plugins you have. Can you identify the
reverb parameters mentioned previously (time, volume, absorption)?
TASK – Implement a hall reverb into a mix. Experiment with where you
use it, i.e., try it on individual components, groups, and the whole mix.
How does where you implement it affect your mix's clarity?
Chamber reverbs are similar to halls in many ways in that they deliver a
lush, ambient-soaked sound. But unlike hall reverbs, they offer an additional
sense of clarity you don’t get from the washed-out feeling of a hall reverb.
Chamber reverbs generally sound pretty neutral, so they work well on all
sorts of sounds, especially vocals, strings, and acoustic guitar. They’re perfect
for a John Bonham-like drum sound too. They’re especially great on small
ensembles and classical music.
TASK – Take the same mix you used for your hall reverb experiment.
Swap the hall out for a chamber reverb. How do their characteristics
compare? Does one work better than the other in some places?
TASK – Same task as before, but insert your room reverb this time.
This will feel quite different. You'll definitely prefer it in some in-
stances, but not in all.
TASK – Use a plate reverb on your vocals and snare drum. Do you like
the shiny quality it brings? Compare it with something larger, like a
hall. Consider circumstances in which the different reverb types may
be more/less suited.
TASK – Do a bit of digging for IRs online. Can you find any quirky
(and preferably free!) IRs to load into your convolution reverb?
but experiment to find something you like. Secondly, they benefit from hav-
ing a reasonably consistent input signal. This is because you're going to set a
threshold so that when the signal dips below it, the reverb will be cut off or
gated. If the signal coming into the gate is inconsistent, then the gated effect
will also be inconsistent, which is probably not what you’re after. Options to
deal with this are to ensure that the level of the sound you’re applying the
gated verb to is even, either by compressing it or adjusting velocities if it’s a
drum part. Alternatively, you can compress the reverb itself before the gate so
that the reverb’s signal is more even before being gated.
Once your reverb gets to the gate, it’s just a matter of adjusting the threshold,
attack, and release as you would on a compressor. A medium attack will allow
the transient of the original sound source through before the reverb blooms.
But the critical feature is to have a quick release so that the reverb tail is cut off
dramatically. That will give you the classic gated reverb effect.
recording it. It's used a lot in horror, fantasy, and sci-fi, but you're probably
more familiar with it at the drop of some of your favourite dance tracks.
Finally, you've got non-linear reverb. A normal reverb tail is linear (in
actual fact, it's not; it's exponential, but let's keep things simple). Any reverb
that is altered to decay differently is therefore non-linear. The science behind
the sound is a bit involved, but they mostly all end up sounding much like a
gated verb, so let’s lump them together, shall we?
TASK – Study your DAW’s stock reverb plugins. Can you identify the
type, size, decay, and mix on all of them?
TASK – Review a previous mix. Look at how you’ve set up your re-
verbs. Are you using aux sends, inserts, or a combination of the two?
Are there more efficient ways to set them up?
The type of room you select should suit the music you're making. A
classical orchestra would benefit from a large hall, whilst a small room would
be best in a heavy metal track. Notice how I’ve explicitly referred to room
types here rather than reverbs as a whole. Reverbs are generally considered
in two categories: Acoustic spaces, such as halls, chambers, and rooms, and
mechanical effects, such as plate and spring. I find it helps to keep the two
categories separate from each other. Use the acoustic spaces to give things a
location and glue things together; use the mechanical reverbs as effects to add
interest and character.
TASK – Review some previous mixes. Can you fnd any instances
where you’ve used an acoustic space where a mechanical effect may
have been better suited? Or vice versa? Make the relevant adjustment.
the reflections begin to smudge together, making it impossible to tell the dif-
ference between sounds. This is an obvious sign that your decay is too long.
As you would imagine, the decay time and room size are intrinsically
linked in that you’d expect the decay time to increase as you increase
the room size and vice versa. Most plugins will do this for you. However,
you can force your reverb to oppose this traditional stance. It will sound
weird, but some fabulously creative sounds are to be had by experimenting
in this way.
There are a couple of common associations to be aware of: Longer reverbs
provide a 'heavy' ambience while shorter ones are 'tight'. Longer reverbs
also tend to be louder, contributing to the masking issues that come with them.
Try to think of your reverb as an additional rhythmic element in your mix.
If you want to avoid masking, then you should consider having it decay in
between phrases. On percussive elements especially, you may wish the reverb
on the snare to decay before the next kick. Or, on your vocal, you may want
the verb to decay before the entrance of the following phrase. Your decay
time will be affected by the track’s tempo and the part’s rhythm. There will
never be one size that fits all here. So, yes, use presets as a starting point, but
always dial them in to suit your track.
TASK – Explore the relationship between size and decay. Can you cre-
ate some unusual effects by forcing a large size with a short decay and
vice versa?
TASK – Explore different pre-delay times. Can you find the sweet
spot between separating the pre-delay from the transient without it
feeling disconnected? Bear in mind this will change depending upon
how transient-heavy the part is.
will provide warmer and smoother reverbs whilst less textured spaces will be
flatter and more reflective, therefore providing clearer and brighter reverb.
Diffusion is also closely linked to decay time. If you like the length of your
reverb but want to smooth it out, then increasing the diffusion amount is an
excellent place to look. If you find that your reverb is making your sound too
metallic or clangy, then diffusion is, again, an ideal place to look to curb this.
The first is from the point of view of reverb as an effect. If your reverb is
there to serve a specifc creative purpose in your track, then you’ll want to be
able to hear it in your mix. There’s no point in doing something creative that
sounds cool if it’s then buried under a heap of other things. So, as a rule, for
reverbs that are effects, ensure they can be heard.
The second angle concerns reverbs that indicate the location of the perfor-
mance. This is often where people get it wrong. Less experienced producers
like the sound of their ambient reverbs so much that they overemphasise
them in their mix. This has the effect of washing out the mix, muddying it up
and losing the definition of crucial components. My advice for getting your
ambient reverb level just right is this: Increase your send amount until it’s just
audible in the mix, and then back it off slightly. When you A/B your track
with and without the reverb send, you will notice its absence, but it won’t be
so apparent that it’s a distraction.
TASK – Revisit some mixes. Review the level of your reverbs. Are
your reverb effects audible? Are your ambience reverbs too loud?
These are the common issues to address.
space at the extremities of your stereo field. This can be helpful for double-
tracked electric guitars, for example. If you've got your rhythm guitar panned
hard left and right in your mix with both going to a stereo reverb, then the
reverb will sit right behind the guitars. If you narrow the reverb’s width, you’ll
effectively bring the reverb closer to the centre and leave the guitars out
wide, allowing them to maintain their punch and clarity.
TASK – Experiment with pre- and post-fader sends. Ensure you under-
stand how to change your sends to pre/post-fader, the difference between
the two, and what flexibilities they give you.
recommend that you be even more attentive to how your reverb interacts
with your other mix elements above everything else. This is because it has
the power to make or break your mix. Dialled in appropriately, it will enhance
every aspect of your mix. Done poorly, it will leave you with a muddy, washy,
undefined mess.
It’s all well and good to say that you should pay it close attention, but what
are you actually looking for? An obvious place to look is in the low end. By
now, you’re probably well used to high-passing things to keep the low end of
your mix tidy. Your reverb is no different. High-passing the reverbs will keep
the low frequencies clear and defined, keeping your track well grounded.
You may also look to low-pass. Reverbs can often have a lot of high-end
content. This can become too much if not kept in check, making your mix
bright and brittle-sounding. So low-passing or using a high shelf to cut upper
frequencies is an excellent place to look.
You also want to ensure you’re not overloading specifc frequency points
too much. For example, you may have a vocal with plenty of 3kHz in it,
which you like. But by sending this to your reverb, you end up overdoing this
frequency point. Consider notching some of this frequency out to keep that
area in check.
I love to use a dynamic EQ side-chained to something else on the reverb
send. Let’s work with the previous example. Rather than just scooping 3kHz
altogether on my reverb send, I can place a dynamic EQ after my reverb and
set the sidechain input on that 3kHz notch to the vocal. In this way, when
the lead vocal is present, that frequency will be notched out of the reverb but
will bloom after the vocal exits so that it can still be heard in the reverb tail.
don’t sonically make sense. What you usually want from your reverb is for it
to be convincing, so sticking to one is a good way of achieving this.
I caveat this by saying that I don’t think the same applies to reverbs used
as effects such as plates and springs. To an extent, you can have as many
of these as you like, although clearly, the more you use, the trickier it will
become to manage how they all interact. In this instance, I refer you to yes-
terday’s tip about treating all reverbs as another instrument in your mix. Just
be sure to maintain awareness of how each reverb interacts with other mix
elements, and you should be fine.
But then, we haven’t considered pre-delay in this equation at all. You should
be considering your reverb length as a total of the pre-delay and the decay
time. To work this out, you can multiply the millisecond answer you got from
the previous equation by 0.0156 to give you your pre-delay time. For example,
with our small-room reverb of (60,000 ÷ 120) × 2 = 1,000ms, you can take the
1,000ms and multiply it by 0.0156 to give a pre-delay of 15.6ms.
Figure 5.3 Displays the equations required to work out pre-delay and decay
time.
Source: Image by Sam George.
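The arithmetic above can be sketched as a small helper (my own names; the 0.0156 factor is roughly 1/64 of the total length, and the decay is whatever remains once the pre-delay is subtracted):

```python
def reverb_times(bpm: float, beats: float = 2.0) -> tuple[float, float]:
    """Split a tempo-synced reverb length into pre-delay and decay.

    Total length = (60,000 / bpm) * beats milliseconds; pre-delay is
    0.0156 of that total, and the decay is the remainder."""
    total_ms = (60000.0 / bpm) * beats
    pre_delay_ms = total_ms * 0.0156
    decay_ms = total_ms - pre_delay_ms
    return pre_delay_ms, decay_ms

# The small-room example at 120 BPM: (60,000 / 120) * 2 = 1,000 ms total.
pre, decay = reverb_times(120)
print(round(pre, 1), round(decay, 1))  # → 15.6 984.4
```

In other words, of the 1,000ms total, about 15.6ms becomes pre-delay and roughly 984ms remains as decay time.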
• What reverb is, why you need it, and what it does
• All the different types of reverb: Hall, chamber, room, plate, spring, con-
volution, gated, and some honourable mentions, too
• Type, size, decay, mix, pre-delay, early reflections, and diffusion
• Aux sends vs inserts
• Setting size, decay, pre-delay, early reflections, diffusion level, and wet/dry
balance
Well done for making it through all of that. As you've probably figured out by
now, there are an awful lot of things to consider under every topic of music
production. This is why it takes so long to become an 'expert' (however you
judge that) in this field. It's all well and good talking about these things in black
and white in a book, but for them to make sense, you need to use them, explore
them, and experiment with them in your projects. So, go and implement all of
your newly acquired knowledge of reverb!
Checklist
• Do you fully understand what reverb is, what it does, and why you need it?
• Are you familiar with all the different reverb types?
• Are you confident in selecting an appropriate reverb type for your required
application?
• Are you confident in adjusting all the reverb parameters, including pre-delay,
early reflections, and diffusion?
• Do you understand the difference in setting up a reverb on an aux send vs
as an insert?
• Do you understand the difference between pre- and post-fader sends?
• Do you understand the importance of pre-delay in maintaining clarity?
• Are you confident in setting appropriate reverb levels in your mix?
• Are you confident in EQ'ing reverbs?
• Have you explored mono and stereo reverbs?
You should move on to the following chapter only once you can answer yes
to all these questions.
Further reading
1 Upton, E. (2013). Why the Epidaurus theatre has such amazing acoustics. [online] giz-
modo.com. Available at https://gizmodo.com/why-the-epidaurus-theatre-has-such-
amazing-acoustics-1484771888 [Accessed 9 Nov. 2022].
2 Jackets, T. (2021). Sabine’s formula and the birth of modern architectural acoustics. [online]
thermaxxjackets.com. Available at https://blog.thermaxxjackets.com/sabines-formula-
the-birth-of-modern-architectural-acoustics [Accessed 9 Nov. 2022].
3 Virostek, P. (2019). The quick and easy way to create impulse responses. [online]
creativefieldrecording.com. Available at www.creativefieldrecording.com/2014/03/19/
the-quick-easy-way-to-create-impulse-responses/ [Accessed 9 Nov. 2022].
4 McAllister, M. (2021). Haas effect: What is it and how it’s used. [online] produce-
likeapro.com. Available at https://producelikeapro.com/blog/haas-effect/ [Accessed 9
Nov. 2022].
5 Fuston, L. (2022). Comb filtering: What is it and why does it matter? [online] sweet-
water.com. Available at www.sweetwater.com/insync/what-is-it-comb-filtering/#:~:
text=Eliminate%20Comb%20Filtering-,What%20Is%20Comb%20
Filtering%3F,%2C%20up%20to%2015ms%E2%80%9320ms [Accessed 9 Nov. 2022].
Unit 6
Delay and modulation effects
DOI: 10.4324/9781003373049-6
108 Unit 6: Delay and modulation effects
with the primary difference between the two being whether or not the human
ear can discern the original and the delayed signal as two distinct sounds. But
the human ear is far more sensitive than that. We can divide delay times into
five categories.
There are different characteristics and delay types associated with each of
these lengths, which I’ll explore with you in this unit.
Tape delay can generally be described as lo-fi but warm, with feedback that
gets more and more distorted as it trails off. Tape isn’t a precise medium, so
you can expect to encounter pitch fluctuation and wobble. If you're after a
vintage, gritty delay, then tape delay is your answer.
conversion. Therefore, you can expect digital delays to sound clean, particu-
larly in comparison to tape and analogue delays.
These days, though, loads of digital-delay plugins aim to recreate tape and
bucket brigade delays. This makes them a sort of hybrid delay, bridging the
gap between the analogue and digital worlds.
One of the main advantages of digital delay is the ability to tweak mul-
tiple characteristic features. You can often adjust how coloured or transpar-
ent and long or short the delay is. You may even get MIDI control. Some
older digital-delay models don’t have the best A/D and D/A converters, but
modern ones generally offer 24-bit resolution. This is just something else
to bear in mind.
TASK – You certainly will have digital-delay options with your DAW.
Explore the settings you have available. Do you have feedback? Cross-
feed? Filters?
TASK – Create a slapback echo. Try with both a tape-delay plugin and
a digital delay. Which one’s characteristics do you prefer?
TASK – Often, with multitap delays, you can adjust multiple param-
eters per tap. In Logic’s Delay Designer, you can adjust each tap’s cutoff
frequency, resonance, transposition, panning, and level. See what you
can achieve in your DAW.
TASK – There are some great emulations of the Roland Space Echo
out there. Universal Audio, IK Multimedia, and Overloud all do one.
If dub is within your wheelhouse, I recommend picking one up. If not,
watch a few demonstrations on YouTube.
Chorus, therefore, affects the pitch and timing of your sound source. This
effect simulates multiple singers or instruments who would never perform
perfectly with each other but would be reasonably close.
Because your pitch is modulated in the additional voice(s) rather than the
original sound, it’s rarely at the same frequency as the original. This means
that constructive and destructive interference is minimal.
You can use chorus to wash out sounds, making them feel more ambient.
You can push the effect if you want it to sound prominent, but the result will
lose some presence in your mix. For this reason, it can be great on textural
layers and things that don’t need to be at the front of your mix.
TASK – Explore chorus within your DAW. The critical parameters are
rate, intensity/depth, and the number of voices.
TASK – A/B an audio sample with all three effects. Try to configure
the main parameters (rate and intensity) equally so you can compare
their sounds as fairly as possible. Do you have a favourite?
You can affect so many aspects of a sound with delay. You may focus on
changing its front-to-back position or its left-to-right. You may use your delay
to provide weight to something or to create ambience and depth. A slapback
delay will be indicative of specific genres. How you alter the timing of your
delay may reinforce or change the overall groove of your track.
When confronted with all these questions, the good news is that most de-
lay plugins will come armed with a range of presets. These are often an excel-
lent place to start. They will most likely be usefully named, indicating whether
a preset is most appropriate for a specific instrument, genre, or particular
effect. As delay plugins will often be tempo-synced to your project, you can
usually get away with calling up a suitable preset, blending the wet/dry and
leaving it. But as you grow in confidence, you will undoubtedly want to delve
deeper into the finer details so as to enhance your sound and sit things per-
fectly within your mix.
TASK – Spend some time browsing the presets you have within your
delay plugins. How are they presented? Are there any indications as to
genre, tempo, rate, or complexity?
TASK – Create a delay 'sense check' list and have it somewhere vis-
ible. On it, ask if you're looking to add ambience, add a sense of direc-
tion, evoke a specific style, or add emphasis. If you don't want any of
those things, you don't need delay.
TASK – I’ve just given you a bunch of exciting ways to automate your
delay. I know we haven’t covered automation yet but give these a go
now. They’re great fun!
Now consider a backing vocal. This vocal is already panned to one side of
your mix so as not to interfere with your lead vocal. You don’t want to put a
stereo delay on it as it doesn’t need to sound wide like the lead vocal. Instead,
you may use a mono delay. In this case, it is common practice to pan the
mono delay opposite the source signal in the stereo field. This will provide an
additional sense of width without detracting from anything else and will keep
your stereo image nicely balanced.
What about if your sound source is a stereo signal to begin with? In this
case, you should consider precisely what it is you want to achieve with your
delay. The sound is already present on your mix’s left and right sides, so a
stereo delay is probably less effective. On the other hand, maybe a mono
slapback delay is what you’re after, or perhaps a ping-pong delay that will
bounce from side to side. Just remember to have an intention, and you’ll be fine.
TASK – Review the delay in a mix. Have you employed delays that
run throughout the whole track? Could you automate these on/off in
specific sections to provide width in certain places?
could be just the hook that you will want to focus on. Maybe there are two or
three significant melodic moments that you want to enhance. Whatever you
add value to, by being selective rather than painting whole passages with the
same brush, you will undoubtedly create more exciting and successful mixes.
You can apply this same selective process to any aspect of your mix. It
could be a singular snare hit, a bend in the guitar solo you want to bring out,
a bass slide or swoop, a horn stab, or a keys glissando. The applications are
almost limitless.
TASK – Can you find the most important parts of your song and add
value by emphasising them with a delay? This style of delay can be
more prominent than a generic delay that is used more as a textural
device. Delay for emphasis should be heard clearly.
There has been a lot of information in this unit. The best way to become
familiar with all these delay types and techniques is to go and experiment
with them one at a time. Only by spending time with them will you be able
to internalise how they sound and therefore be able to select and employ
them with ease.
Unit 6: Delay and modulation effects
Checklist
• Do you know the five delay time categories?
• Are you familiar with tape, analogue, and digital delays and their subtypes?
• Have you explored and understood the differences between the modulated
delay types (chorus, flanger, phaser)?
• Have you explored both tempo-synced and off-grid delay times?
• Have you reaffirmed the importance of having intention and being
selective?
• Have you worked through all my top tips to ensure you understand how
to process and apply delays in various contrasting contexts?
You should move on to the following chapter only once you can answer yes
to all these questions.
Unit 7
Automation
DOI: 10.4324/9781003373049-7
automation won’t be played back if this mode is not engaged. Always remember
to change your mode back to Read once you have written in any new automa-
tion. This will ensure that you don’t accidentally overwrite this information
and that what you’ve written in will be performed and therefore heard.
The second mode is Write. This is also straightforward. Any parameter
changes you make to any plugin whilst the track is playing will be written
into the automation lane in Write mode. This is your first way of capturing a
performance. Just remember to change the mode back to Read when you’re
done, and you’ll be fne.
Mode number three is Touch. Touch mode is handy for making minor
changes. It’s helpful because when you stop moving any parameter, it will
snap back to where it was before you began your alteration. So, you can make
minor adjustments on the fy, making momentary tweaks that will revert to
their original state when you let go.
The opposite of this is Latch mode. Latch means that the automation will
latch onto the final position once you make your adjustments and remain there.
This is great for creating more significant, long-term changes to your track.
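As a toy model (mine, not any DAW's actual implementation), the behavioural difference between Touch and Latch can be sketched like this:

```python
class AutomatedParam:
    """Toy model of Touch vs Latch automation behaviour.

    In Touch mode the parameter snaps back to its original value when
    you let go; in Latch mode it stays wherever you left it.
    """

    def __init__(self, value, mode="touch"):
        self.original = value
        self.value = value
        self.mode = mode
        self.holding = False

    def grab(self):
        """Start touching the control (mouse down / fader touch)."""
        self.holding = True

    def move(self, new_value):
        """Moves only register while the control is held."""
        if self.holding:
            self.value = new_value

    def release(self):
        """Let go of the control."""
        self.holding = False
        if self.mode == "touch":
            self.value = self.original  # snap back to the pre-touch value
        # in latch mode the value simply stays where it was left
```

A Touch-mode parameter moved from 0.5 to 0.9 returns to 0.5 on release; the same move in Latch mode leaves it at 0.9.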
The other major thing to point out is that most DAWs will allow you to
write automation directly into a region or onto a track. This will affect how
the automation is stored, processed, and accessed, so it’s worth considering
how you want to use it long-term before you write it in.
TASK – Find how to turn automation modes on and off in your DAW.
Identify what different automation modes you have available to you.
Also, discover if your DAW will allow you to store automation by re-
gion as well as by track.
the way through, and therefore relax. However, when your machine sees a
track with automation present, it knows that it needs to watch that track the
whole way through in case anything changes at any point. In other words, a
track with automation needs to be constantly monitored. The more chan-
nels you have with live automation, the more likely you will encounter the
dreaded system overload.
A good remedy here is to bounce your tracks out. Different DAWs facili-
tate this in different ways. Freezing or rendering amounts to the same thing,
too; they’re variations on the theme. The priority here is to commit an auto-
mation move into the audio region so that it doesn’t need to be processed in
real time. Your focus here should be to commit to any software instrument pa-
rameter changes you’re making. Running any software instrument, especially
a sample-based one, will be demanding on your CPU – to have moving parts
within that instrument even more so. It’s good practice to mix with 100%
audio and 0% MIDI anyway.
Now you’re probably thinking, ‘That’s all well and good, but what if I need
to go back and change a part or tweak a sound?’ That’s not an issue. Once
you’ve bounced the track out to audio (committing to the automation moves
in the process), go back to the MIDI track, turn off the software instrument
and any other plugins on the channel, freeze it, mute it, and hide it. It’ll all
still be there if you need to go back and make changes, but it won’t be
clogging up your system in the meantime.
This stage is all about solving obvious issues; things that are detracting
from the overall success of your mix. Don’t get hung up on looking for crea-
tive nuggets of ear candy at this stage. That will come later. Focus on iden-
tifying things that noticeably detract from your mix. If it’s catching your ear
negatively, then it’s likely to be catching a listener’s ear in the same way.
TASK – Review a previous mix. Can you identify any problems that
could be addressed with automation? Fix them!
favourite part of a cake, too much of it will throw out the balance, making
the whole thing too sickly sweet. Whilst there are no limitations, you want
to avoid detracting from the song at all costs. Yes, this is your chance to
show your competency as a producer, but you can demonstrate competency
through restraint. You should look to add just enough flair and complexity to
make your mix shine without making people’s stomachs turn.
One way to ensure you don’t overdo creative automation is to consider it
subtractively rather than additively. This means that when looking to crea-
tively draw the listener’s ear for a moment, you do it not by adding something
that steals the focus but by taking something away. By subtracting something
from your mix, perhaps by automating the level or send amount or filtering
aggressively, you’ll find that you create space in your mix for something else
to shine through. So, rather than forcing the attention onto something by
adding in level, you can deflect attention by creating space. To me, this is
a more intelligent, subtle, and altogether more experienced way of creatively
automating that demonstrates knowledge and experience.
The single exception to this rule is if you need to create automation for
sound-design purposes. If you need to automate a particular synth parameter
or filter to help you build a clearer picture of how your track will sound, then,
by all means, do this during your arrangement stage.
The clear justification for saving automation until post-mixdown is this:
An automation move needs its final context before you can judge whether
it’s working in your track. There’s no point automating a level change
or effects-send change until you see what it will be competing with at that
moment. Otherwise, you’ll be guessing. Even things like automating cutoff
frequencies and filters need context. Whilst you may want to lay down
markers in your project to note where you want certain things to move later,
I would recommend keeping a list of automation moves you know you want to
make in your project notes so that they can be dialled in with accuracy at
the right time.
You can apply this principle to anything: Perhaps multiple mikes on a guitar
amp, a clean and distorted bass channel, or the amount of double-tracked
vocal sitting underneath your lead.
But this is only 50% of the conversation at this point. While considering
the overall level of musical sounds, you should also consider your effects.
Treating effects as another instrumental component will help you truly em-
bed them into the mix. You may wish to bring up the level of your vocal
reverb or delay in the verse to make it feel more reflective but reduce it in the
chorus to get that strong, direct sound. You may have some flanger or phaser
on your guitar but decide that it’s more appropriate just to have it featured in
the bridge of your track.
You can create so much interest in your mix just by automating the levels
of instruments, vocals, and effects. It’s worth spending a reasonable amount
of time experimenting just with level automation to explore its capabilities
thoroughly.
What could you do with your vocals? You could use a short stereo delay
to create width in your choruses on a single lead vocal. If you have a double
and triple-track, you could use just the double-track to support the lead in
the verse but use both in the chorus, moving them left and right of the lead
to create width. You could use mono backing vocals in your verses placed in
specific locations in your stereo field but use stereo backing vocals panned
hard left and right in pairs in your choruses.
There is so much that you can do to create contrast in your stereo im-
age. However, the caveat is always not to get carried away. With so many
options available, you can effortlessly throw automation at almost anything
that crosses paths with your cursor. Allow yourself to slow down, ensure you
ask yourself the critical question: ‘What am I trying to achieve here?’ and you
should be fne.
TASK – Today’s task is a reflection. Think about any time you could
hear an issue in your mix but couldn’t locate it. Think about how frus-
trating that was. We’ve all been there! Now, use that frustration as the
catalyst to stay in control of your mixes moving forward.
your verse. This is because the bed in which it lies is different. There will be
changes in instrumentation, texture, dynamics, etc., all of which will affect
how your vocal comes across.
You can automate any auxiliary send amount. I thought it would be help-
ful to focus your mind on the most common aux sends you’re likely to need
to automate.
First is reverb. For precisely the reasons I have just mentioned, you
should be automating your reverb send amounts. Start on the most promi-
nent aspects of your mix. Your reverb send level will be far more audible on
your lead vocal than on a background pad. Once you’ve worked through
the main focal points in your mix, you may find that’s enough. The need
for automation on effects sends becomes less the further back elements are
in your mix.
The same principle should be applied to your delay sends, but with a slight
twist: Your delay amount is more likely to need to be looked at on a phrase-
by-phrase, or even word-by-word, basis. This is because, depending on the
length of the phrase you’re working with, the space you’ll have in between for
the delay to shine through will differ. Therefore, you’re more likely to want to
automate your delay on a smaller scale.
The other two leading players here are parallel compression and paral-
lel saturation. Saturation is a topic I’ll talk more about later in the course,
so don’t worry if you’re currently totally baffled by the term. In simple
terms, it means distortion. Contrary to popular belief, not all distortion
is bad. A bit of distortion is an excellent thing. To put things as simply
as possible, when you saturate something, you’re adding additional frequency
content, which will fill out and round out the sound. So, this is
an excellent device for making something more prominent at a particular
point in your track.
Parallel compression works in a not-too-dissimilar fashion. It’s a bril-
liant tool for helping parts stand out in a mix without removing the dy-
namic range and musicality from the performance. This is because you’re
blending a heavily compressed signal back in underneath the natural
performance that still has all the undulations and fluctuations in it. Pushing
your parallel compression level on, for example, your lead vocal during
the chorus, or your lead guitar in the guitar solo, can help it cut through. But
you certainly wouldn’t want that amount of parallel compression present
throughout.
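As a rough illustration of the blend (a deliberately crude sketch of my own, not how any real compressor is implemented), notice how the compressed copy lifts quiet material proportionally more than loud material, which is exactly why the dry signal's dynamics survive:

```python
import math

def compress(samples, threshold=0.3, ratio=8.0):
    """Crude peak compressor: gain-reduce anything over the threshold."""
    out = []
    for s in samples:
        a = abs(s)
        if a > threshold:
            a = threshold + (a - threshold) / ratio
        out.append(math.copysign(a, s))
    return out

def parallel_blend(dry, wet_gain=0.4, **kw):
    """Blend a heavily compressed copy underneath the untouched signal,
    keeping the dry dynamics intact."""
    wet = compress(dry, **kw)
    return [d + wet_gain * w for d, w in zip(dry, wet)]
```

With a quiet sample at 0.1 and a loud one at 0.9, the quiet one gains relatively more level from the blend than the loud one does.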
TASK – Review a previous mix. Have you automated any aux send
levels? If not, use this opportunity to do so.
140 Unit 7: Automation
places to look to add interest. As a quick disclaimer, if I use terms you don’t
understand here, don’t fret. I’ll be covering the fundamentals of synthesis
later in this course.
First, probably the most common parameter to automate is the filter cutoff.
In most synths, the filter type is changeable, so you can make
it a low-pass, high-pass, or even band-pass filter. Therefore, it’s excellent
for making a point of difference between sections and for transitional
effects.
Next is your oscillator blend. Generally, synths will come with at least
two oscillators. Each oscillator can create an entirely individual sound. The
oscillator blend allows you to mix different amounts of the two. So, you
could have more A and less B in the verse and vice versa in the chorus, for
example.
Thirdly, I’d look for the attack and release time in your envelopes.
Particularly regarding the amp envelope, adjusting the attack and release
times will allow you to create more or less clarity at the front and back
end of your synth lines. This will allow you to push parts backwards and
forwards in your mix without needing to reach for things like reverb or
volume automation.
Another great thing to automate is the resonance. This puts a frequency
bump just before your cutoff frequency. Adding a resonance boost will help
your synth to stick out. This is useful if you want to pull your synth forward
for a few moments before pushing it back into your mix.
I also like to automate the pitch of my oscillators, particularly the cents
value. Increasing or decreasing the cents of one oscillator will create more of
a chorusing effect with the other. This is great for thickening your sound. Just
don’t go so far that you make it sound out of tune (unless that’s what you’re
after).
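The arithmetic behind cents is worth internalising: 100 cents is a semitone, 1200 cents is an octave, and the frequency ratio for any detune is 2 raised to (cents/1200). A quick sketch (standard equal-temperament maths; the function names are my own):

```python
def detune_ratio(cents: float) -> float:
    """Frequency ratio for a detune in cents: 100 cents = 1 semitone,
    1200 cents = 1 octave (a doubling of frequency)."""
    return 2.0 ** (cents / 1200.0)

def beat_rate(freq_hz: float, cents: float) -> float:
    """Approximate beating rate (in Hz) between an oscillator and a
    copy of it detuned by `cents` -- the chorusing effect you hear."""
    return abs(freq_hz * detune_ratio(cents) - freq_hz)
```

Detuning one oscillator by around 10 cents against another playing A440 produces a slow beat of roughly 2.5 Hz: a gentle thickening rather than audible out-of-tuneness.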
There are loads of other creative ways to automate your synths. These are
just a few to get you going.
You then have cymbals: Hi-hats and a ride (used to keep time in the groove,
typically with eighth or sixteenth notes), and one or two crashes (to accent
important beats). A drummer has four limbs, so they can play a maximum of
four of these eight or nine components simultaneously. So, if a drummer can
play half of the kit simultaneously, how do you create a point of difference for
a drum fill to make it stand out?
The obvious place you must start is the arrangement, of course. Carefully
arranging what instruments are used with what rhythms at specific times will
add importance to your fills. Then there’s the performance, which will make
or break how the fill projects in the mix. But once you get the performance
into a densely populated mix with other things going on around it, you may
find that your fills lack weight.
The simplest way to solve this is to automate your drum fills to add the
required amount of weight. As with most things, you have options available
to you here. You can reach straight for the overall level. This will work
fine. You could reach for your parallel compression amount. This will be
slightly more subtle but perhaps more tasteful and less noticeable. If you’re
working with acoustic drums, you could be a bit more creative and bring
out your fills by increasing your room mikes. You could even increase your
saturation level.
As you’ve probably worked out by now, there’s more than one way to skin
a cat, as the expression goes. The takeaway point is that automation is your
friend when adding weight and importance to your drum fills.
Day 20 – Tidy up
I absolutely love using chokes in my writing. A choke is where all musical
content stops simultaneously. Whether it’s a choke before a heavy drop or a
choke to end a track, I think both sound great. I’m probably guilty of using
them too frequently. But all too often, I hear them used poorly in amateur
productions. There are two possible ways to do a choke badly.
The first way is not to do anything at all. This means that whilst you’ve
written a choke into the arrangement with all instruments performing it, you
haven’t produced it into the mix. This will inevitably leave you with multi-
ple tails ringing through what you want to be a choked space. You should be
looking out for cymbal decays, guitar tails, synth release times, etc. – any po-
tential sound that could be ringing. You can tidy up these things with volume
automation. You should also look for reverb and decay tails that ring through.
Generally, rather than choking the level of these time-based effects to zero,
you’ll want them to duck but still be audible to keep a natural feel.
This leads nicely onto the second way people get it wrong: They overpro-
duce a choke. You can do this by reducing the level of time-based effects to
zero, effectively creating a complete silence in the track, which will stick out
like a sore thumb. But you can also get it wrong by automating your instru-
ment levels too early. Typically, this happens when trying to produce a choke
into a poorly performed part. This results in cutting off parts before their
natural break. You want to look for the moment just after the drummer grabs
the cymbals to stop them ringing, just after the guitarist mutes the strings,
and so on. Look for the natural break in the track. A tight performance is
critical to ensure you can make the most of the choke in the mix.
TASK – Tidy up the chokes in your track. If you haven’t got a project
that contains a choke, write one into something new!
TASK – Manually create some throws. You can try it on a vocal or any
other instrumental part to which you wish to add importance.
Day 23 – Automate EQ
As you know already, EQ is one of the Big Four when it comes to mixing. And
I’ve already mentioned it in the context of automation with my references to
filtering. But there’s much more that you can do with it. Let’s look at some
examples.
Think about your classic low-end conundrum. You feel you’ve balanced
your kick and bass well, and they’re working throughout 90% of your track.
But there’s this one moment where you want the low end to feel even thicker.
The best way to achieve this is by automating an EQ band for that specific
point in your track.
Consider a lead guitar or lead synth part. For the most part, it’s sitting
well within your mix, complementing your vocal but not obstructing it. But
there are short windows of opportunity for it to shine through a little more.
Rather than automating the volume, which could feel too obvious, you could
automate a presence boost in the lead line to bring it forward a bit for those
specific moments in time.
Or, you’ve got a lovely breakdown in your track where you want to hear
the lead vocal reverb shimmer a bit more. Rather than automating the whole
send amount, you decide to boost the upper frequencies of your reverb with
a shelf.
Having read these examples, you may be thinking, ‘Can’t I achieve all of
that through dynamic side-chained EQ?’ The answer is yes, you could. You’ve
already realised that there is often more than one way to achieve the desired
result in the world of music production. How you choose to get from point A
to point B is a personal decision. My job here is simply to make you aware of
as many potential routes as possible. The journey is then yours to take.
change each note individually in your piano roll or step sequencer. Doing so
will give you maximum control. The shorthand method here is to automate
the velocity in your sampler. If you automate your velocity with an LFO
that oscillates against the timing of your track, you’ll get some very useable
velocity variation.
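Here's a sketch of that idea (hypothetical numbers of my own; your sampler's LFO will expose different controls). The trick is picking an LFO cycle length that doesn't divide evenly into the groove, so the pattern never repeats bar to bar:

```python
import math

def lfo_velocities(base=96, depth=12, steps=16, cycle=5):
    """Generate MIDI velocities humanised by a slow LFO.

    The LFO cycle length (5 steps here) deliberately doesn't line up
    with a 4- or 8-step groove, so it oscillates *against* the timing
    of the track and the velocity pattern drifts rather than repeats.
    MIDI velocity is clamped to the legal 1-127 range.
    """
    vels = []
    for n in range(steps):
        v = base + depth * math.sin(2 * math.pi * n / cycle)
        vels.append(max(1, min(127, round(v))))
    return vels
```

Sixteen hi-hat hits run through this come back with gently varying velocities around the base value instead of a robotic constant 96.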
The second place to look is at the pitch of the sample. Each time you hit
a real drum, the pitch will be very slightly different depending upon the posi-
tion on the drum and the velocity with which you strike it. By automating
your sample’s pitch, you can, to a certain extent, replicate this effect. You’ll
want to affect the fine rather than the coarse pitch, meaning that you’re
affecting cents rather than semitones. Again, an LFO here means you can set
it up and forget about it, rather than having to write in constant streams of
automation.
The last, less obvious place to look at is your sample’s release time. The
decay on a drum will often vary depending upon how you tune the batter and
resonant heads, where and how hard you strike it, the stick type, and so on.
Subtly automating the amp envelope’s release time will give you some of this
gentle variation that will take your production away from the amateur and
towards the experienced.
Of course, all these principles apply to cymbals as well, most importantly
hi-hats and rides. Because the frequency with which you strike these time-
keeping cymbals is so much higher than your drums, you can be more pro-
nounced with your velocity, pitch, and release variations.
sawtooth wave. The square wave will act as an on-off switch, whilst the saw-
tooth will gradually slow down with a quick switch back on.
You can experiment with track levels in this same tempo-synced way too.
The waveform that you choose will denote the overall feel. For example, a
backing vocal coming in and out on a sine wave will feel smooth, whereas a
pad that chops in and out on a square wave will feel more aggressive.
A creative way you can use this effect is on flters. Think about the sound
you’ll achieve by tempo-syncing the automation on a low-pass flter.
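Here is a sketch of those tempo-synced LFO shapes mapped onto a low-pass cutoff (toy functions of my own; the frequency range is illustrative). The square acts as the on-off switch described above, and the sawtooth falls gradually before switching straight back on:

```python
import math

def lfo(shape: str, beat_pos: float) -> float:
    """Value (0..1) of a tempo-synced LFO at a position measured in
    beats. One LFO cycle = one beat in this sketch."""
    p = beat_pos % 1.0
    if shape == "sine":          # smooth rise and fall
        return 0.5 + 0.5 * math.sin(2 * math.pi * p)
    if shape == "square":        # hard on-off switch
        return 1.0 if p < 0.5 else 0.0
    if shape == "saw":           # gradual fall, instant reset
        return 1.0 - p
    raise ValueError(shape)

def synced_cutoff(shape: str, beat_pos: float,
                  lo: float = 200.0, hi: float = 8000.0) -> float:
    """Map the LFO onto a low-pass cutoff frequency in Hz."""
    return lo + (hi - lo) * lfo(shape, beat_pos)
```

The same `lfo` output could just as easily drive a track level or a pan position, which is why the waveform you pick sets the overall feel.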
If you like the direction in which the effect is taking you, but it feels a bit
too much, remember you don’t need to go hard on it. If you’re auto-panning,
you don’t need to go from hard left to right. You could go just 50% either way.
If you’re automating track levels, you don’t need to go all the way off and
on. You can reduce by any amount that you feel complements your mix well.
Small details always add up, and three or four small changes will probably
sound better than one massive one.
In the same vein, you could gently automate your mix’s stereo width. Nar-
rowing your verses by 5% can make your choruses open out even more.
What about applying a high-pass filter to your verses? A high-pass filter at
35Hz that rolls off in your chorus to let that extra bit of low end through can
make all the difference to your drop.
The first caveat with all these things is that I advise you to do them once
you’ve finished the rest of your mix. It would be counterproductive to
automate your entire mix before having finalised individual components. The
second is that less is more here. You shouldn’t be looking to make changes on
your master buss that are clearly audible in your mix. They should be subtle
changes that are felt rather than obvious moves that are noticeable.
TASK – Implement some subtle mix buss automation. Try volume and
stereo width automation as a starting point.
TASK – If you have a song that may benefit from a new, more expressive
tempo map, use that. If not, create something new in order to
explore this topic.
• What automation is
• Automation modes: Read, write, touch, and latch
• Automation types: Fades, curves, binary, step, and spike
• Bouncing automation
• The three automation stages: Problem-solving, fow, and creative
If you weren’t sure just how deep the automation rabbit hole was before, you
certainly are now!
By this stage in the course, having had seven months to practise and
implement many of the lessons you’ve learned so far, you should be seeing
considerable changes in the overall quality of your mixes. Until now, I’ve focused
on working with the musical information that is within your session. I’m now
going to change course.
In the coming units, we’ll look at all things vocals and synthesis. I could
easily have covered these topics at the beginning of the course, but I felt it
was better to cover the mixing process in full first before distracting you with
capturing and creating sounds.
So, without further ado, let’s talk about vocals.
Checklist
• Do you understand all your DAW’s automation modes?
• Can you write automation into regions and tracks?
• Can you create exponential and logarithmic automation curves?
• Can you create binary, step, and spike automation?
• Are you familiar with bouncing MIDI to audio and the benefits of doing so?
• Are you familiar with the three stages of automation? (Problem-solve →
Flow → Create)
• Have you adjusted to automating only after your static mix is finalised?
• Have you explored the subtleties of each of the six uses of automation?
• Have you explored transitional effects?
• Have you explored automating synth parameters?
• Have you automated drum flls?
• Have you automated chokes?
Unit 8
Vocals
DOI: 10.4324/9781003373049-8
your takes, you’ll have conflicting tails, decays, and reflections all over the
place. Equally, don’t burn in any aggressive EQ or compression. Whilst you
may end up cutting or boosting a lot of frequencies, it’s better to have them all
there in your project and have the option of what to manipulate in the mix.
The rule for remembering which type you need for recording is simple:
Closed-back closes everything else out, and open-back lets in other sounds.
Therefore, for recording, you need closed-back headphones. You don’t want
the sound you’re sending to your singer to bleed from the headphones and
feed back down the vocalist’s microphone.
There is a wide range of closed-back headphones on the market that
cater to any budget. I’m not going to stick my neck out and say you should
buy this or that model, as products on the market change. What I will say
is do your research. Read reviews, read blogs, and watch comparisons from
knowledgeable people on YouTube. And then make a decision that works
for your budget.
I can hear you asking, ‘But if I need closed-back for tracking, do I need
open-back, too?’ The answer is, no, you don’t. If you can only afford one set
of headphones, ensure they’re closed-back. You can use these for tracking
and mixing. Theoretically, open-back is much better and more comfortable
for mixing as they provide a more natural, less isolated listening experience.
However, if you’re mixing in a space that isn’t made for that purpose and
therefore has ambient noise, then they’re not that helpful. In a noisy working
environment, you’ll want to be as isolated as you can. Open-back is only
beneficial if you’re working in a space that is treated well enough to notice
the benefit.
For the average amateur producer, one good pair of closed-back head-
phones is the way to go.1
large enough to cover your mike’s diaphragm. If your vocalist moves around
a lot when they sing, then a larger diameter is better too. Secondly, you can
purchase both nylon and metal filters. There are pros and cons to both. Nylon
ones are cheap and great for removing plosives, but they can sometimes
filter high frequencies and are easily damaged. Metal ones have wider holes,
so they don’t filter the sound, and they are reasonably durable, but it’s easy to
bend the metal sheet if you’re not careful, and they can develop a whistling
sound over time.
Once you’ve got your pop shield, you need to know how to set it up for
optimum performance. The general rule is to position the shield three inches
away from the microphone and position the singer a further three inches
behind this.
TASK – If you don’t own a pop shield, buy or make yourself one.
TASK – Research some DSPs. The market is changing all the time.
I would recommend checking out UAD and Antelope Audio, to
begin with.
mike. This low-cut switch will help reduce the proximity effect. The proximity ef-
fect is a phenomenon that causes an increase in low frequencies as you move the
mike closer to the singer. The closer you get, the more of a bass boost you’ll get.
On the face of it, this may sound like a bad thing, but it has pros and cons.
The proximity effect is what allows radio DJs to have that stereotypically thick,
rich tone of voice. It’s also allowed many singers through the years to enrich
their voices beyond the realms of realism. Conversely, it can easily make things
less intelligible and muddy and make vocals get in the way of bass instruments.
So yes, you can use it to fatten things up, but you need to be careful.
Using your low-cut switch will help you avoid some EQ’ing later. With
that said, there’s nothing stopping you from applying this high-pass filter in
your DAW post-recording. The benefit of doing it this way round is that you
can adjust the filter frequency and slope. There isn’t a right or wrong way to
do something, just personal preference, as with most things.2
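For illustration, the gentlest possible version of that DAW-side filter is a one-pole high-pass (a standard textbook DSP form, 6 dB per octave; real DAW filters offer steeper, adjustable slopes, which is exactly the flexibility mentioned above):

```python
import math

def high_pass(samples, sr=44100.0, cutoff=100.0):
    """One-pole (6 dB/octave) high-pass filter: a minimal stand-in for
    the low-cut you'd apply in your DAW to tame the proximity effect.
    Low-frequency content (including DC) is progressively removed."""
    rc = 1.0 / (2.0 * math.pi * cutoff)   # filter time constant
    dt = 1.0 / sr                         # sample period
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        # classic recurrence: pass through changes, bleed away the rest
        out.append(alpha * (out[-1] + samples[n] - samples[n - 1]))
    return out
```

Feed it a constant (pure low-frequency) signal and the output decays towards zero, which is precisely the bass-reduction behaviour you want from a low-cut.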
This concept isn’t unique to amateur vocalists. The same has been used
countless times with professionals, too. The beauty of working in the digital
domain is that your computer will come with a large amount of storage space,
and you can buy additional space relatively inexpensively. You can also cre-
ate as many tracks as you like within your session; you’re not limited by the
number of channels on your desk.
One of the most credible skills to master as a producer is to be able to
identify good bits of takes in real time. It could be a line from warm-up A, two
lines from take two, a phrase from warm-up B, and that long note from take
five. It’s good practice to get your singer to provide you with a printed copy of
the lyrics for your session. You can use these to mark up good bits of various
takes as you go. Doing this will save you from having to listen back through
everything afterwards. You can just home in on the bits you already know you
liked the sound of. Your singer is also likely to be impressed by this. Time is
money. The less time you spend trawling back through takes, the more time
you have to produce the track.
Something I’ll also generally track is a lower and higher octave of the lead.
This is assuming that your singer can reach the notes. They don’t need to be
performed at the same dynamic as the lead, so long as there’s still conviction.
But they’re great to have to layer into your mix subtly. Almost not being able
to hear them is usually perfect.
After these layers, gather whatever harmonies you and the singer want to
put down. Generally, the more the merrier. It’s much better to have lots of
options and creatively select what you want to use in your mix. I recommend
getting at least two versions of each line. You can then hard pan these in op-
position, creating wide backing vocals.
Finally, your vocalist may have specific ad-libs they wish to interject.
Usually, they’ll have loads of ideas and will want to put them all over the place.
That’s fne, let them! Again, you can then select the few that will make the
cut in the mix. But the more options you have, the better.
TASK – Get used to capturing all the vocal layers: Double track, triple
track, low and high octave, harmonies, and ad-libs. Focus on keeping
your session well organised, with each part on its own track and every-
thing labelled appropriately.
Working this way keeps me 100% in control of what I hear and what I don’t.
Most DAWs will have a function called ‘strip silence’, ‘remove silence’, or
something similar. This function is a sort of middle ground between the noise
gate and the manual method, where you can automatically cut out silence from
your region based upon a threshold and length of silence that you specify.
Another option if you’re set on using a gate is to use the reduction func-
tion. This will usually be labelled as a percentage. It means that once the
signal dips below your threshold, it will be reduced in level by the percentage
you specify. In this way, you can avoid completely cutting stuff out.
TASK – Explore noise gates on your recorded vocal. Most likely, your
DAW will have one as stock. See if you can dial in the sweet spot so
the gate opens and closes naturally between sung phrases.
TASK – Find out how to, and practise, comping in your DAW. All
DAWs will allow you to comp in one way or another. You’ll want
to find some content online that is specific to your DAW to learn
this.
TASK – Learn how to adjust timing and tuning in your DAW. Some
DAWs have excellent timing and tuning functionality built in. Others
don’t. In which case, you may wish to consider investing in a plugin to
assist with this process. Do your research and make an informed decision.
Day 15 – Subtractive EQ
Once you have your vocal perfectly edited, it’s time to begin working it into
your mix. The usual starting point is subtractive EQ. You should be listening
to the tonal qualities of your vocal here, specifically aiming to tame or
remove any problem frequencies.
A typical starting point is to remove excessive low-end build-up. The
human voice doesn’t have much valuable frequency content below 80–100Hz,
so you can safely roll this off with a high-pass filter. This is more likely to be
necessary for male voices than female ones, as the proximity effect tends to
be more of a factor with lower voices. The exact frequency at which you set
your filter will depend on the singer’s voice and the pitch at which they are
singing. It’s common to select one cutoff frequency and apply it across all
your vocal channels, but this is lazy. There will be more low-end content in a
lower passage, so a lower cutoff frequency may be more appropriate.
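If it helps to see why this move is safe, here is the response of an idealised first-order (6dB/octave) high-pass filter as a short Python sketch. It is for intuition only, not a model of any particular EQ plugin, and the 80Hz cutoff is just an example setting:

```python
import math

def hpf_gain_db(freq_hz, cutoff_hz):
    """Gain, in dB, of an idealised first-order (6 dB/octave)
    high-pass filter. For intuition only -- real EQ plugins offer
    steeper slopes and different curve shapes."""
    r = freq_hz / cutoff_hz
    magnitude = r / math.sqrt(1.0 + r * r)
    return 20.0 * math.log10(magnitude)

# With an example 80 Hz cutoff: rumble is attenuated hard while
# the body of the voice passes almost untouched.
print(round(hpf_gain_db(40, 80), 1))    # an octave below the cutoff: -7.0
print(round(hpf_gain_db(80, 80), 1))    # at the cutoff itself: -3.0
print(round(hpf_gain_db(1000, 80), 2))  # vocal body: -0.03, inaudible
```

Note how gentle a first-order slope is; steeper slopes (12, 24dB/octave) reject the rumble harder, which is why adjustable slope in the DAW is useful.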
You should also look out for any harsh frequencies. Some people will advise
you to grab a narrow band, boost it, sweep it until you hear a frequency
that sticks out more than the rest, and then cut this out. I find this approach
heavy-handed, and it can leave you with a soulless vocal if you’re not careful.
If you’ve taken the care to get your recording right in the first place, you
shouldn’t need to cut harsh frequencies too aggressively. Instead, you can still
grab a narrow band and sweep it until you hit a pokey frequency, but then use a
dynamic band to attenuate it gently rather than cutting it completely. This
means the frequency is only acted upon when it exceeds that band’s
threshold, which is much more pleasant and far less intrusive.
As a reminder, for subtractive EQ it’s good practice to use a transparent
equaliser. Your DAW’s stock EQ will likely fit the bill, although it may
not have dynamic bands. Options that I turn to are FabFilter’s Pro-Q 3, Slate
Digital’s Infinity EQ, and Waves’ F6.
As a final reminder, use narrow Qs when you’re cutting frequencies. You
only want to cut the offending frequency, not those around it.
• For pop, RnB, and most electronic music, you will be heavily processing
the vocal. You’ll want lots of top-end shimmer, clearly audible processing,
and a consistent dynamic throughout.
• Hip-hop is like pop, but with less top-end shine and fewer effects. You’ll
want more aggression and presence in the upper mids and a heavy low end
to provide more power.
• For rock music, you’ve got less top end but more high mids and body. Your
vocal will generally sit a little deeper in the mix too.
• In jazz, subtlety is the name of the game. You won’t want any noticeable
processing and will want to keep all your dynamics intact.
• Metal and hardcore use a lot of heavy compression, which helps achieve
the distinctive, aggressive tone. It’ll have less low end, more body and
high mids.
Only by thinking about the sort of music you’re making at this stage will
you be able to make solid decisions about how you will process your vocals.
This also requires you to know a bit about the stylistic features a listener will
expect from the music you’re producing. The more information you’re armed
with, the more likely you are to get the production just right.
Day 17 – Tone-shaping EQ
Now that you’re in the right state of mind for the style you’re
working with, you’re ready to start shaping the tone of your vocal. I’ve
deliberately labelled this tone-shaping rather than additive EQ because
shaping the tone of your vocal doesn’t just mean enhancing the things you
want to hear more of. As yesterday’s tip implied, this stage may also call for
some subtractive EQ. The main difference between the subtractive EQ here
and before is that you’re not looking to remove unpleasant things this time.
Think of yourself
Unit 8: Vocals 165
now as a sculptor working with a beautiful piece of wood. The wood grain
is gorgeous throughout, but that doesn’t mean you need to use it all. You’re
still going to carve away parts of it to enhance the bits to which you want to
draw attention.
Therefore, when making tone-shaping EQ moves, you’ll want to apply
broader brush strokes. This means using a wider Q. Keeping your bandwidth
wide will maintain the natural qualities of your vocal. In Unit 3 we talked
about using analogue EQs for tone-shaping. On some analogue EQ models,
you won’t even find a Q control to utilise. The Neve 1073 and the API 560
are great examples of this.
On the other hand, SSL models do have bandwidth controls. What you
use is down to personal preference and the sort of tone that you’re looking
to get out of your analogue emulation. Some professional engineers will al-
ways use the same analogue EQs, regardless of the genre within which they’re
working. This may be because that’s the console that they have in their stu-
dio, or it may be that they just feel comfortable working with that model.
There aren’t many rights or wrongs in the music-production world, as you
know by now. You can often plot many different routes to reach the same
location.
Day 18 – Compression
As previously mentioned, the style of compression you use and how much of
it you apply will be completely different, dependent upon the genre. Jazz and
hardcore are at opposite ends of the spectrum. What I’ll do here is take you
through some standard techniques. How much of this guidance you choose
to apply is up to you.
You already know that different compressors have contrasting tonal char-
acters. What you choose to use will depend on how you want it to affect your
vocal. Using a 1176 or SSL-style compressor first in your chain is common.
A good starting point is to dial in 3–6dB of gain reduction. You don’t want
the compressor to be working all the time. It should just be engaging on the
loudest peaks.
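To make that 3–6dB figure concrete, here is the static level maths of a downward compressor as a Python sketch. The threshold and ratio below are hypothetical example settings, not a recommendation, and attack and release behaviour is deliberately ignored:

```python
def gain_reduction_db(input_db, threshold_db, ratio):
    """Static compression curve: how many dB of gain reduction a given
    peak level receives. Attack and release are ignored -- this is
    only the level maths, with hypothetical example settings below."""
    if input_db <= threshold_db:
        return 0.0                        # below threshold: untouched
    over = input_db - threshold_db
    output_db = threshold_db + over / ratio
    return input_db - output_db

# A 4:1 ratio with a -10 dBFS threshold: a -4 dBFS peak is 6 dB over
# the threshold, so it comes out 6/4 = 1.5 dB over, i.e. 4.5 dB of
# reduction -- while quieter phrases pass through untouched.
print(gain_reduction_db(-4, -10, 4))   # 4.5
print(gain_reduction_db(-12, -10, 4))  # 0.0
```

This is exactly the behaviour described above: the needle only moves on the loudest peaks, and everything below the threshold is left alone.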
The attack and release times are critical. To begin with, dial in the slowest
attack and fastest release times. Shorten the attack gradually until you start to
shave the transient off the vocal, and then back off. Then lengthen the release
until your compressor is breathing in time with your track.
You don’t want to be compressing all your vocal signal at this stage. It’s
OK for your needle to return to zero before engaging on the next transient
peak. You don’t want to squeeze the breath out of it, but you do want to add
energy and power.
Whilst you’re learning to hear this, I would advise using a compressor
with a visible needle that will bounce. A 1176 gives you this. Just be careful:
if your 1176 plugin faithfully models the original hardware, the
attack and release dials will work backwards. So, when you turn the dials up
to 7, this is the fastest time, and as you back off towards 1, they get slower.
This is just a quirk of the original outboard gear that most manufacturers
choose to emulate.
compression means the sibilance will have become more evident due to the
levelling out of the dynamics in the performance, which should make it
easier to identify and target.
A de-esser is simply a compressor that targets a specific frequency range. A
good one will have a monitor feature built in. This monitor feature will allow
you to listen to the affected frequencies only, meaning you can dial in just the
right frequency range and reduction amount. Every manufacturer makes one,
but the one that comes as stock in your DAW is probably OK too.
The vital thing with de-essing is not to be heavy-handed. Going too hard
with it will leave you with a vocalist that sounds as if they have a lisp. You
don’t want to eliminate the ‘s’ and ‘t’ sounds, as this would leave you with an
inarticulate, unnatural-sounding performance. Be gentle and ensure that the
vocalist sounds human.
TASK – Practise de-essing without going too far. Put the de-esser in
different positions in your signal chain. What difference does this
make?
Saturating a vocal can make it feel brighter and more exciting. It can help
it hold its own more within your mix and make it feel thicker. How you apply
the saturation and with what is up to you. You can place it in series or parallel,
as subtly or aggressively as you wish.
clear. Think about them creatively rather than trying to follow a predefined
set of rules for your BVs.
Here are some informative questions that you should ask yourself:
By identifying the sort of BV you have in front of you, you’ll be able to make
a more informed decision about where it should be panned within your mix.
As a rule, the more fundamental the vocal is, the closer to the centre of
your mix you should have it. For example, having a countermelody panned
hard out to the left would sound odd.
However, double-tracked BVs are different beasts. You can pan these hard
left and right in opposition to provide a wide feel to your mix. Adjusting the
width of these elements throughout your mix will create contrast.
In general, automating the position of BVs is a creative way to make your
mix more interesting. Having one thing static in your track from start to
finish can get stagnant after a while. Consider moving things around subtly to
keep things fresh.
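Hard panning and automated positions all come down to a pan law. As an illustration, here is one common law (equal-power panning) sketched in Python; DAWs differ on the exact curve they use, so treat this as a sketch rather than a description of how your DAW necessarily behaves:

```python
import math

def pan_gains(position):
    """Equal-power pan law: position runs from -1.0 (hard left)
    through 0.0 (centre) to +1.0 (hard right). This is one common
    law of several -- DAWs differ on the exact curve."""
    angle = (position + 1.0) * math.pi / 4.0    # map -1..+1 to 0..pi/2
    return math.cos(angle), math.sin(angle)     # (left gain, right gain)

left, right = pan_gains(0.0)                    # centre: equal in both channels
print(round(left, 3), round(right, 3))          # 0.707 0.707
print(pan_gains(-1.0))                          # hard left: (1.0, 0.0)
```

The point of the cosine/sine pairing is that the combined power stays constant as a sound moves across the field, so an automated BV doesn’t get louder or quieter as it travels.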
TASK – Revisit backing vocals in a few mixes. Ask yourself the previ-
ous four questions and see if their positioning is appropriate. Make
relevant adjustments.
The critical question is, why would you want to do this? And the answer is
simple: It will allow you to layer other instrumental sounds underneath your
vocal to support it. Maybe you want a synth doubling your lead vocal an octave
higher; perhaps you want to bolster your BVs with some synth choir sounds
to thicken them up. There are lots of creative ways that you can use this tool.
The other thing that this is good for is creating sheet music. By convert-
ing your audio to MIDI, you can then use it as your starting point for creating
a score. Professional programs such as Sibelius and Nuendo can successfully
import MIDI. Doing your tidying up in your DAW first and then transferring
the MIDI over will save you a lot of time. This is a pretty niche scenario that
won’t apply to most, but it’s worth considering for some.
I hope that by now you’ve scrubbed the concept of a stock channel strip for
vocals from your mind. As you now appreciate, every vocal has its own set
of characteristics and requirements that cannot be catered for by something
premade. So many factors affect a vocal’s quality that trying to apply a
‘one-size-fits-all’ approach is doing your vocal a disservice.
We’re now very close to the home stretch of this course and the unit that
many of you have been waiting for: Mastering. But before we get there, I want
to cover what many consider the next most daunting topic: Synthesis.
Checklist
• Have you optimised your recording environment as best you can?
• Do you have the right headphones for recording?
• Have you got a pop shield?
• Is your microphone’s positioning correct?
• Are your recording levels good?
Having this checklist by your side whilst you’re implementing the steps into
your workflow will help you.
Further reading
1 Koester, T. (2022). Open-back vs closed-back headphones: What’s the difference? [on-
line] sweetwater.com. Available at www.sweetwater.com/insync/open-back-vs-
closed-back-headphones-whats-the-difference/ [Accessed 9 Nov. 2022].
2 DPA Microphones. (2022). Proximity effect in microphones explained: How it affects
different sound sources. [online] dpamicrophones.com. Available at www.dpamicro-
phones.com/mic-university/source-dependent-proximity-effect-in-microphones
[Accessed 9 Nov. 2022].
3 Wregleworth, R. (2022). What dB should vocals be recorded at and why? [online] mu-
sicianshq.com. Available at https://musicianshq.com/what-db-should-vocals-be-
recorded-at-and-why/#:~:text=You%20should%20record%20vocals%20at,it%20
comes%20to%20recording%20vocals%3F [Accessed 9 Nov. 2022].
Unit 9
Synthesis
DOI: 10.4324/9781003373049-9
TASK – Look at the synths that you have. Most DAWs will come with
a range of options. Try to identify common features between them.
Study their layouts. Identify what is different between them.
Day 3 – Oscillators
Let’s begin to learn how to synthesise. A synth is nothing without an oscil-
lator because the oscillator is what creates the electrical signal. Without it,
there is no signal to be manipulated. So, this is your starting point.
will be. Again, you will most likely give your oscillator this information
without thinking about it. The velocity with which you play your MIDI
keyboard will probably tell the oscillator your desired amplitude.
So far, so good then. No nasty surprises here. And as we continue through
this topic, you’ll find that there aren’t any nasty surprises at all. Everything
is entirely logical.
TASK – Look back at your range of synths. Identify where the oscil-
lators are. Some synths may only have one; others could have two,
three, four, or more!
Day 4 – Waveshapes
The third way of manipulating an oscillator is through the selection of wave-
shape. The good news here is that the primary options you have at your
disposal are limited to just four individual shapes. These shapes are sine,
sawtooth, square, and triangle, and each has its own identifiable characteristics.
Before we dive into these characteristics, understand this: Musical sounds
are generally not just made up of one frequency but are a combination of
multiple different frequencies, called overtones or partial tones. The lowest
frequency (the fundamental) is what we perceive to be the sound’s pitch. All
the other partial tones combine to create the sound’s unique timbre. Bearing
that in mind, let’s define our waveshapes.2
A sine wave is clean and smooth. It is as basic as sound can get. A sine
wave has no overtones. It is a pure fundamental. The sound you make when
whistling is about as close as you can humanly get to creating a pure sine
wave. Sine waves are great for making deep, smooth sub-bass that doesn’t
interfere with other mix elements.
Square waves are rich and buzzy. In addition to its fundamental, a square
wave will have multiple harmonics. To clarify, an overtone is any higher-
frequency standing wave, whereas harmonics are integer multiples of the
fundamental. In a square wave, the harmonics occur at odd whole-number
multiples of the fundamental. Combined with the fundamental, these
harmonics give the wave its square shape. Square waves can make crunchy,
aggressive kick drum sounds.
Triangle waves have the same odd harmonics as square waves. However,
in a triangle wave the harmonics taper off far more quickly as they rise,
providing the triangular shape. Its sound is somewhere between a sine and
a square wave. It’s not as smooth as a sine but not as buzzy as a square.
It’s clearer and brighter than a sine wave. Triangle waves are often likened
to wind instruments such as recorders and flutes and are great for lead lines.
Sawtooth waves are jagged. They’re the buzziest-sounding and are even
harsher-sounding than square waves. This is because they’re the richest in
terms of harmonics. So, if you’re looking for something in-your-face, a saw-
tooth is what you want.
If you’re feeling wholly bamboozled with technical jargon at this point,
don’t panic. For our purposes, none of it matters. All you need to remember
is that you have four primary wave shapes, and they each have their own
personality.
I stated the ‘primary options’ previously because stemming from these four
shapes come all manner of variations. But ultimately, they all have their roots
in one or other of these origins.3
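For the mathematically curious, you can see these overtone recipes by building the shapes additively. The Python sketch below sums harmonics using the standard textbook Fourier amplitudes (odd harmonics at 1/n for a square, odd harmonics at 1/n² for a triangle, every harmonic at 1/n for a sawtooth). Real oscillators aren’t implemented this way; it’s purely to illustrate how the recipes differ:

```python
import math

def partial_sum(t, harmonics):
    """Additively sum (harmonic_number, amplitude) partials at phase t."""
    return sum(a * math.sin(n * t) for n, a in harmonics)

N = 2001  # how many harmonics to include; more = closer to the ideal shape
# Standard textbook Fourier recipes:
square   = [(n, 1.0 / n) for n in range(1, N, 2)]                            # odd, 1/n
triangle = [(n, (-1.0) ** ((n - 1) // 2) / n ** 2) for n in range(1, N, 2)]  # odd, 1/n^2
saw      = [(n, 1.0 / n) for n in range(1, N)]                               # all, 1/n

# A quarter of the way through the cycle, the square's sum converges
# on pi/4 -- the flat top of the square wave (before scaling).
print(partial_sum(math.pi / 2, square))   # ≈ 0.785 (pi/4)
```

Notice that the sawtooth recipe contains every harmonic while the triangle’s amplitudes shrink with the square of the harmonic number: that is precisely why the saw sounds buzziest and the triangle sits closest to a sine.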
Day 7 – Filters
As a quick recap, the sound created by your oscillator is likely to contain a
fundamental and a harmonic series. These elements combine uniquely
and can be defined as the instrument’s timbre. Timbre is commonly described
with adjectives such as warm, gritty, and silky.
After your oscillator(s), the sound travels to a filter section where you
can shape its harmonic characteristics. You’ll encounter four options: Low-pass,
high-pass, band-pass, and notch. These all work in the same way as on
your equaliser, but let’s refresh our memories for the sake of completeness.
A low-pass filter will set a cutoff frequency, allowing everything below it to
pass and rejecting everything above it. This filter type is commonly used for
dark and warm sounds where you want to restrain your sound.
High-pass filters are the opposite. They set a cutoff frequency that allows
everything above it to pass and cuts everything below it. They’re primarily
used to remove unwanted low frequencies. They’re good for crisp and bright
sounds.
Band-pass filters allow you to select a group of frequencies that you want
to allow through, cutting the rest. This technique is often used to emulate
formant frequencies of the voice and is good for nasal sounds.
Notch filters are the opposite of band-pass filters. They allow you to
specify a group of frequencies that you wish to prevent from passing through.
They’re commonly used to remove specific, unwanted frequencies, but have
other creative applications, too.
Using filters is the most basic way of shaping a waveform. However,
they have one extra secret weapon with the power to make things a whole
lot spicier: Resonance. The resonance control boosts the frequencies around
the cutoff point. This creates a ringing sound which makes the cutoff
frequency more recognisable. Proceed with caution, though, as too much
resonance becomes piercingly irritating. The resonance boost is integral to
the classic filter sweep sound, which we will explore further later.
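To see why the resonance boost lives right at the cutoff, here is a textbook second-order low-pass response sketched in Python. This is idealised maths, not a model of any specific synth’s filter, but it shows the key behaviour: at the cutoff itself, the gain equals the Q (resonance) value:

```python
import math

def resonant_lpf_gain(freq_ratio, q):
    """Magnitude of a textbook second-order low-pass filter.
    freq_ratio = frequency / cutoff frequency; q sets the resonance.
    Idealised maths, not a model of any specific synth's filter."""
    real = 1.0 - freq_ratio * freq_ratio
    imag = freq_ratio / q
    return 1.0 / math.sqrt(real * real + imag * imag)

# At the cutoff itself (ratio 1.0) the gain equals Q, which is
# exactly the 'boost around the cutoff point' described above.
print(resonant_lpf_gain(1.0, 8.0))    # high resonance: gain of 8 (ringing)
print(resonant_lpf_gain(2.0, 8.0))    # an octave above: already rolling off
```

Sweeping the cutoff drags that resonant peak through the harmonics of your sound, which is the classic filter sweep effect.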
TASK – Identify the filter section on your synth. How does it affect
the oscillators? Can you assign it to specific oscillators, or is it a global
filter?
dictates how the volume of your synth changes over time and commonly uses
four stages known as ADSR. Understanding the ADSR envelope fully is key
to understanding how to shape your sound.
The A stands for attack. The attack denotes the length of time it takes the
signal to reach its peak level after playing your MIDI keyboard. Short attack
times mean your sound will reach full blast quickly, whilst long attack times
mean your sound will gradually fade in.
The D stands for decay. The decay indicates how long it takes for your
sound to fall to its sustained level once it has reached its peak.
The S stands for sustain. The sustain is the level at which the sound will
hold after it has risen to its peak level and decayed down and is measured in
decibels. Note that a sound will only sustain if you hold a key on your key-
board. If you simply press and release, the sustain step will be missed.
The R stands for release. Once you stop holding your MIDI note, the
release dictates the length of time it takes for the sound to return to being off.
Short times mean the sound cuts off swiftly, whereas longer times mean the
sound will fall away gently.
On some synths, you will come across a fifth step in this chain. H stands
for hold and comes after the attack. The hold will denote how long the sound
will remain at its peak level before decaying down to the sustain level.
Day 9 – Modulation
Modulation is where synthesis gets interesting as it’s the stage where you start
to create movement in your sounds. If you substitute the term modulation for
movement, it’s much easier to comprehend.
Using an envelope is the most common way to create movement in your
sound. You can assign an envelope to almost any parameter within your synth
to make it move once in a specific way. The simple principles of envelopes
that you just learned apply in the same way: You can specify the A(H)DSR of
any parameter. The trickiest concept to grasp is that, rather than the
envelope controlling the volume, it controls the position of something else.
Let’s look at a couple of examples to clarify this: The most common
parameter to modulate with an envelope is the cutoff frequency on the filter.
Picture this: The attack quickly modulates the cutoff frequency from around
500Hz to 5kHz. The decay quickly brings the cutoff down to 1kHz, where it
sustains for a second before slowly releasing back to its original position at
500Hz.
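As a sketch of that example, the envelope’s level (0 to 1) can be mapped onto the 500Hz–5kHz cutoff range. The exponential mapping below is my own assumption, chosen because we perceive frequency logarithmically; a given synth may map the modulation differently:

```python
def env_to_cutoff(env, low_hz=500.0, high_hz=5000.0):
    """Map an envelope level (0.0-1.0) onto a cutoff frequency.
    The exponential mapping is an assumption here, chosen because
    we hear frequency logarithmically; a synth may map differently."""
    return low_hz * (high_hz / low_hz) ** env

print(env_to_cutoff(0.0))  # 500.0  -- envelope at rest
print(env_to_cutoff(1.0))  # 5000.0 -- peak of the attack
print(env_to_cutoff(0.5))  # the geometric midpoint, ~1581 Hz
```

The envelope itself is just the same A(H)DSR shape as before; only its destination has changed from volume to cutoff.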
Let’s consider another example. Imagine the pitch of your oscillator has
an envelope assigned to it. The attack tells the oscillator to rise quickly by an
octave. There is no decay. The sustain is at full, so the pitch stays an octave
up whilst you hold the key down, and the release quickly falls back down an
octave to the original pitch.
There are two other important things to be aware of regarding your enve-
lopes. First, there are different trigger types. You may be able to tell your en-
velope to re-trigger only after all current MIDI notes have been released. This
is commonly called ‘single’ mode. ‘Multi’ means the envelope is re-triggered
by every MIDI note. These are the most common, but there are others too.
Secondly, you will be able to assign one envelope to multiple parameters.
You don’t need to set up a new envelope for every modulation assignment
unless you desire a different shape.
The unique factor to be aware of with envelope modulation is that it trig-
gers the movement once only. This is the opposite of an LFO.
Day 10 – LFOs
LFO stands for Low-Frequency Oscillator. Technically, it is another oscillator
in your synth. However, its frequency is so low that you can’t hear it. LFOs
can take any waveshape (dependent upon the limitations of your synth) just
like a regular oscillator. Their unique characteristic is that they are continu-
ously cycling around their waveshape rather than modulating a sound once
only as with envelopes.
You can adjust your LFO’s cycle rate in one of two ways: Either by fre-
quency (in Hz) or by tempo-syncing it to your project and using duration
values such as one-sixteenth, one-eighth, and so on.
So, what are some common things to modulate with an LFO? The panning
position is one, where you want the sound to move from left to right
and back again continuously. Tuning, specifically the cents of an oscillator, is
another, where you want the pitch to fluctuate constantly, creating a kind of
vibrato sound. You could modulate your filter’s cutoff frequency with an LFO
so that its position moves up and down continuously. Or what about your
oscillator’s volume to create a tremolo effect?
The keyword to associate with LFOs is continuous. LFOs don’t stop after
one trigger; they cycle around and around.
The standard place to start with LFOs is with a sine wave. Due to their
smooth nature, an LFO with a sine waveshape will give you a smooth modu-
lation effect. The more interesting your LFO waveshape, the more interesting
the resulting modulation effect.
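Tempo-syncing is just arithmetic. The sketch below converts a synced rate into Hz, assuming the common convention that ‘1/8’ means one full LFO cycle per eighth note, with a whole note equal to four beats; synths vary in exactly how they label this:

```python
def synced_lfo_hz(bpm, note_fraction):
    """Convert a tempo-synced LFO rate into Hz. note_fraction is the
    LFO cycle length as a fraction of a whole note, assuming the
    common convention of a four-beat whole note; synths vary."""
    beats_per_cycle = 4.0 * note_fraction
    seconds_per_cycle = beats_per_cycle * 60.0 / bpm
    return 1.0 / seconds_per_cycle

# At 120 BPM an eighth-note cycle lasts 0.25 s, i.e. 4 cycles per second:
print(synced_lfo_hz(120, 1 / 8))   # 4.0
print(synced_lfo_hz(120, 1 / 16))  # 8.0
```

This is why tempo-synced modulation feels locked to the groove: the cycle rate rises and falls with the project tempo automatically.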
TASK – Set up an LFO and assign it to the same parameters you as-
signed your envelope to yesterday (ensure you remove the envelope
parameters). Experiment with tempo-synced and free cycle rates. Ex-
plore different LFO waveshapes.
make that specific model unique. Whilst this list is by no means exhaustive,
it will cover most of the bases.
Sub oscillators are common. Typically, they are linked to the principal/
first oscillator and are pitched an octave lower. They are most commonly
a sine wave whose job is to thicken and bolster the overall sound without
contributing additional harmonic richness.
You may well also come across a noise generator. This could be part of
one of your main oscillators or may run through its own independent oscilla-
tor. Commonly they will offer white and pink noise and are there to provide
textural complexity to your sound.
You’ll frequently encounter equalisers within your synth. These tend to be
global, i.e., affecting the whole synth rather than individual oscillators (as you
already have your oscillator flters to play with), and usually will be limited to
three or four bands rather than giving you complete parametric control.
Most modern synths will have an effects section included. In some synths,
the options here are exhaustive. In one of my favourites – Ana 2 from Sonic
Academy – you have a wide selection of reverbs and delays, all manner of dif-
ferent distortion types, multiple modulation options, a whole load of dynamic
processors, and some other hidden gems, too! You’ll find similar ranges in
other major players such as Massive and Serum.
The next things you could come across are MIDI effects. Your synth may
include an arpeggiator or even a chord trigger section. Sometimes, you’ll even
come across a step sequencer, which you can use as an additional modulation
option instead of an envelope or LFO.
Continuing along the alternative modulation route, you may see an
MSEG, which stands for Multi-Stage Envelope Generator. Think of this as an
envelope, but instead of having the fixed A(H)DSR points, you can draw in
as many points as you like to create a unique modulation shape. You can then
use it as an envelope to trigger once or link it to an LFO to continuously cycle.
Finally, on this list is a modulation matrix. On more complex synths, a
mod matrix is key to controlling what goes where. Put simply, a mod matrix
is the summary and control panel for all your routing.
As you may have realised by now, synths can become your one-stop shop
for all your sound design needs. Creating your unique sound and applying EQ,
compression, and effects within the same plugin can be great not only for keep-
ing your session tidy and clutter-free but also for preserving some vital CPU.
TASK – This is the crucial step. Learning all of the individual quirks
of your synths will allow you to get the most out of them. Learn what
every section of your synth does. Know it inside out. If you have more
than one, learn one thoroughly before moving on to another.
filter and shape its volume with an envelope. Think of it like sculpting a
block of stone. You start with something significant and chip away at it until
you’re left with everything you want.
This synthesis style is generally considered an analogue method, but it is
often replicated digitally through analogue modelling. Classic models include
the Moog Minimoog Model-D, Arturia Minibrute, and Roland Jupiter-8.
Some know subtractive synthesis as East Coast synthesis, as it was pioneered
by Bob Moog, who was based in New York.
TASK – How you approach the task for all ten synthesis types will be
up to you. You may have that type of synth already at your disposal. In
which case you can explore it. It may sound particularly interesting, in
which case you may wish to get one to play with. Or you may just want
to watch a few YouTube videos to get a better feel for what I’m talking
about. The choice is up to you.
exciting sound combinations when you consider that you could be moving
between four entirely different waveshapes in a short period.
Some stand-out models of vector synthesisers are the Prophet VS, Yamaha
SY, and Korg Wavestation.
• Leads: These are generally monophonic, meaning they can only play one
note at a time. They are bold, with rich harmonic content and strong
sustain, which helps them cut through the mix.
• Bells: These have fast attack and decay times but an extended, prolonged
release. Again, they tend to be harmonically rich and are often made us-
ing FM synthesis.
• Pads: These have long attack and release times and a high sustain level.
They frequently emulate the sound of a choir or a bowed instrument.
• Keys: These will always be polyphonic, meaning they can play more than
one note at a time and will emulate the sound of a piano or organ. They
tend to use simpler waveforms and are less harmonically complex. This
allows chords to sound more coherent.
• Plucks: These have a fast attack, decay, and release and emulate the sound
of a pizzicato (plucked) string instrument or palm-muted guitar. They are
like bells but are less harmonically rich and have shorter release times. They
often utilise Karplus-Strong synthesis to create a more convincing attack.
• Brass: These have a slightly slower attack and a fast release. They are
harmonically complex and extend over a wide frequency range. Like bells,
they’re often made using FM synthesis.
• Bass: These are almost always monophonic. They tend to be built upon
a sine wave, which ensures a solid fundamental frequency and will often
have another waveshape layered on top to add harmonic complexity.
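Since plucks mentioned it, Karplus-Strong is simple enough to sketch in a few lines of Python: a burst of noise circulates through a delay line whose length sets the pitch, and a two-sample average gently damps the highs so the tone decays like a plucked string. This is a minimal illustration, not production-ready code:

```python
import random

def karplus_strong(freq_hz, sample_rate=44100, seconds=0.5):
    """Minimal Karplus-Strong pluck: a noise burst circulates through
    a delay line whose length sets the pitch, and a two-sample average
    damps the highs so the tone decays like a plucked string."""
    period = int(sample_rate / freq_hz)              # delay length sets pitch
    delay = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(int(sample_rate * seconds)):
        sample = delay.pop(0)
        out.append(sample)
        delay.append(0.5 * (sample + delay[0]))      # the damping average
    return out

random.seed(0)                                       # reproducible noise burst
note = karplus_strong(220.0)                         # a plucked A3
print(max(abs(s) for s in note[:2000]))              # bright, noisy attack
print(max(abs(s) for s in note[-2000:]))             # quiet, mellow tail
```

The noisy burst gives the convincing attack mentioned above, and the averaging filter is what turns it into a pitched, decaying tone.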
Almost all synth sounds you can create, on pretty much any synthesiser type,
will fall into one of these categories. By exploring each of them, getting to
know them, and learning how to dial them in with pace and accuracy, you
will speed up your workflow and productivity to no end.
1. Not all parameters are created equally. Some controls will affect your
sound much more significantly than others. If you’re looking for significant
changes, reach for your filters, amp attack and release, and LFO depth. And
if you want to fatten or warm up your sound quickly, look to detune your
oscillators.
2. The quickest way to access new tones is by adjusting your filter settings.
Get creative with what is modulating your cutoff frequency. You can try
modulating it with keyboard velocity or tracking, envelope amount, and
even the filter envelope settings. Another simple option for quick results
is to try some different wave shapes. Look to explore shapes beyond the
primary ones. The more complex the oscillator wave shape, the more
complex your sound will become.
TASK – Each of these tips contains information you can take and ap-
ply to your practice. Today, and over the next three days, explore each
of these tips systematically.
be deeper than you think. Ultimately, this comes down to knowing your
instrument.
8. Adjusting parameters whilst a synth is sounding can add a load of ex-
pression and dynamism to a performance. Some controller options to
play with are your pitch and mod wheels and joysticks, knobs and rotary
controllers, keyboard velocity and aftertouch, and foot pedals. You can
try linking any of these controllers to pan, pitch, volume, cutoff, and
LFO amount for quick results. Many presets will already have controller
parameters assigned to valuable things within the instrument.
9. Use your sequencer as a control module to animate your sound. Don't
feel that you need to record notes and controller changes simultaneously.
It's very common to perform notes first and controller changes afterwards.
This lets you focus on playing the notes correctly first, then
adding the creative movement in a second performance.
10. The most important part of a sound is usually the initial attack. It provides
a lot of information to the listener about the sound. If your attack
doesn't fit in with the overall style of the track, then it can be misleading
or confusing. So, pay careful attention to the front end of your sounds,
especially if you've changed tempo or if your tempo changes during the
track, as you may need to adjust your attack to sit better at different
tempos.
I hope you’re feeling inspired to throw yourself headfrst into the subject.
Don’t let it scare you. Own it!
I’ve decided not to include a checklist at the end of this chapter. Whilst
synthesis is a wonderful skill to possess, it isn’t as fundamental as everything
we’ve covered so far. So, I’ll understand if you wish to proceed without con-
quering it all!
Further reading
1 Burry, M. (2021). How we hear: A step-by-step explanation. [online] healthyhearing.com. Available at www.healthyhearing.com/report/53241-How-we-hear-explainer-hearing [Accessed 9 Nov. 2022].
2 Hopkin, B. (2022). Fundamental, harmonics, overtones, partials, modes. [online] barthopkin.com. Available at https://barthopkin.com/fundamental-harmonics-overtones-partials-modes/ [Accessed 9 Nov. 2022].
3 Aulart. (2022). Oscillator waveforms: Types and uses – part I. [online] aulart.com. Available at www.aulart.com/blog/oscillator-waveforms-types-and-uses-part-i/ [Accessed 9 Nov. 2022].
Unit 10
Mastering
DOI: 10.4324/9781003373049-10
to them. They train in ways that are specific to their position on the field.
The way a defender trains is different to how a goalkeeper prepares. This
individual training is like mixing individual components in your mix: your
kick, bass, lead vocal, etc. Your method for mixing individual parts is unique
to that part.
You may also train groups of players together. The whole attacking unit
may prepare together, for example. This will help with cohesive movement
and coordination of the players belonging to that section. This is like your
buss processing. You can process or train groups of instruments together. For
example, you may process all your drums together or all your vocals.
Therefore, mixing can be considered as the micromanagement of the
channels within your track. It focuses on the individuals, on the small details
that allow each cog in the machine to operate effectively. Without this
micromanagement, you would have an uncoordinated, incoherent machine.
Mastering is the process of managing all the individual components together.
It's like the manager of the team. You can attempt to manage a team
without first coaching it, and it may help somewhat, but it will be much
less effective than managing a well-coached unit. Think of it as the team's
tactics. It’s the overall tactical decisions taken from one game to the next that
are adapted depending on the selected team and the playing environment
they are entering. In mastering terms, it’s the decisions taken about the en-
tire track. These decisions affect everything, not just individual components.
Therefore, these decisions must be taken with consideration of the bigger
picture. That means ensuring that the decisions you make are benefcial for
the whole track, not just one or two individual elements within the track.
To put it much more simply, mixing is the hard work. It’s the nuts and
bolts, the bread and butter. Mastering is the polish and shine that comes
on top.
The mastering engineer’s job was to transfer the fnal tapes from the
mix/balance engineer, doing so as accurately as possible. The goal was to
duplicate the sound of the tape on the disc. As an apprentice, engineers
would listen to hundreds, if not thousands, of transfers. The huge beneft
here was to spend time with an experienced professional. As experience
and skills were gained, the apprentice stepped up the ladder to train with
the mix engineer and then the recording engineer. Seemingly, the dark art
of mastering that we alluded to yesterday is not so dark. The prized posi-
tion was that of the recording engineer, a job that is hardly ever mentioned
nowadays. Have you ever heard of someone talking about getting it right at
the source? There we go.
As relationships between labels and studios fractured over time, engineers
went freelance and began to work across multiple studios. This was challeng-
ing because each studio would have a unique mix environment. The job was
always the same though: Polish the mix they had in front of them with the
available tools (EQ, compression, and effects), albeit in a less familiar setting.
This is effectively where we are today. The mastering engineer's role has
developed to become the final quality control for not only the technical
aspects of the recording but the artistic ones too.
individual components of the track but are now entirely focused on the
sum of the parts.
3. Check your levels. I'll talk thoroughly about target levels tomorrow.
4. Bounce your mix down to a stereo file. You should bounce your mix out at
the same sample rate and bit depth as the project file. So, if your session
is at 48kHz 24-bit, then this is what you bounce at. You can master either
.wav or .aiff files.
5. Take a break. Don't try to master your track the same day you finish mixing
it. I'll talk more about this later too.
6. Import your stereo mix into a new project to master it. Don't be tempted
to try to master it in the mix session. It's not good practice. The main
reason for this is that you'll be tempted to go back and fiddle with mix
elements, thereby taking your mind off the task you should be focused on:
Mastering!
7. Listen through the song from start to finish and take notes. You'll identify
most of its issues in this first listen.
8. Import your reference tracks into the session, and then make some A/B
comparisons between them and your mix. Write down the main areas you
need to address to make your master fit in with its target crowd.
Once you’ve done all this, you’ll then be ready to master. As you can now see,
a lot of work and effort goes into setting yourself up for success. I advise you
to follow these steps diligently.
TASK – Reflect on this eight-point list. How many of these have you
been doing previously? Which points do you need to build into your
workflow?
doesn’t appear drastically louder than another. Most listeners these days
don’t listen to an album from top to tail. They listen to a playlist containing
songs from a range of artists. DSPs, therefore, employ loudness normalisation,
meaning that all songs will be normalised to the same loudness level. This
ensures a consistent listening experience for the consumer. Spotify’s normali-
sation level is –14dB LUFS integrated, for example. If you deliver a master
above this level, it will be turned down, making it quieter. Let’s quantify
this a little. You deliver your master to Spotify at –1dBTP, –8dB LUFS inte-
grated. Therefore, Spotify turns your song down by 6 dB, which means your
peak level becomes –7dB. This is called a loudness penalty. Therefore, a more
dynamic mix at the same integrated LUFS level will sound punchier when
streamed than an overly compressed or limited one.
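The arithmetic of that loudness penalty is simple enough to sketch in a few lines of Python (my own illustration of the subtraction described above; the −14 LUFS figure is Spotify's published normalisation target, and real DSPs differ in how they treat masters quieter than the target):

```python
def loudness_penalty(master_lufs, master_peak_dbtp, target_lufs=-14.0):
    """Return the gain a DSP applies and the resulting peak level.

    A negative gain means the track is turned down (the 'penalty').
    """
    gain_db = target_lufs - master_lufs
    return gain_db, master_peak_dbtp + gain_db

# The example from the text: a -8 LUFS integrated master peaking at -1 dBTP
# is turned down by 6 dB, so its peak lands at -7 dBTP.
gain, new_peak = loudness_penalty(-8.0, -1.0)
```

The louder you push the master past the target, the more headroom you sacrificed for nothing once normalisation pulls it back down.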
The argument here actually has to do with perceived loudness, which is
unmeasurable. How loud does your song sound, regardless of how loud it is on
a meter? This is a much deeper rabbit hole, which I'll touch upon later.
The real takeaway from this is that, when mastering, you shouldn’t be
concerned with your loudness on a meter. You should master according to
what works best for the song. If you distribute your tracks to a DSP, whether
it’s Spotify, Tidal, SoundCloud or any other, they’ll all normalise your levels
in one way or another. Likewise for any kind of radio broadcast. The only way
to ensure your track sounds as intended is to sell a physical copy. And even
then, the level at which the consumer listens to your record will affect how
they perceive it. As we learned much earlier, low frequencies are much more
perceivable at higher SPLs.
your mix – both the good and the bad. A large part of this is to do with
translatability. Part of being a mastering engineer is ensuring that the master
translates across every listening medium, whether in a car, on earphones, a
Bluetooth speaker, or anything else.
The size and proportions of a room play a significant part in how the room
sounds. Various highly complex mathematical equations help calculate ratios
between the length, width, and height of walls and ceilings for better sound. If
you want to explore the subject, investigate Oscar Bonello's research.2
Beyond the room’s proportions is the treatment of the space. This comes
in two forms: First, absorption, and secondly, diffusion. Absorption stops fre-
quencies from refecting to your listening position, meaning they don’t in-
terfere with the direct sound coming from your speakers. Diffusion works by
scattering problematic refections of sound in different directions. The theory
behind acoustic treatment is complicated. If you want to treat your space cor-
rectly, I advise speaking to an acoustician for some proper advice. However,
there are a lot of resources online that explain how you can do it reasonably
well on a budget.
this is what I aim for. This takes practice, of course. You must train yourself
to depersonalise the experience, to approach it with a fresh pair of ears from
a neutral stance.
Linked to this is taking some time between finishing your mix and mastering
it. Don't master the day you hit print on the mix. There are a couple of
reasons for this too. First, you'll want to check the mix on various playback
systems. You're likely to want to make a couple of tweaks to it. There have
been a number of times when I've had a client ask me to master a track, only
to be told that they don't like the tone of the bass, or the vocal, or something
else. Be 100% happy with your mix first. Secondly, you should give your ears
some rest. You need to get the mix out of your ears for it to feel a little fresher
when you come to master it. Leave it alone for a couple of days as a minimum,
but a week or more if you can.
TASK – Look back at previous mixes. Are there moments where you
thought, 'I'll fix that in the master'? If so, fix them now. There's no
such thing as fixing it in the master!
TASK – Reflect upon how you label files in general. Are there improvements
you can make to how you organise things?
As mentioned previously, the job of the master is not to make the mix
sound overwhelmingly better. The mix should already sound great. The
purpose is to help the mix sound its best on any listening device. It stands to
reason that experience counts for a lot in this area. Experienced engineers
will understand which areas to target to make the mix more translatable.
This shouldn't deter you from doing it yourself. Everyone was inexperienced
once.
Any EQ adjustment you make at this stage will be done very gently.
Generally, you shouldn’t boost or cut by more than 1dB at a time when
mastering, perhaps 1.5dB at the most. This number is less than what others
may tell you. I’ve heard 3dB used a lot as a maximum. For me, this is too
much. If you have to adjust your master by that much to get it sitting where
you want it to, you need to revisit your mix. And, of course, you should
always use a reasonably broad bandwidth, so the move is subtle. Narrow
subtractive cuts should be kept solely for mixing and should not enter your
mind when mastering.
When applying EQ in a mastering context, you should use a linear-phase EQ.
The difference between a linear-phase and a standard (minimum-phase) EQ is
challenging to articulate simply. Still, you can think of it like this: linear-phase
EQs are designed to work on complex material containing multiple instruments
simultaneously. They're also clean, meaning they won't impart any additional
colour as an analogue EQ would. That's not to say that you shouldn't use an
analogue EQ on your master, but you should be careful with its implementation.
You don't want to saturate the master to the point where you are
recharacterising the song's overall feel.
Ensure that your EQ decisions are made with regard to your reference
tracks. Especially if mastering for digital release, the focus should be on
ensuring the track will sit comfortably amongst other tracks in the same field.
You want your song to feel as if it belongs amongst other pro tracks. You don't
want it to stand out negatively for any reason.
Beyond these broad pieces of advice, some more specifc matters can be
mentioned. For example, you’re safe to high pass up to 15Hz. Nothing this
low is perceivable. You can high pass much higher in some genres, even up to
somewhere around 50Hz in some cases. This will buy you headroom, allowing
you to make your master louder in the long run.
You can use mid/side EQ to enhance the width of your mix. For example,
a typical move is to high pass up to around 120Hz in the sides of your mix,
focusing on the low end in the centre.
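To show what that mid/side move does under the hood, here is a simplified sketch (my own illustration: a first-order high-pass standing in for a proper mastering EQ). You encode left/right into mid/side, filter only the side signal, and decode back, which keeps the low end mono in the centre:

```python
import math

def highpass(samples, cutoff_hz, sample_rate):
    """First-order RC high-pass filter (a crude stand-in for a mastering EQ)."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for s in samples:
        prev_out = alpha * (prev_out + s - prev_in)
        prev_in = s
        out.append(prev_out)
    return out

def ms_highpass_sides(left, right, cutoff_hz=120.0, sample_rate=44100):
    """High-pass only the side (L-R) signal, keeping low end in the centre."""
    mid  = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    side = highpass(side, cutoff_hz, sample_rate)
    # Decode back to left/right: anything mono (the mid) is untouched.
    return ([m + s for m, s in zip(mid, side)],
            [m - s for m, s in zip(mid, side)])
```

Mono content passes through unchanged, while low-frequency content that differs between the channels is attenuated: exactly the "low end in the centre" behaviour described above.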
It’s good practice to try to shape your mix subtractively rather than ad-
ditively as much as possible. Consider it as making room for the frequency
areas you want more of rather than taking away what you want less of. This
adjustment in stance can be tricky to grasp but liberating when it clicks.
Finally, always gain stage. I’ve said it enough times by now, but don’t be
fooled by changes in volume. Louder will sound better. Match your input and
output levels to ensure you’re not being fooled.
TASK – EQing a master is all about context. You should pay careful
attention to your reference tracks to understand the context within
which your track will sit. This will inform your EQ moves.
then back off a little. This can work. However, I frequently set attack times
almost as fast as they’ll go to grab pokey snare transients as much as possible.
Part of your consideration here also needs to be the compressor you’re em-
ploying. Some respond much faster than others, so you need to know your
tools and employ the right one for the job.
Release times are similarly opaque. You’ll frequently be advised to start
with a fast release time and then back it off until the release just returns to
zero by the time the next percussive hit strikes and then back off a little. This
method aims to get the compressor breathing in time with the music. Again,
I often use the fastest release possible, so I’m just working on pulling the
transients back into the mix.
Finally, the knee. Not all compressors have a knee, so this may be redundant
information. But my advice here would be to start with a hard knee, dial
in the rest of the settings, and then soften the knee slowly until you feel
it's become too soft, and then back off again. Remember, the knee denotes
how gradually the compressor transitions into gain reduction as the signal
approaches and exceeds the threshold.
Having said all this, I suggest you keep in mind the concept we've mentioned
many times previously about having intent. Don't just stick a compressor
on your master and fiddle until you think it sounds good. Listen critically
to the master and form an opinion as to what, if anything, it needs. Does it
lack punch? Is its dynamic range too wide? Know what it is you're setting out
to achieve first. By doing this, you're more likely to end up somewhere that
you like.
Apple Music, Deezer, YouTube, and SoundCloud all say the same. So, if
you’re mastering for digital release, −1dBTP should be your output ceiling.
The true peak part is essential. Ensure your limiter is set to monitor true peak
levels.
The process of limiting is the same as compressing: You have a threshold,
attack, and release control. However, the ratio is fixed at ∞:1. This means
that any signal exceeding your threshold will be chopped off completely.
This is what prevents clipping. For this reason, you should be extra cautious.
Again, I recommend not exceeding 3dB of gain reduction here. The more
controlled your dynamics are before they reach the limiter, the lower the
threshold will be able to go before limiting occurs, and thus the louder your
mix will become. Controlling your dynamics for loudness should happen at
every stage of your mix, not just at your final limiter. You should leave your
final limiter as little to do as possible to keep your master sounding natural
and not obviously slammed by a limiter.
As with compression, the same question marks about the attack and re-
lease time apply. These will be genre dependent. However, bear in mind that
they only really matter if they are responding to transient information. If your
mix is slammed so hard that all the signal exceeds the threshold, your attack
and release times become meaningless.
The one thing worth mentioning with release times is that times shorter
than around 30ms will tend to introduce distortion to your mix. So, in gen-
eral, try to keep your release time above this point.
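To make the ∞:1 idea concrete, here is a bare-bones peak limiter sketch (an assumed illustration: instantaneous attack, one-pole release, working on plain sample peaks; real mastering limiters add look-ahead and true-peak detection on top):

```python
import math

def limit(signal, threshold=0.5, release_ms=50.0, sample_rate=44100):
    """Hard-limit a signal: an infinity-to-one compressor at the threshold."""
    release_coeff = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    gain = 1.0
    out = []
    for s in signal:
        level = abs(s)
        # The gain needed so this sample cannot exceed the threshold.
        needed = min(1.0, threshold / level) if level > 0 else 1.0
        if needed < gain:
            gain = needed                                    # instant attack
        else:
            gain = needed + (gain - needed) * release_coeff  # gradual release
        out.append(s * gain)
    return out
```

Note how the release coefficient decides how quickly the gain recovers: shorten `release_ms` too far and that pumping gain curve itself becomes audible distortion, which is the point made above about very short release times.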
TASK – How loud are your reference tracks? Can you limit your track
up to a comparable level without destroying its dynamics? If not, you
should revisit your mix.
True peak is the next thing to meter. For DSPs, −1dBTP is your target. For
CD and club play, you can shoot for −0.1dBTP. Both loudness and true peak
values are talked about a fair bit.
What’s hardly ever talked about is dynamic range (DR). The dynamic
range of your song is the difference between the loudest and the quietest
sections. A low DR value denotes a track that is over-compressed or over-
limited. Generally, aiming for a minimum DR of 9dB is good. You can push
this to 8dB for club mastering. Ensure you have some dynamic difference
between the different sections of your song, and you’ll be heading in the right
direction.
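A crude way to see this number for yourself is to compare section levels (my own much-simplified sketch: plain RMS per window, with none of the gating or K-weighting a real DR or LUFS meter applies):

```python
import math

def section_levels_db(signal, sample_rate, window_s=3.0):
    """RMS level in dB for consecutive windows of a signal."""
    n = int(sample_rate * window_s)
    levels = []
    for start in range(0, len(signal) - n + 1, n):
        chunk = signal[start:start + n]
        rms = math.sqrt(sum(s * s for s in chunk) / n)
        if rms > 0.0:
            levels.append(20.0 * math.log10(rms))
    return levels

def dynamic_range_db(signal, sample_rate):
    """Loudest section minus quietest section, in dB."""
    levels = section_levels_db(signal, sample_rate)
    return max(levels) - min(levels)
```

A track whose sections all measure within a dB or two of each other has almost certainly been over-limited.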
Swinging back around to loudness for a moment, it's essential to have your
different LUFS values separated in your mind. You can hit a short-term LUFS
level of −8 or −9dB in your chorus or drop and still hit −14dB LUFS integrated,
so long as you have enough dynamic range in your song.
When mastering, be sure to keep an eye on your correlation meter. Al-
though you’ve been paying it attention all the way through your mixing pro-
cess, there’s never been a more critical time than now. The last thing you
want on a master is phase cancellation!
A decent metering plugin will monitor all these things for you. There are
a few on the market.
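What a correlation meter reports is essentially a normalised cross-correlation between the two channels. Here is a whole-buffer sketch of the calculation (real meters compute it over short sliding windows):

```python
import math

def phase_correlation(left, right):
    """+1 = mono-compatible, 0 = unrelated, -1 = fully out of phase."""
    energy = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    if energy == 0.0:
        return 0.0
    return sum(l * r for l, r in zip(left, right)) / energy
```

Readings hovering near or below zero are the warning sign: that material will partially cancel when the master is summed to mono.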
TASK – If you haven’t already got a good metering plugin, get one!
Make sure you know where to fnd all these critical pieces of informa-
tion: LUFS, dBTP, DR, and correlation.
fade lasting roughly five seconds. This should give your song enough room to
breathe at the end before the next song plays. The exception here is if
you're mastering songs that need to flow into one another as part of an EP or
album, in which case you will need to make the adjustments at this point. So,
if a song is meant to run almost immediately into the next, your ending fade
will be shorter and tighter.
Those who are a little extra like putting curves on their beginning and
ending fades. I’m not one of them, but you may be! In which case, your intro
fade should be logarithmic, and your outro fade should be exponential.
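For the curious, those curve shapes can be sketched as gain ramps (my own interpretation of 'logarithmic' and 'exponential' here; DAWs vary in how they label their fade curves):

```python
import math

def fade_in_gains(n):
    """Logarithmic-style fade-in: rises quickly at first, then eases to full."""
    return [math.log10(1.0 + 9.0 * i / (n - 1)) for i in range(n)]

def fade_out_gains(n):
    """Exponential-style fade-out: drops steeply, then tails away gently."""
    return [(10.0 ** (1.0 - i / (n - 1)) - 1.0) / 9.0 for i in range(n)]
```

Multiply your audio samples by these gain values to apply the fades; both curves run smoothly between silence and unity gain.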
TASK – Practise putting fades on some tracks. Ensure you don’t cut
off any audio. Make the ending feel natural. Add curves if you wish.
Day 18 – Bouncing/rendering/exporting
Bouncing, rendering, and exporting all mean the same thing; which term is
used depends on the DAW you work in. Doing it is simple. Set a cycle that
extends just beyond the fade points you created, so you're bouncing from and
to 100% silence.
That’s the simple bit. Where people get in a mess is what fle formats to
export with. Let’s start with the simple stuff: Don’t bother with .mp3! .mp3 is
a lossy fle format, meaning that the quality of the data will be degraded as the
information is compressed to make the actual fle small. What is the point in
going to all that effort to make the best-sounding track you can just so that
you can throw it all down the toilet at the last moment when you bounce it?
You should be exporting as .wav. This is a lossless format, meaning the qual-
ity of the information kept in the fle isn’t degraded as the fle is compressed.
The next consideration is sample rate and bit depth. If you've
recorded at and kept your project at 48kHz 24-bit, then bounce at that too.
More and more platforms now support HD audio, and those that don't will
catch up soon. Tidal was the first to make a big deal of offering this. Apple
Music is now on board, and others are sure to follow (as of spring 2023). Even
SoundCloud supports it if you have a Pro package.
However, the general standard for CD is 44.1kHz 16-bit. If you are reducing
your bit-depth, you’ll need to include what’s called dithering in your bounce.
Dithering is low-level noise that’s added to your fles when bouncing down to
a lower bit-depth to mask the audio quantisation. I won’t baffe you with the
reasoning but be sure that it’s necessary. Note that you only need to dither if
you’re reducing your bit-depth. It’s not required in any other circumstance.
Also, note that there’s nothing to be gained from increasing the sample rate
214 Unit 10: Mastering
at this stage or any other point. You cannot increase the fdelity of something.
So don’t bother bouncing your 44.1kHz session at 48kHz. It’s completely
meaningless!3
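The reasoning behind dithering can at least be glimpsed in a few lines (an illustration only, assuming plain TPDF dither of about one LSB; real mastering dithers often add noise shaping on top):

```python
import random

def quantise_16bit(sample, dither=True):
    """Quantise one float sample in [-1, 1] to a 16-bit integer value."""
    scaled = sample * 32767.0
    if dither:
        # TPDF dither: triangular noise of roughly +/-1 LSB added before
        # rounding, which decorrelates the rounding error from the signal.
        scaled += random.random() - random.random()
    return max(-32768, min(32767, round(scaled)))
```

Without the noise, the rounding error follows the signal and is audible as distortion on quiet material; with it, the error becomes benign, steady hiss.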
Another important note: If mastering for CD, bounce as .aiff format. The
explanation is in tomorrow’s lesson.
TASK – Review how you’ve been bouncing your tracks. Have you
been formatting correctly?
Day 19 – Metadata
Metadata is information that is added to a music fle to identify and present
the audio. This information is vital as, without it, the music wouldn’t be cor-
rectly attributed to the relevant parties, and therefore the correct royalties
wouldn’t be paid.
There are a whole host of details required as metadata. These include art-
ist name, track title, album/release title, genre, songwriter credits, and track
numbers. This information is straightforward, so you should have it on hand.
You’ll also credit any additional artists, producers, writers, or engineers in-
volved at this stage.
One of the most critical pieces of data is the International Standard Recording
Code, or ISRC. This code is your track's digital fingerprint and is
used to track your song's plays and sales through DSPs. Your song cannot be
distributed digitally without it. Don't fret, though. Most online distribution
services will assign ISRCs for you if you're self-releasing. Note that the ISRC
is for the song, not the release. So, every song will have its own code.
If you’re only releasing digitally, then you don’t need to do anything about
adding metadata to the bounce in your DAW. All the metadata handling will
happen when submitting through your distributor. However, if you’re master-
ing for CD, you’ll need to include it. Metadata can be written into .mp3, .fac,
and .aiff fle formats, but cannot be written into a .wav fle, so bounce as .aiff
in this situation. You’ll also need to create what’s called a Disc Description
Protocol or DDP fle. The DDP fle is a precise electronic version of your mu-
sic that is immediately ready for duplication. Not all DAWs support DDP fle
creation, but you can create it in both Reaper and Studio One.
TASK – Start a spreadsheet. On it, store all of the metadata for all of
your music. Add to it as you write and keep it up to date. You’ll thank
me in ten years when you need to refer back to something old.
When saturating your master, remember, as with anything, that you can automate
it. Don't just set and forget. Adjust the amount of saturation to add
additional weight to your choruses.
When adding saturation in whatever form you choose, look for an oversampling
option and set it as high as it will go. I won't bore you with the
technicalities here, but it's important. Also, try to avoid saturating high
frequencies. Saturated high frequencies will sound harsh and aggressive and
become too loud. Saturation can also make your transients too pokey, causing
ear fatigue more quickly.4
The second goal may be to add width and ambience. Reverb creates space.
Therefore, it can be used to glue parts of a master together by placing them
in a similar acoustic space. Large reverbs like halls will also provide additional
width to your track. Again, automating the reverb level is key to delivering
contrast between sections.
Your master’s reverb should be placed on a buss, like a parallel process.
This way, you can easily automate the buss’s level as desired throughout the
track. As with any effect on a buss, ensure the reverb is 100% wet. Ensure
that you EQ your reverb here too. You can easily wash out your low end if
you leave too much in, or add too much shine making your track feel brittle.
Use a high-pass flter somewhere around 500Hz and a low-pass around 10kHz.
You may also want to keep your presence range tidy with a cut somewhere
between 2–5kHz.
The issues to look out for when adding reverb to a master are frequency
masking and/or build-ups, comb filtering, and loss of clarity. The best way to
avoid all of these is to be subtle. Remember, if you can obviously notice the
reverb in the mix, it's probably too loud.
level, particularly when referencing other tracks. You’ll want to ‘feel’ the
music. But for all the technical aspects of mastering, of which there are
plenty, you don’t need to listen loudly. When you’re metering, working out
how to get your track sitting at your target LUFS level, automating, or
anything else technical, turn the volume down. Give your ears some rest.
This will allow you to maintain focus for longer and prevent you from going
ear-blind.
TASK – Practise turning your track down for periods of time. Think
about what level you need to listen at, and turn it down when you can.
Some mastering engineers prefer to work from stems. Perhaps this is because
they've been sent so many poor mixes to master over the years that
they've learned their life is made easier if they have the stems to work
from, so they can correct any howling mix errors themselves. My view differs.
I believe that good mastering is about a conversation with the
client. As a mastering engineer, you should understand what the end goal is
for the track. And you retain the ability to communicate any mix issues to
the client.
For this reason, it’s recommended to put faster, more energetic songs on the
outside of a record and slower tracks and ballads towards the inside.
You need to balance the lengths of your sides. If you have one side shorter
than the other, it will still be subject to the same accommodations that have
been made for the longer side. Your short side will also be louder.
Avoid wide stereo bass when mastering for vinyl. When cut into a re-
cord, it can cause the needle to jump out of the groove. The problem is com-
pounded if there is phase cancellation between the left and right channels of
the bass. Sibilance also causes additional issues on vinyl. Excessive sibilance
will cause distortion. The excess of high-frequency content at a relatively
high level will result in the stylus being unable to track the groove accurately.
Thus, distortion.
With the decline of physical releases and the rise of digital, the
need to understand the intricacies of the medium almost disappeared. The
technical requirements of DSPs are far more consistent and predictable than
the relationship between record and stylus. This is precisely why you shouldn't
fear mastering your own music. Assuming you're mastering for digital release,
CD, or club play, you can absolutely do it yourself. However, if you're considering
cutting vinyl, maybe consider engaging a professional. Expertise in this area
really can't be learned from a book!
TASK – There’s no real task today. Just a general appreciation for the
subtle nuances involved in mastering for vinyl.
I’m sorry to break it to you, but now comes the hard part! I hope over the past
12 months you’ve been applying your newfound knowledge, day by day, bit
by bit. Now you need to consolidate that knowledge. That probably means
revisiting various chapters that feel less familiar and doing some additional
reading in areas that particularly spark your interest.
This book is a thorough introduction to what I consider to be the key
aspects of producing music the right way. But there is a lot more detail to be
unearthed. I hope your enthusiasm has been sparked and that you’re now mak-
ing music consistently on a daily basis. Good luck, and happy music-making!
Further reading
1 Mayes-Wright, C. (2009). A beginner's guide to acoustic treatment. [online] soundonsound.com. Available at www.soundonsound.com/sound-advice/beginners-guide-acoustic-treatment [Accessed 9 Nov. 2022].
2 Foley, D. (2015). Ideal room size ratios and how to apply the Bonello graph. [online] acousticfields.com. Available at www.acousticfields.com/ideal-room-size-ratios-apply-bonello-graph/ [Accessed 9 Nov. 2022].
3 Keeley, E. (2021). What is dithering? The ultimate guide for beginners. [online] emastered.com. Available at https://emastered.com/blog/what-is-dithering-audio [Accessed 9 Nov. 2022].
4 Mantione, P. (2021). Oversampling in digital audio: What is it and when should you use it? [online] theproaudiofiles.com. Available at https://theproaudiofiles.com/oversampling/ [Accessed 9 Nov. 2022].
Index