Last year I was approached by indie punk band Relentless Turmoil to master their latest album, Loud Isn’t Loud Enough. If you read anything about mastering on internet forums and whatnot, you’ll hear time and time again that it’s a strange black art. For a home-studio producer, it’s supposed to be an elusive task. But like most things in production work, the devil is in the details. Mastering is about creative decisions just as much as technical know-how, and most of the moves are quite subtle.
The process depends on what I’m mastering. For a single song, I consider overall tone, dynamics, and final volume. For an album of songs, I also consider sequencing, gaps between the songs, and relative volume between the tracks.
Sequencing and silence
Loud Isn’t Loud Enough is a unique album. Nine of the eleven tracks are full songs; one is a live track, and the last is a track of random “mess-ups.” The entire thing can be heard in just over ten minutes. The shortest track is 22 seconds, and the longest clocks in at 1:33. There are great riffs and big energy in almost all the tracks. I decided to let some of the tracks overlap, or leave no silence between them at all, to keep the album moving and energetic. The live cut and the “mess-ups” track finish off the sequence.
It’s been a while since I’ve heard an album that serves up a surprise final track. On the 1995 Tea Party album The Edges of Twilight, the final track, Walk With Me, clocks in at 14:20: a five-minute song, a long silence, and an instrumental outro. You’d never hear the instrumental bit if you stopped playback after the song itself; you’d only catch it if you forgot to stop the playback.
I brought a bit of this spirit to the final “mess-ups” track by adding almost 40 seconds of buzzing guitar amp at the top, at a reduced volume. If a listener heard the album at a low level (who would ever do this with a punk record?), they might not hear the buzzing at all and think the album was over, or they’d hear a very faint sound, turn up the volume, and be suddenly assaulted by the screaming band members a minute later. The band loved the idea and the way I executed it.
Loud Isn’t Loud Enough
This is punk music. Punk is by its nature loud; like, your amp goes up to 12 kind of loud. Loudness is about more than the final mastered volume; it’s about the dynamic range: the difference between the loudest moment and the quietest moment in the song. The loudness wars aren’t about the maximum volume as much as they’re about the drastically reduced dynamic range.
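To make the idea of dynamic range concrete, here’s a minimal Python sketch. The peak amplitudes are made-up illustrative values, not measurements from the record; the dB math itself is the standard 20·log10 amplitude convention.

```python
import math

def db(amplitude, reference=1.0):
    """Convert a linear amplitude to decibels relative to a reference."""
    return 20 * math.log10(amplitude / reference)

# Hypothetical amplitudes from two masters of the same song
dynamic_master = {"loudest": 1.0, "quietest": 0.05}   # breathes with the beat
squashed_master = {"loudest": 1.0, "quietest": 0.5}   # heavily limited

def dynamic_range_db(master):
    """Spread between the loudest and quietest moments, in dB."""
    return db(master["loudest"]) - db(master["quietest"])

print(round(dynamic_range_db(dynamic_master), 1))   # 26.0 dB of range
print(round(dynamic_range_db(squashed_master), 1))  # 6.0 dB of range
```

Same peak volume in both cases; the “loud” master simply has far less distance between its quiet and loud moments.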
Relentless Turmoil was clear in their directive – they didn’t want to squash the dynamics. They wanted the music to breathe with the beat and rock out like a classic punk or rock record. I fully agree with this approach, and I don’t believe you need to overdo the loudness in order to have a great energetic record.
Tone, dynamics, and final volume
I referenced a classic Sex Pistols record to achieve a similar tone for the Relentless Turmoil record. I adjusted each track with a little more “oomph” in the bass and a bit less harshness. The overall tone is warmer than the final mixes provided by the band.
Like the subtle moves to the tone, I also made subtle adjustments to the dynamics with overall compression and multiband compression. For the final volumes, I hit the limiter a little harder.
Neither relentless nor turmoil
This album was great fun to work on. I don’t listen to a lot of punk music, but to me, any good music deserves the same attention to detail and care. The songs grew on me, and it’s a really fun album. Take a listen below and name your price on Bandcamp to download it.
If you’re just starting out with your home studio, or looking to upgrade your audio interface, there are many factors to consider to make an informed decision that gets you the best bang for your buck. An audio interface is the traffic cop of your home studio, directing all the physical inputs and outputs.
Using an audio interface is always better for a home recording studio than the built-in soundcard on your computer. An audio interface:
will allow you to connect guitars, synthesizers, and professional microphones
can achieve lower latency, so you don’t hear a delay while recording or playing a software synthesizer
is designed to record and play back at the same time; a soundcard, not so much
With the right feature set for your home studio, you can improve your workflow and focus on the creative rather than the technical. Don’t get me wrong, though – you still have to understand the technical, so here we go.
Number of inputs
The first and potentially most important thing to consider is the number of inputs you have. You need as many inputs as things you’ll be recording at the same time. Interfaces generally come with 2, 4 or 8 analogue inputs. Manufacturers usually state the number of inputs in the model name, and almost all of the time, it’s the first number. For example, a Focusrite 2i4 has 2 inputs (and 4 outputs, but we’ll get to that in a bit). A Presonus AudioBox 44VSL has 4 inputs and 4 outputs.
What’s crucial to understand here is that you only need enough inputs for one recording pass. For example, if you record guitar first, then vocals, then bass, you really only need one input. If you’re recording all three at the same time, you’ll need three inputs. Simultaneous recording not only captures the magic of musicians playing off one another, but recording them on discrete channels also gives you isolated tracks in your software for better mixing. (Note, mic bleed is a topic unto itself.)
Types of inputs
Inputs for microphones are XLR inputs – an XLR mic cable has three large pins in a circle on one end. Often, audio interfaces feature “combo jacks” which can take an XLR (mic) cable, or a standard ¼” cable, like a guitar patch cord. Other inputs may only take a ¼” cable.
Typical XLR mic cable
XLR combo jack
1/4″ guitar patch cable
Inputs are usually designed for one or more impedance levels. The technical definition of impedance doesn’t matter here – just note that microphones, guitars (or any stringed instrument with a ¼” output), and synthesizers all have different output levels, and require three different input settings. Mic level is the weakest of the three, instrument level (for guitars) is higher, and line level (synthesizers, CD players, your mobile music device) is the strongest signal. While most combo jacks automatically detect a mic or line level signal, only some feature a switch or option for instrument level. Activating the instrument level switch (sometimes called Hi-Z), if it’s available on your interface, ensures you get a good signal level from the instrument. This is roughly equivalent to patching your guitar through a DI box.
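As a rough illustration of how far apart these three levels sit, here’s a quick sketch. The nominal dBu figures are typical ballpark values, not specs for any particular microphone or interface; 0 dBu corresponds to 0.775 volts RMS by definition.

```python
def dbu_to_volts(dbu):
    """Convert a dBu level to RMS volts (0 dBu = 0.775 V by definition)."""
    return 0.775 * 10 ** (dbu / 20)

# Rough nominal levels; real-world gear varies widely
print(round(dbu_to_volts(-50) * 1000, 1))  # mic level: ~2.5 mV
print(round(dbu_to_volts(-20) * 1000, 1))  # instrument level: ~77.5 mV
print(round(dbu_to_volts(4) * 1000, 1))    # pro line level (+4 dBu): ~1228.3 mV
```

A mic signal is hundreds of times weaker than a line signal, which is why feeding the wrong level into the wrong input gets you either noise or distortion.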
Input channels may also feature a 20dB pad. This switch cuts the signal by 20dB, which is a significant drop. It’s useful if you’re recording anything particularly loud, like a guitar amp or a drum, and ensures you won’t distort the signal. This feature is usually not found on the least expensive interfaces.
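Since decibels are logarithmic, that 20dB cut isn’t a 20% trim; it knocks the amplitude down to one tenth. A quick sketch of the standard dB-to-ratio math (nothing interface-specific here):

```python
def db_to_amplitude_ratio(db_change):
    """Convert a decibel change to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db_change / 20)

print(db_to_amplitude_ratio(-20))  # 0.1: the pad passes one tenth of the amplitude
print(db_to_amplitude_ratio(-6))   # ~0.5: a 6dB cut roughly halves the amplitude
```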
Some interfaces also include 5-pin MIDI input and output. While not strictly part of the audio system of your home studio, this can save you from investing in an additional USB MIDI interface if you have some older synthesizers you want to use. Most modern synthesizers and MIDI controllers connect directly to your computer via USB.
Gain and preamps
When you plug in a microphone, the gain knob controls the volume of the input and engages the preamp. Ideally, these knobs are laid out beside each input, so it’s easy to know which knob controls which input. For most home studio setups, the preamps in modern audio interfaces are low-noise and transparent sounding. While some interfaces feature premium quality preamps (for a premium price), keep in mind that you should also have a premium microphone and an acoustically treated recording environment to really take advantage of them.
Phantom Power, or 48V
Every audio interface will have a switch or button for 48V power, also known as Phantom Power, which is required for condenser microphones. Just remember to always switch on Phantom Power after plugging in your microphone, and to switch it off before unplugging your mic. Phantom Power will not affect your dynamic microphones.
Some interfaces put the switch for 48V on the back of the interface. Ideally, the switch is on the front and has a light to indicate that it’s on. Some may even have the switch as part of the software interface, which in my opinion, is the least desirable place for it.
Outputs
Most audio interfaces will have balanced TRS (tip, ring, and sleeve) connections for outputs. A TRS cable looks similar to a ¼” patch cord (an unbalanced TS cable), but it has an additional ring on the connector pin, indicating that it can carry a balanced mono connection or a stereo signal, like your headphone cable. Generally, balanced TRS connections are less susceptible to picking up hum or noise in your signal path over longer distances.
Outputs are normally reserved to connect your studio monitors. This takes two outputs – one for the left speaker, one for the right. Interfaces with more than one output pair can be used to connect additional speakers, or connect to a desktop mixer. Connecting to a second set of speakers can be useful in testing your mixes.
Don’t discount the value of a big honking volume knob. Some interfaces feature this, and personally, I think it’s a great value add. Volume of your playback is one of the most frequently used controls in your home studio, and sometimes you need to adjust it quickly; you don’t want to be mousing around to find the control. Some interfaces also feature a mute button, which is ideal (i.e. you don’t want your monitors sounding while you’re recording from a microphone).
Digital inputs and outputs
Some interfaces also include digital inputs and outputs. These are used if you have a device with a corresponding output (S/PDIF or optical). The optical (sometimes called ADAT) signal can carry 8 discrete channels. For example, you could expand your two-input setup with an 8-channel preamp with an optical output, for a total of 10 microphone or instrument inputs. The S/PDIF connection only carries a 2-channel stereo signal, and is usually found on synthesizers or CD players.
Sometimes audio interfaces are marketed as having 10 inputs, while only two mic inputs are visible. That’s because the manufacturer is also counting the 8 digital inputs via an optical connection.
Zero-latency monitoring
Virtually all interfaces have an option for zero-latency monitoring. Generally, if you want to add a software reverb for your vocalist while recording, you’ll introduce latency while the computer processes the signal, applies the reverb, and sends it back out to be heard. A zero-latency switch (often called direct monitor, or input monitoring) allows you to hear the input in real time, without latency, along with the computer playback. Some interfaces allow you to adjust the relative levels of the input and the playback material.
Some interfaces include on-board digital effects processing (DSP). This allows you to record with very low latency and still apply a reverb or other effect to the monitored input. In my opinion, this is only necessary with older or very budget-level computers. With most modern systems and non-DSP audio interfaces, you can get latency down to a few milliseconds and use a software reverb; it’s best to use a plug-in that doesn’t tax the CPU for this. It’s important to note that the reverb in this case isn’t recorded; it’s only used for monitoring, so you can still use a better reverb plug-in after the recording is complete. Many singers prefer a bit of reverb in their headphones while recording.
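That “few milliseconds” figure comes straight from buffer arithmetic: the latency contributed by the audio buffer is the buffer size divided by the sample rate. This back-of-envelope sketch ignores converter and driver overhead, which real round-trip measurements would add.

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Latency contributed by one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# Typical DAW buffer settings at a 44.1 kHz sample rate
print(round(buffer_latency_ms(128, 44100), 1))   # 2.9 ms: fine for monitoring a software reverb
print(round(buffer_latency_ms(1024, 44100), 1))  # 23.2 ms: a noticeable delay while tracking
```

Lowering the buffer size reduces latency at the cost of CPU headroom, which is exactly why a low-CPU reverb plug-in matters when monitoring at small buffers.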
Many of the two-channel interfaces are powered by their USB connection to your computer, making them ideal for a mobile recording studio. The larger interfaces, with four or more inputs, usually require a separate power supply in addition to the USB connection, and have a power switch. This is something to consider if you’re planning to go mobile with your studio.
The fact is, there are a lot of interface options out there, especially in the two-channel range. One way to decide which one to buy is to look at the bundled software. Many manufacturers include a light version of recording software, like Cubase or Ableton Live. If you prefer one software choice over another, but you haven’t invested in it yet, sometimes getting the light version with your interface gives you a discount when upgrading to the full version. You can check with the software companies to find out more, or download trial versions if you haven’t settled on one yet.
Ultimately, I can’t tell you what interface to buy. If you want to know which one I have, and which ones I’ve had in the past, check out my blog post about the history of my home studio. You have to assess your needs and come to your own conclusion. Hopefully, this article has armed you with the knowledge to make a good choice.
Do you see anything on an interface that I haven’t covered here, or have any questions? Comment below and let me know, and I’ll get back to you. I also accept heaps of praise and accolades.
Last year (late 2015), Adi contacted me with a request to have his songs produced as an album. We had a brief meeting during which I got to know Adi a bit more, and really saw his personality as a generous, people-loving individual, and how that shone through in his songs. We came to an agreement, and began work shortly thereafter. The plan was to produce 12 songs for an album.
Adi would send me his demo recordings, along with lyric and chord sheets. For most recordings, I would set up a session in Sonar with a simple drum loop (of my own creation, of course, since I am a drummer). He would then come over and record guitar and vocals to the beat. Often he would ask for extra “guitar licks” tracks and/or vocal doubles.
The producer brain
For some of the songs, I would make melodic suggestions for the guitar licks, or arrangement ideas for when to include instrumental breaks. I also added drums, bass, piano, strings, and other instruments using my keyboard and MIDI. Of all the aspects of producing, I enjoy this arranging process the most. It takes a careful listen to each song, finding creative ways to supplement the original performance, and at the same time, taking it up a notch. My piano and bass parts were often quite understated, providing a foundation for Adi’s performance without overpowering it. I think this is a key point for any successful production.
For one song, Midnight, Adi had written a lovely arpeggio pattern on the guitar for the intro. The rest of the song rocked out. I suggested a break in the middle where he would repeat the intro pattern at tempo. This served to open the song up and provide a breath before the final chorus.
Adi had a neat riff and chord progression for a song, but no lyric. We worked together as I made chord suggestions (on piano) and a key shift for the bridge. Adi worked out lyrics about racial diversity and inclusion, with some tweaks from me. We share the songwriting credit for Colours.
Adi had written Who I Am as a medium-tempo guitar rocker with harmonica. He wanted to try it as a piano ballad, so I took his chords and developed a piano, strings, and drums arrangement. We had to re-record his vocals, as the rocker style didn’t really fit the more ballad-esque piano arrangement. We also forwent the harmonica in favour of a cello solo. I think this song helps open up the variety on the album.
Adi wasn’t entirely happy with his song Eden. I made a suggestion for chord changes in the chorus, which opened up the song to sound bigger. Interestingly, this song is composed almost entirely of major chords (only one minor chord). In some ways, it’s my favourite track on the album, as it has elements of progressive rock.
Mixing, mastering and fine-tuning
I spent a lot of time going through each song with a fine-tooth comb, fixing notes in the MIDI tracks and tightening up the timing. For some, I used a fixed tempo grid to quantize all the tracks, and for others, I used Adi’s guitar recording as a tempo map. Since they were mostly recorded to a fixed drum loop, they were fairly consistent, but minor tempo variations still occur, and sometimes it’s better to embrace them rather than forcing them to fit a fixed tempo.
I also mixed and mastered the songs. I wanted punchy, clear drums and bass, and forward vocals to ensure all the lyrics were well heard. My new best friend became Native Instrument’s Transient Master.
Ironically, the sonically simplest song, She Now Flies, presented the greatest mixing challenge. It’s actually easier to mix when you’ve got six or more instruments in the mix, with guitars, piano, bass and drums, than to mix a song with only guitar and vocals.
For the mastering process, I suggested to Adi that we each come up with a sequence for the album, then compare notes. He then arrived at a sequence that was a combination of my list and his. I made minor tweaks to the EQ of some songs, and applied the final volumes. There’s some finesse here too, as I didn’t want the softer ballads mastered to the same volume as the rockers. Hopefully someone out there still listens to complete albums!
It’s been an absolute joy working with Adi on this record. He had a very balanced approach to owning his songs and being open to suggestions for changes. As the producer, I would always allow Adi the veto power to reject any suggestion I made. As it turns out, he took most of them. You can’t be too precious about your ideas, and you have to understand that the vision for the record should be the artist’s, not the producer’s.
I’ve been playing drums and percussion for a while for singer-songwriter Adi Aman, aka Beige Shelter. The band name comes from Adi’s feeling that the colour beige is neutral and can express a wide range of emotions, and music being the place of shelter where he can best express them.
I started my collaboration with Adi as his producer. We’ve worked through twelve songs for his debut album release later this year. He expressed an interest in performing more to promote the album and get his music heard. I volunteered to back him up on percussion. So far I’ve played a drumkit when it’s available, and also cajon and shaker. In the future, we hope to rope in a bassist and lead guitarist.
Our gig at Page One cafe in downtown Toronto felt like our first “real” show. We had a great turnout, thanks to our friends and to FXRRVST and Madison Galloway, our supporting performers. I’ve branded my look, always performing in a pressed shirt and bowtie.
So far, performing with Adi has been a high. His songs are very well written (we even co-wrote one of them) and his performances are passionate and energetic. Together I think we make a good team, as I also give him tips on improving his performance and creating interesting song arrangements.
There’s nothing quite like getting on stage and putting on a show of great original songs. You get into a “zone” where the world around you fades into the background and, for a moment, it’s all about the music.
I had previously heard the idea that life on earth may have originated from some organic goo deposited on our humble little planet by a meteor or comet. Recently I found a YouTube video explaining that while the theory is indeed speculative, it has the perfect name: “Panspermia.”
It’s one thing to write a song simply about a speculative theory, but that could come across as a high school essay or research paper. To be a good song, I’d have to inject my own commentary or reaction to it. I did this in a Facebook post, positing that every living thing in the universe is united by the same goo, and that makes us all Gooians.
A short while later, I jotted down the lyrics for the chorus. A few months later, I conducted additional research online to generate keywords, making sure I captured proper terms, scientifically speaking. I also happened to attend a public lecture at the University of Toronto on the topic of Planetary Habitation on the day I finished the lyrics. At the lecture’s reception, I approached the speaker, Dr. John E. Moores, Assistant Professor of Space Engineering at York University, who agreed to review my lyrics for any scientific faux-pas. He followed through, and suggested only one minor change, which I took. Dr. Moores also introduced me to the “nerdcore” genre.
I wrote the music bed (using only piano) and melody in one day. The verse melody suggested lines of lyric that lasted a little more than two measures of 4/4 time. So, I introduced a two-measure loop for the verse that was made up of one 4/4 measure and one 5/4 measure. This introduced a very quirky and offbeat rhythm to the song. I then proceeded to layer on the bass sound, the synth pads, and other sounds to fill in the music bed. The original piano track was archived.
I presented the song at the monthly Songwriter’s Cafe Meetup Group, and it was generally liked. One group member commented how the song is a good, clear, explanation of the theory, and has educational value. Many people in the group felt that the 4/4 + 5/4 pattern was too jarring for no particularly good reason. So I revisited the pattern and tried out a 4/4 + 6/4 pattern, which was still a little offbeat, but easier to digest due to its symmetry. I decided it did in fact work better for the song.
After discovering the fascinating podcast Song Exploder, where a song is deconstructed and examined part by part, I stumbled upon an episode featuring composer Ramin Djawadi and a breakdown of the Game of Thrones theme music. Hearing and learning the individual parts prompted me to arrange my own cover version, similar to the original in its groove, but with electric guitar and synth strings as points of departure. I also ended up discovering several other great versions of the theme on Soundcloud, my favourites including industrial, prog rock, and smooth jazz versions.
I had the opportunity to be interviewed by one of the journalists at The Scope at Ryerson last week. Alexia Kapralos hosts her weekly podcast, The Plug-In, around the latest in Canadian and international rock music. I sat with her at the Ryerson studios for a short interview on my musical journey over the year and Song Talk Radio. She also featured my song “One Great Mistake” on the episode. It starts at about the 8:50 mark of her show. Thanks Alexia!
We practiced and developed our individual parts for about 90 minutes, then recorded several takes. Once again, I recorded both audio and video, and captured the vocals as a separate track and mixed them in during post-production.
All in all it was a fun afternoon, and I feel honoured to play with such talented guys.
Dokter Nomi, dance-pop music virtuoso, approached me several months ago with a collaboration offer for his song Love is a Virus. He had the vocal track already recorded and had a couple of bed tracks already completed by other producers. This is the way he typically works, since he doesn’t play any instruments. He comes up with great lyrics and a melody and then collaborates with a producer to create the music.
I started with only piano to compose the chord structure. Once I had a chord pattern I was happy with, I then layered on bass, drums, and synths to complete the track. The piano was no longer a part of the song, but it served as a template to structure the other instruments. We presented it at a Songwriter’s Cafe Meetup, and I made several more tweaks afterwards, mostly with tightening up the arrangement.