Gain staging, or why you should record quietly

The levels of your recordings are of utmost importance while you are recording, mixing, and mastering. In other words, it really matters how loud your stuff is. Levels affect the quality of your finished product and the ease with which you can manage multiple tracks and get the best sonic results.

The trouble is, it can be confusing at times to know how loud is appropriate. How loud should we record? How loud should we mix? What about the ever elusive mastering stage? That’s what I’m here to explain.

Two important facts

When I first started digital recording, I was convinced that recording to a level of 0 dB was the “sweet spot.” The problem with that idea is its lack of context: 0 dB is a sweet spot only if you’re recording to an analog system and measuring average volumes. The first fact to realize is that analog and digital volume measurements are very different.

A typical analog meter measures volume from silence up to a maximum a few dB above 0 dB. A digital meter measures volume up to an absolute maximum of 0 dB. If you line up the two meters, you can see that the “sweet spot” of 0 dB in analog sits at about the same level as -18 dB in digital.

digital vs analog meters

But the nature of recording hasn’t changed from analog to digital. You still want a reasonably strong source signal recorded at a conservative level. The difference lies in how the measurement systems work. In analog, volume is measured in dBVU (decibel volume units). In digital, it’s measured in dBFS (decibels relative to full scale).

Analog meters show average levels (also called RMS levels). Digital meters show peak levels. In the digital realm, peak levels are critically important for one simple reason: if your signal exceeds 0 dBFS, digital distortion is introduced. This is not the warm, fuzzy distortion of analog tape or tube saturation. Digital distortion is nasty and not musical in any way. In other words, you want to stay well clear of peak levels reaching that 0 dBFS mark.
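To make the peak/average distinction concrete, here’s a minimal Python sketch (using numpy and a made-up test tone, so purely illustrative) that measures the same audio both ways:

```python
import numpy as np

def peak_dbfs(samples):
    # Peak level, the way a digital meter reads: loudest single sample vs. full scale (1.0)
    return 20 * np.log10(np.max(np.abs(samples)))

def rms_dbfs(samples):
    # Average (RMS) level, closer to what an analog VU meter shows
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

# One second of a 440 Hz tone at half of full scale
t = np.linspace(0, 1, 44100, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

print(f"peak: {peak_dbfs(tone):.1f} dBFS")  # about -6.0 dBFS
print(f"RMS:  {rms_dbfs(tone):.1f} dBFS")   # about -9.0 dBFS (a sine's RMS sits 3 dB below its peak)
```

Real performances have a much wider gap between peak and RMS than a steady sine wave, which is why a track can average around -18 dBFS and still throw peaks near -6 dBFS.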

Here’s a sample vocal recording I did where the peak levels reached about -12 dBFS.

And here’s another sample where the peak levels exceeded 0 dBFS.

This brings us to our second important fact. A finished, mastered song peaks very close to 0 dBFS. This is how we hear songs on CD, on the radio, and on streaming services. They are loud. But this loudness is the result of mastering, not recording. All professional recordings are captured and mixed at much lower levels. Part of the mastering process boosts the volume to get close to 0 dBFS without going over.

Recording

So you’ve armed a track for recording in your DAW (digital audio workstation) software. Note that in most DAWs, when a track is armed for recording, the meter shows the level of the incoming/recorded signal. When the track is not armed, the meter shows the playback level; this can be lower or higher than the recorded level depending on fader position and/or plug-ins. There may also be a setting for meters to show average levels, peak levels, or both. I prefer to have all my meters showing both.

When you’re performing (guitar, vocal, whatever) pay close attention to the peak levels. Most meters will “sticky” the peak level, meaning they show a little mark where the highest peak occurred and keep it there until a higher signal comes along. This is important because you want to make sure the highest peak doesn’t get anywhere close to 0 dBFS. I usually shoot for peaks around -12 dBFS to give myself enough room in case something does peak at -10 dBFS or even -6 dBFS.

Only three things in the analog world determine the recorded level: the volume of your performance (or the output level of your synth or amp), your distance from the microphone, and the gain setting of your preamp (i.e. the gain knob on your audio interface). That’s it. If you exceed 0 dBFS in your recording (referred to as “clipping”), nothing in the digital realm can fix the distortion. You have to redo the take at a lower level by reducing the level of the audio before it hits the digital converters.

If the signal sounds too quiet in your headphones, increase the headphone level. If the waveform looks small on the screen, zoom in to see it better. You can always monitor at whatever level you like; just don’t record too loud.

Mixing and Summation

Avoiding digital distortion isn’t the only reason to record quietly. As you overdub track after track, each signal adds level to the mix. Compare the peak levels of the individual tracks against the summed master channel below; the master peaks much higher.

mixing summation

If the level in your master channel is getting too high (sometimes referred to as “hot”), reduce the levels of your individual tracks. In a project with several or many tracks, each track can play back at a much lower level than it was recorded at, and the sum will still be plenty loud.
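Here’s a quick sketch of that summation effect, using three hypothetical tracks that each peak around -12 dBFS:

```python
import numpy as np

def peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 44100, endpoint=False)

# Three made-up tracks, each peaking near -12 dBFS (amplitude 0.25 of full scale)
tracks = [0.25 * np.sin(2 * np.pi * f * t + rng.uniform(0, np.pi)) for f in (110, 220, 330)]

for i, track in enumerate(tracks, 1):
    print(f"track {i} peak: {peak_dbfs(track):.1f} dBFS")  # about -12 dBFS each

master = sum(tracks)  # the master bus simply adds the signals together
print(f"master peak: {peak_dbfs(master):.1f} dBFS")  # noticeably hotter than any single track
```

The more tracks you stack, the hotter the sum gets, which is exactly why conservative individual levels keep the master channel out of trouble.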

Plug-ins like compressors and EQs are optimized to work at conservative levels, usually around -18 dBFS or -12 dBFS. The specifics here matter less than simply making sure your tracks are playing back at a reasonable level.

Sonic Clarity

Recording and mixing at conservative levels can result in mixes that are more dynamic, open, and detailed. Plug-ins and your DAW have room to breathe, and can run all their complex algorithms in the zones they’re optimized for.

In the final mastering phase, a limiter is applied to achieve the final volume. A limiter is a plug-in that limits the volume to a prescribed level (usually just shy of 0 dBFS) without letting anything clip. For most mixes, the limiter can apply several decibels of gain, making the mastered song much louder than the mixed version. This is the only time in the production that any level gets close to 0 dBFS: the final glossy coat that finishes the recording and prepares it for release to the world.
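The arithmetic behind that final boost is straightforward. A rough sketch, assuming a hypothetical mix that peaks at -6 dBFS and a ceiling just shy of full scale:

```python
mix_peak_dbfs = -6.0   # where the finished mix peaks
ceiling_dbfs = -0.3    # limiter ceiling, just shy of 0 dBFS

# The clean gain available before any peak reaches the ceiling
available_gain_db = ceiling_dbfs - mix_peak_dbfs
print(f"clean gain available: {available_gain_db:.1f} dB")  # 5.7 dB

# A limiter typically pushes a few dB beyond this, catching (limiting) the
# peaks that would otherwise clip -- that's where the extra loudness comes from.
```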

How to use the Circle of Fifths to write songs

I know many of you don’t care for music theory. It’s clinical, it’s boring, and it sucks the soul out of songwriting. Well, news flash: you’re using music theory whether or not you intend to. For myself, I know my theory pretty well, as I learned it at a young age. I couldn’t tell you if I’m playing in a Mixolydian or Phrygian mode, though, except that it’s fun to throw “Phrygian” into normal conversation.

Case in point: the Circle of Fifths (the Circle). Download a hi-res copy here. I’ve been asked before if a certain chord progression is an example of the Circle of Fifths. That question misses the point. The Circle of Fifths isn’t a technique like modulation or chord substitution. It’s a way of understanding the essential elements of western music: the notes, the intervals, the chords, and the relationships between them.

It’s the relationships between chords that make a chord progression. Referring to the Circle of Fifths can help you discover interesting chord progressions, particularly when you’re stuck for what the next chord wants to be.

Just like clockwork

The Circle looks much like a clock. Just like there are 12 hours on a clock, there are 12 notes on the Circle. (If you haven’t downloaded a copy yet, you’ll want to so you can refer to it as you read the rest of this article.)

Moving clockwise, each note is a fifth above the last one. A fifth, as we know, is the third note of a major or minor triad (3-note chord), and the fifth note of any major or minor scale. For example, the C-major chord is C-E-G. The G is a fifth above C, and one “hour” past C on the Circle of Fifths. Similarly, an A-major chord is A-C#-E. The E is a fifth above A, and one segment after A on the Circle. This pattern holds true for any starting point on the Circle of Fifths.  And it comes full circle; if you start on C and go up a fifth 12 times, you’ll be back to C.
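Because a fifth is seven semitones, you can generate the whole Circle by repeatedly adding 7 and wrapping around at 12. A minimal Python sketch (with enharmonic spellings simplified):

```python
NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def circle_of_fifths(start="C"):
    # Walk the 12 notes clockwise, one fifth (7 semitones) at a time
    i = NOTES.index(start)
    return [NOTES[(i + 7 * step) % 12] for step in range(12)]

print(circle_of_fifths())
# ['C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#', 'Ab', 'Eb', 'Bb', 'F']
# ...and the 13th step lands back on C, coming full circle
```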

But the Circle can also be used to represent chords. The outer circle refers to major chords, and the inside circle to their relative minor chords. Remember, the relative minor is always the VI chord in a major key.

For example, in the key of C-major, the 6 major and minor chords are:

I Chord: C-major
II Chord: D-minor
III Chord: E-minor
IV Chord: F-major
V Chord: G-major
VI Chord: A-minor

How many songs in C use a variation of these 6 chords? Many popular songs use only 3 or 4 of them. Now look at the Circle of Fifths. The chords touching the C-major are the other five major and minor chords in the key of C major.

Just like with the notes, this holds true for whatever key you’re in, or wherever your base starting point is on the Circle. In the key of G, all the chords touching the G correspond to the other major and minor chords in that key.
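That neighbor rule is easy to verify programmatically. A sketch, building on the circle from the earlier example: the majors in a key are the key itself plus its two neighbors (the IV and V), and the minors are their relative minors on the inner circle, three semitones below each major root.

```python
NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
CIRCLE = [NOTES[(7 * step) % 12] for step in range(12)]  # C, G, D, A, ...

def relative_minor(major_root):
    # Inner circle: the relative minor sits three semitones below the major root
    return NOTES[(NOTES.index(major_root) - 3) % 12] + "m"

def chords_in_key(key):
    # The six major/minor chords of a major key: the key and its circle neighbors
    i = CIRCLE.index(key)
    majors = [CIRCLE[(i - 1) % 12], key, CIRCLE[(i + 1) % 12]]  # IV, I, V
    minors = [relative_minor(m) for m in majors]                # II, VI, III
    return majors + minors

print(chords_in_key("C"))  # ['F', 'C', 'G', 'Dm', 'Am', 'Em']
print(chords_in_key("G"))  # ['C', 'G', 'D', 'Am', 'Em', 'Bm']
```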

There’s also a great YouTube video that explains this well, specifically for guitarists.

Get experimental

Developing a chord pattern based on the six major or minor chords is tried and true. Even if you use the I, IV, and V chords in your verse and chorus, you could try starting with the II, III, or VI chord for your bridge. All the notes you’re using belong to the scale you’re in—i.e. you’re never going out of key.

Raise your hand if you’ve ever used a seventh chord. That’s the one that sounds bluesy or jazzy. It’s called a seventh chord because it’s a major chord plus a flattened seventh note. That is, the seventh note of the scale is taken down by one half-step or semitone. That note is out of key, technically, and it sounds Phrygian awesome (see? It works).

The point here is that going out of key is cool. It creates musical interest, adds tension and can really open up a song.

So how does this relate to the Circle of Fifths, you may be asking? Say you’re writing a song in the key of C-major, and you’re using the tried and true chords—the ones on the Circle that touch C. If you want to extend a little, say for your bridge, or heck, the third line of your verse, try a chord that’s “two hours away” from C. So, try a D-major, B-minor, Bb-major, or G-minor. Just like the seventh chord, these chords have one note that’s out of the base key signature or scale. The other two notes of each chord remain grounded in the base key signature.

If you want to experiment further, try the chords that are “three hours away” from your I-chord. Once you start introducing chords that have two notes out of key, things start sounding weirder or more dissonant. The trick here is to stay grounded in your home base. It’s fun to travel to strange and exotic places, but it’s reassuring to come back home soon.

And of course, this holds true right around the clock. You can start on any chord and you’ll have five other chords that are guaranteed to work in consonant harmony with it. Try chords that are 2 or 3 hours away from your base, and things can get interesting.
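If you like to explore this kind of thing systematically, here’s a small sketch reusing the helpers from the earlier example to list the chords at any distance around the clock:

```python
NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
CIRCLE = [NOTES[(7 * step) % 12] for step in range(12)]

def relative_minor(major_root):
    return NOTES[(NOTES.index(major_root) - 3) % 12] + "m"

def chords_hours_away(key, hours):
    # Majors sitting `hours` steps either side of the key, plus their relative minors
    i = CIRCLE.index(key)
    majors = [CIRCLE[(i - hours) % 12], CIRCLE[(i + hours) % 12]]
    return majors + [relative_minor(m) for m in majors]

print(chords_hours_away("C", 2))  # ['Bb', 'D', 'Gm', 'Bm'] -- one out-of-key note each
print(chords_hours_away("C", 3))  # ['Eb', 'A', 'Cm', 'F#m'] -- one or two notes out of key now
```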

A fine example

The Beatles were masterful at creating interesting changes in their songs without compromising catchiness. In other words, they did some weird stuff without making it sound weird.

Take A Hard Day’s Night as a fine example. The A part starts with G-major, F-major, and C-major. Even though it starts with a G-major, the section is clearly in the key of C-major. The G and F are on either side of C in the Circle of Fifths. It’s very consonant (i.e. not dissonant). In the second half, however, things start to shift. They introduce a D-major chord (“things that you do”). The D is “two hours away” from C. They quickly return to the G-C-G pattern they introduced at the top to finish off the section. It’s like you’re walking along the curb of a street, and just for a moment, you step on to the road (maybe into a puddle), then right back on the curb.

In the B section (“when I’m home…”) they shift to a B-minor. This is a great contrast for two reasons: it’s “two hours away” from the base C-major, and it’s the minor flavor (it’s also Paul taking the lead vocal from John). By the end of this section, they’ve returned to the D-major chord that we’ve heard before.

The twists are both subtle and noticeable. There’s no mistaking the B section for anything else when it comes in. They always return to base fairly quickly.

Light a new path to songwriting

I co-wrote a song called Light Your Way with Adi Aman for our band Beige Shelter. We released it in May for Mental Health Awareness month.

Download the chord/lyric sheet here.

The verse, pre-chorus, and chorus all remain in the E-major key with no dissonant chords. The pre-chorus introduces the F#m chord, which wasn’t used in the verse. For the guitar solo section and bridge, we flipped over by “3 hours” to the C#-major chord. By the end of the bridge, we’re back on B-major, which is perfectly consonant with returning to E-major for the final choruses. Sometimes when you step that far from your base key signature, it can be tricky to get back to base.

Writing with purpose

I don’t deny that sometimes you just stumble upon some magical moment when you’re writing; you don’t know why it works, but it sounds cool and different, and you go with it. For myself, I’ve been trying to embrace my intuition for writing more recently. Knowing the theory doesn’t destroy your intuition; in fact, I think it strengthens it. If you practice writing with purpose enough, the reasons behind your snap decisions will fade into the background, but the decisions themselves will get better, and you’ll feel more confident that they’re right. Keep on writing.

The (not so) secret ingredient to making your mixes sound good

When you start creating mixes, you quickly realize that the low and low-mid frequencies are problematic. Without care and attention, they can sound muddy, boomy, and unclear. Mid- and high frequencies are much easier to manage—you can have multiple instruments and voices taking up the same sonic bandwidth and still hear everything clearly. Try this in the low frequencies and it’s a mess.

We feel low frequencies in our bodies. It’s where the punch, the groove, and the drive of a track live. It’s critical to your mix that instruments in the low frequencies be clear, full, and bold.

As with many mixing decisions, it’s not about increasing the level or power of bass elements; rather, it’s about eliminating the stuff that gets in their way.

The Secret

Enter the ubiquitous high-pass filter. The name is fairly self-explanatory; it filters the sound so that only high frequencies pass through. The high-pass filter is sometimes called a low-cut filter. It’s easy to see how it works with a diagram. The horizontal axis represents frequency, and the vertical axis represents amplitude, or volume.

a typical high-pass filter

In this example, the filter is set at 80 Hz, which means everything under 80 Hz will be reduced in volume—just follow the slope of the line. For reference, 80 Hz is about the same as the low E string on a guitar. The low E string on a bass is one octave lower, about 40 Hz (that’s E2 and E1 on the piano).
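Those reference pitches fall out of the standard equal-temperament tuning formula, where each octave doubles the frequency. A quick check in Python (assuming the usual A4 = 440 Hz reference):

```python
def note_freq(midi_note):
    # Equal temperament, tuned to A4 (MIDI note 69) = 440 Hz
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(f"E2 (guitar low E): {note_freq(40):.1f} Hz")  # ~82.4 Hz
print(f"E1 (bass low E):   {note_freq(28):.1f} Hz")  # ~41.2 Hz, one octave down
```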

For most instruments, including the human voice, there’s very little of value below 80 Hz. The bass guitar and the thud of the kick drum usually live between 40 Hz and 250 Hz. So, the general wisdom is to high-pass everything except the bass and kick drum at 80 Hz or higher. The kick and bass will then have room to be heard clearly, which usually adds punch and groove to your mix.

Every single digital audio workstation (DAW, the software you use to record audio) has a high-pass filter. Usually, it’s a feature of your EQ (equalization) plug-in.
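If you want to experiment with one outside your DAW, here’s a minimal sketch of an 80 Hz high-pass in Python using scipy (the cutoff, filter order, and test signal are just illustrative):

```python
import numpy as np
from scipy import signal

def high_pass(audio, sample_rate, cutoff_hz=80.0, order=2):
    # Butterworth high-pass: attenuates content below cutoff_hz, passes the rest
    sos = signal.butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return signal.sosfilt(sos, audio)

# Made-up signal: 50 Hz rumble plus a 440 Hz tone
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

filtered = high_pass(audio, sr)  # the rumble is attenuated; the tone passes through
```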

How high is your high pass?

The next question, then, is how high should you set your high-pass filters? That depends on your material. The rule of thumb is to dial up the filter during playback on a track until it starts sounding thin, then back it down a bit. If you high-pass too much on guitars, pianos, and vocals, you could rob the mix of warmth and body. If something sounds thin in solo, it could be just right in the mix; never judge your settings when listening to a track in solo—it only matters if the mix sounds good.

High-pass filters can also be used on bass and kick drum, but normally they are set very low. For dance music, you may want to include and emphasize the sub-bass (below 40 Hz). For most rock, pop, country and folk tracks, I recommend minimal high-pass filtering on the bass and kick drum. Again, what matters is the kind of sound you’re after. Mix with purpose and you’ll get to where you want to be much faster.

Microphones with high-pass filters

Some microphones have a switch for a high-pass filter. Usually it looks like this, where the crooked line indicates the “on” position for the filter. The filter is usually fixed at 80 Hz.

Microphone showing hi-pass filter switch

If you’re recording vocals or guitar (where there will be a bass in the arrangement), it’s advisable to use the high-pass filter on your microphone. Eliminating low frequencies you know you’re not going to need during the recording phase allows you to record a more consistent, louder signal.

Always mix with purpose

Finally, it’s easy to get carried away and high-pass everything indiscriminately. Mix with purpose and subtlety instead, and the overall effect can still be dramatic. In other words, a subtle high-pass filter on 8 tracks, when they are all mixed together, can make a big difference.

I’ve personally found using high-pass filters to be the one technique that is universally effective on just about any mix, from a simple voice-over narration to a full band. Next to volume, it’s the move that makes the biggest difference to your mixes—in a good way.

If you’ve never mixed using a high-pass filter, try it out on an old mix and see if it doesn’t clear out the muddiness and open up the sound. I’d love to hear your thoughts—leave a comment below.

Light Your Way collaborative songwriting

As part of the indie rock band Beige Shelter, we were approached to write a new song for a youth gang prevention event. Although we declined to perform for the event, we realized our new song was also a great message for mental health awareness and conversation.

My friend and Beige Shelter frontman Adi Aman had written a song a few years ago with a message to help out a friend going through some tough times. Adi sent me a rough recording and his lyric/chord sheet to play around with. In particular, he said he wasn’t very happy with the melody. Before I even got a chance to look at it, he followed up with a revised lyric that was more poetic and a bit more abstract.

The rewriting process

At the time, we were still involved in the youth prevention event, and I took this angle when rewriting the song. I thought a more direct lyric would be more effective in reaching young people. I also wanted to highlight the aspect of reaching out for help and getting it from friends and family. This, to me, is the cornerstone of good mental health—people need to be willing to come forward and talk to someone they trust, and their communities need to be willing to listen, empathize and help as best they can.

I printed out Adi’s lyrics and chords and sat at my piano to work on the song. Starting with small edits, I quickly found myself rewriting entire phrases. I realized that, using Adi’s lyrics as springboards, I could develop a much more direct song and marry a melody to the words more easily. This is the sort of lyric I never would have come up with on my own, but using Adi’s original take as inspiration gave me the direction and focus I needed. Here are the working pages I used:

Page 1
Page 2
Page 3

I took care to develop a simple, flowing chord progression and catchy melodies. It was amazing how much mileage I could get from using C, G, F, and Am by playing around with the time between each chord change. I introduced a new, unheard chord to start the pre-chorus section. In other words, the Dm had not been heard in the song yet, but the rest of the pre-chorus chords were also used in the verse. This, along with the melodic centre change, was enough to give the listener a sign-post that the pre-chorus was a new section. For the chorus, I returned to the base C major chord but lifted the melody again.

Back and forth

I presented the revised song to Adi and he liked it very much. He had a few revisions for some of the chord changes, especially the unusual chords I used to end the chorus. Adi felt keeping it simple would be more effective, and once he sang it with his rich voice, I was compelled to agree.

Our bass player Tom made a suggestion for a lyric change at the end of the second verse:

Me: It goes “For your grief, but you know…” which is kinda cheap. We need a good word that rhymes with “grief.”

Tom: Believe.

Adi (singing): For your grief, but believe…

Me: And that flows great into the pre-chorus lyric “You have got the strength to carry on…” — well done, Tom!

Feedback from other songwriters

I presented the song at a Songwriter’s Cafe Meetup by playing back the recording from our latest rehearsal. Members found the song to have an inspiring message without being didactic, and with a good flow to the chords and melody.

We adopted two points from the group to improve the song:

  1. Revised the chorus lyric “And you think that there’s no way to see the light” to “And you think there’s no way out of your plight” so that the word “light” isn’t featured twice in the chorus.
  2. Extended the ending to repeat the main hook “We’ll be lighting your way” a few times before finishing the song.

Recording and Producing

We wanted to release Light Your Way as a single during the CMHA (Canadian Mental Health Association) Mental Health Week between May 1 and May 7. I knew this would be a tight schedule to get it arranged, recorded, mixed, and released.

During our first recording session, we were still finessing lyrics and making small changes to the chords. I used a rehearsal recording to set the tempo for a drum loop. I recorded Adi playing his acoustic guitar and then recorded his vocals.

Tom recorded a bassline at his home studio and sent it to me. Meanwhile, I developed a drum track and added some piano comping. Our lead guitarist, Karan, was busy with final exams and couldn’t commit to the recording session. I asked singer-songwriter and guitarist Paul Vos to contribute lead guitar based on some noodling I had done on my keyboard. Paul did an awesome job with the last minute crunch and played the part with great finesse.

During the mixing stage, I decided the piano track wasn’t helping and re-recorded an electric piano track with a little more interest than simple comping. I still wanted the acoustic guitar to be the main rhythm instrument—the electric piano was just there to add some weight to the track. I also added a string pad and a tambourine to thicken up the choruses. Finally, I recorded some vocal doubles with Adi for the choruses, again, to give them a little more thickness.

Final release

We wanted something unique for the cover art. Adi happened to see a canvas watercolour painting of tulips that my wife Hema had done a few years ago. He liked it enough to ask her if we could use it for the cover art. She gave us her blessing, and I took a photo of it to develop the cover. We kept it very simple, with the Beige Shelter logo and the title. A big thanks to Hema for her beautiful contribution!

Here’s the final track, which is available on Spotify, iTunes, Apple Music, Google Music and other digital retailers. It was a great joy and privilege to write and produce this song with Adi, Tom, and Paul. Enjoy!

7 ways to bring variety to your collection of songs

When writing a collection of songs, whether for an album release or in general, we sometimes end up playing it safe and resorting to tried and true motifs and ideas for every song.

For myself, when I become a fan of an artist or band, I like to hear a variety of songs. Sometimes the differences are obvious, like a ballad vs. a rockin’ out song. And sometimes, the variety comes in more subtle ways—ways that only looking closer reveals. Your audience will know something feels different and unique, but only the more discerning listeners will know the how and the why.

More than likely, you’re already doing some of these “7 ways” — they are by no means truly unique ideas, as my examples of popular songs will show. Some of them may not work for you, and this list is by no means exhaustive. Hopefully, looking at these will spur on some more ideas. So let’s get into it.

One: Play with the structure

The typical verse, chorus, verse, chorus, bridge, chorus structure is a go-to for many songwriters. But you don’t have to look any further than the Beatles for excellent examples of structural inventiveness. In I Feel Fine, for example, the title occurs at the end of each verse. Then there’s a “B” section that almost sounds like a bridge, until it repeats later, and then maybe you can call it the chorus. Who knows? And more importantly, who cares? It’s all catchy, the title is clear, and the changes are frequent, regular, and interesting. They did something similar with A Hard Day’s Night, and we discussed (okay, argued about) it on an episode of Song Talk Radio.

When you play around with structure, the parts of the songs sometimes defy conventional nomenclature. Call it a bridge or a chorus, it doesn’t matter; it’s merely semantics. Sometimes it’s more effective to use terms like “A section”, “B section”, and “tag.”

Sometimes the narrative you establish can inspire an unconventional structure. For my song Depend on Me, I established a narrative with three distinct parts: an easygoing afternoon drive, a car accident, and the aftermath. This structure inspired me to begin the song with a simple verse chorus, verse chorus, then a bridge (for the accident) and a completely new section for the aftermath.

Two: Write a song with very few or no perfect rhymes.

Rhymes are usually an integral part of any song in a popular medium. If there’s anything most genres have in common, it’s rhyming. More “pop” songs characteristically have lots of perfect rhymes. At the other end, folk songs tend to have fewer perfect rhymes.

First, let’s talk briefly about rhyme types. Perfect rhymes are pairs of words which have both final vowel sounds and final consonant sounds the same – e.g. space / race, moan / cone,  exemplify / diversify. Assonance rhymes have the same final vowel sound, but different final consonant sound, and the result is softer – e.g. lost / cough, graze / lake, policy / bakery.

The tricky part might be writing a song that minimizes perfect rhymes. Fast Car by Tracy Chapman comes pretty close, using mostly assonance rhymes to end her verses.

For my own song Hurting. Choosing. Learning, I managed to get through four verses and two choruses with absolutely no rhymes, and writing the verses as haiku poems to boot.

Three: Do a few songs in 3/4 or 6/8 time

This one is fairly common, but still, many songwriters fall back on the ubiquitous 4/4 time signature.

Before I get into examples, let’s go over what time signatures mean and how they work. Time signatures are normally expressed as two numbers (four-four, six-eight, or three-four). 4/4 time is sometimes called Common Time (go figure). The first number, or the one on top, is how many beats there are in one bar or measure. The bottom number represents the note division of the beats. So, if the bottom number is 4, the song is counted in quarter notes. If it’s 8, it’s counted in eighth notes (half the duration of a quarter note). It’s far less common to see a 2 for the note division.

For example, 4/4 time is counted as “1, 2, 3, 4” at a moderate pace. 6/8 time is counted as “1, 2, 3, 4, 5, 6” where each beat is about half the duration of the quarter notes. Of course, tempo plays a big part in exactly how fast the song is; the note divisions are relative to each other and also represent rhythmic emphasis—i.e. most of the time, there’s a strong emphasis on the “1”, otherwise known as the downbeat. In 3/4 time, the secondary (snare-style) emphasis usually falls on the 2 and 3, and in 6/8 time, the emphasis is usually on the 1 and 4. You can usually focus on the kick drum and snare drum hits in a song to find the stressed beats.

Compare the songs Wrapped in Grey by XTC and I Go To Sleep by The Pretenders. See if you can identify which is in 3/4 time and which is in 6/8 (hint: in the chorus of Wrapped in Grey, the snare hits on the “2” of every other measure).

Four: Treat your title differently

Many songwriters write from titles, which is a great way to get your song moving in a focused direction and to stick to that focus. Sometimes the title is a phrase; at other times, a single word or pair of words.

Context is important here – does your title stand alone, or is it part of a larger phrase that maybe connects it with the verse or pre-chorus? Take note of your collection of songs; do you stick to one way of singing your title?

Consider Billy Joel’s song The Stranger. In this song, the “stranger” shows up frequently but it’s always part of a larger phrase in the verse. There’s a catchy “B” part which might be called the chorus, except that the title doesn’t show up there.

Contrast that with a song like Layla, where the title is the main hook (apart from the classic guitar riff), tops each chorus and melodically stands by itself.

Then look at the classic rock song Closer to the Heart by Rush; here the title is a full phrase that ends each verse in a verse-refrain structure (there is no chorus).

You can examine just about any song and note other ways in which the title shows up. Consider melody, narrative and what it might mean if the title was incorporated differently. For example, often when the title shows up as part of a larger context or phrase, the song is following a verse-refrain structure (see tip One above).

Five: Try a song with a quiet / small chorus

Where is it written that your chorus has to be the “big” part of your song? Typically, your chorus has a melodic center change to a higher, more expansive and catchy melody. But in Pretty Good Year by Tori Amos, the chorus is the most understated part of the song. The melody goes nowhere, it’s dynamically quieter, and very simple. The bridge is the section that takes on more of the characteristics of a chorus, except for the presence of the title and the refrain (multiple repetitions in the song).

For a different example, check out the song Pretty by Miggs. The verse has a good amount of melodic range, and is fairly resolved. The chorus (“If it’s worth it..”) has more tension and less melodic range. Similarly to Pretty Good Year, it’s the C-section of the song that takes off with the catchiest, most energetic part of the song (“It takes a lot of steps…”). Call this part the post-chorus, maybe.

I took the “quiet chorus” approach when writing my own song, Brave Daughters. In this case, the chorus lyric was more reflective and less direct than the angrier verse lyrics, so it led me to treat the music with a lighter energy.

Six: Open a song with your chorus

Opening your song with the chorus is a great way to give it a great kick off, particularly if your chorus is catchy and tight. This usually works when your chorus expresses the central theme of the song, and it doesn’t spoil anything to give it away up front. If you’re used to working in a double chorus at the end of your songs, this is an opportunity to keep that a single, lest you have too many choruses in your song.

A couple of good examples are We’re Not Gonna Take It by Twisted Sister, and All About That Bass by Meghan Trainor. (Note: The song doesn’t kick in until over 2 minutes into We’re Not Gonna Take It, but that first couple of minutes is classic music video satire at its finest.) In both songs, the opening chorus is treated more like an introduction, with lighter arrangements than the full-blown choruses that come later.

Seven: Try a different mode in a song

I’ve saved the (arguably) most complex tip for last. Using different modes assumes you know about scales and key signatures, but after that, it’s fairly simple. Customarily, we begin a chord progression on the I chord of the key we’re playing in. But what about starting on the II chord, or the III chord? Doing this imparts a subtle tension to your song, and it works especially well if you resolve by starting your chorus on the I chord.

Try playing a major scale using the same notes but going from the II tone to the II tone. For example, the Dorian mode in C major would go like this:

D – E – F – G – A – B – C – D

You would also stick to chords in the base key signature. So in the key of C major and the Dorian mode, this would mean you start your chord progression with a D minor. Famous songs in the Dorian mode include Scarborough Fair (made popular by Simon and Garfunkel), and Eleanor Rigby by the Beatles.
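Here’s a small sketch of how every mode falls out of rotating one major scale (enharmonic spellings simplified, as before):

```python
NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half-step pattern of the major scale
MODES = ["Ionian", "Dorian", "Phrygian", "Lydian", "Mixolydian", "Aeolian", "Locrian"]

def mode(key="C", degree=1):
    # The scale that starts on the given degree (1 = I) of the major key, same notes
    pitch = NOTES.index(key)
    scale = [pitch]
    for step in MAJOR_STEPS:
        scale.append(scale[-1] + step)
    names = [NOTES[p % 12] for p in scale[:-1]]        # the 7 scale tones
    rotated = names[degree - 1:] + names[:degree - 1]  # start on that degree
    return rotated + [rotated[0]]                      # close on the octave

print(MODES[1], mode("C", 2))
# Dorian ['D', 'E', 'F', 'G', 'A', 'B', 'C', 'D']
```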

I tried this myself in a collaboration I did with my friend Shari Archinoff, called Winter Without You.

Note that the special, unusual chord progressions start with the II, III, IV, or V chord. It’s very common to start with the VI chord, known as the Aeolian mode or natural minor—just think of every song starting on A minor and using C major, F major, and G major.


You can also combine any of these tips into a single song. Please comment below about any of these 7 ways you’ve used, or about other tips you have for adding some variety to your collection of songs.

How to shop for an audio interface

If you’re just starting your home studio, or looking to upgrade your audio interface, there are many factors to consider in order to make an informed decision that gets you the best bang for your buck. An audio interface is the traffic cop of your home studio, controlling all the physical inputs and outputs.

Using an audio interface is always better for a home recording studio than the built-in soundcard on your computer. An audio interface:

  • will allow you to connect guitars, synthesizers, and professional microphones
  • can achieve lower latency, so you don’t hear a delay while recording or playing a software synthesizer
  • is designed to record and play back at the same time; a soundcard, not so much

With the right feature set for your home studio, you can improve your workflow and focus on the creative rather than the technical. Don’t get me wrong, though – you still have to understand the technical, so here we go.

Number of inputs

The first and potentially most important thing to consider is the number of inputs you need. You need as many inputs as things you’ll be recording at the same time. Interfaces generally come with 2, 4 or 8 analogue inputs. Manufacturers usually state the number of inputs in the model name, and almost all of the time, it’s the first number. For example, a Focusrite 2i4 has 2 inputs (and 4 outputs, but we’ll get to that in a bit). A Presonus AudioBox 44VSL has 4 inputs and 4 outputs.

What’s crucial to understand here is that you only need enough inputs for one recording pass. For example, if you record guitar first, then vocals, then bass, you really only need one input. If you’re recording all three at the same time, you’ll need three inputs. Simultaneous recording not only captures the magic of musicians playing off one another, but recording them on discrete channels also gives you isolated tracks in your software for better mixing. (Note, mic bleed is a topic unto itself.)

Types of inputs

Inputs for microphones are XLR inputs – an XLR mic cable has three large pins in a circle on one end. Often, audio interfaces feature “combo jacks” which can take an XLR (mic) cable, or a standard ¼” cable, like a guitar patch cord. Other inputs may only take a ¼” cable.

Inputs are usually designed for one or more impedance levels. The precise definition of impedance doesn’t matter here – just note that microphones, guitars (or any stringed instrument with a ¼” output), and synthesizers all have different types of output, and require three different settings for inputs. Mic level is the weakest of the three, instrument level (for guitars) is higher, and line level (synthesizers, CD players, your mobile music device) is the strongest signal. While most combo jacks automatically detect a mic or line level signal, only some feature a switch or option for instrument level. Activating the instrument level switch (sometimes called Hi-Z), if it’s available on your interface, ensures you get a good signal level from the instrument. This is about the same as patching your guitar through a DI box.

Input channels may also feature a 20 dB pad. This switch cuts the signal by 20 dB, which is a significant drop. This is useful if you’re recording anything particularly loud, like a guitar amp or a drum, and ensures you won’t distort the signal. This feature is usually not found on the least expensive interfaces.

Some interfaces also include 5-pin MIDI input and output. While not strictly part of the audio system of your home studio, this can save you from investing in an additional USB MIDI interface if you have some older synthesizers you want to use. Most modern synthesizers and MIDI controllers connect directly to your computer via USB.

5-pin MIDI IN and OUT ports

Preamps

When you plug in a microphone, the gain knob controls the volume of the input and engages the preamp. Ideally, these knobs are laid out beside each input, so it’s easy to know which knob controls which input. For most home studio setups, the preamps in modern audio interfaces are low-noise and transparent sounding. While some interfaces feature premium quality preamps (for a premium price), you need to keep in mind you should also have a premium microphone and an acoustically treated recording environment to really take advantage.

Phantom Power, or 48V

Every audio interface will have a switch or button for 48V power, also known as Phantom Power. This is required for using condenser microphones. Just remember to always switch on Phantom Power after plugging in your microphone, and to switch it off before unplugging your mic. Phantom Power will not affect your dynamic microphones.

Some interfaces put the switch for 48V on the back of the interface. Ideally, the switch is on the front and has a light to indicate that it’s on. Some may even have the switch as part of the software interface, which in my opinion, is the least desirable place for it.

Outputs

Most audio interfaces will have balanced TRS (tip, ring, and sleeve) connections for outputs. A TRS cable looks similar to a ¼” patch cord (an unbalanced TS cable), but it has an additional ring on the connector pin, indicating that it can be used for a balanced TRS connection or to carry a stereo signal, like your headphone cable. Generally, balanced TRS connections are less susceptible to introducing hum or noise in your signal path over longer distances.

TRS vs TS cable

Outputs are normally reserved to connect your studio monitors. This takes two outputs – one for the left speaker, one for the right. Interfaces with more than one output pair can be used to connect additional speakers, or connect to a desktop mixer. Connecting to a second set of speakers can be useful in testing your mixes.

Don’t discount the value of a big honking volume knob. Some interfaces feature this, and personally, I think it’s a great value add. Playback volume is one of the most frequently used controls in your home studio, and sometimes you need to adjust it quickly; you don’t want to be mousing around to find the control. Some interfaces also feature a mute button, which is ideal (i.e. you don’t want your monitors sounding while you’re recording from a microphone).

Digital inputs and outputs

Some interfaces also include digital inputs and outputs. These are used if you have a device with a corresponding output (S/PDIF or optical). The optical (sometimes called ADAT) signal can carry 8 discrete channels. For example, you could expand your two-input setup with an 8-channel preamp with an optical output, for a total of 10 microphone or instrument inputs. The S/PDIF connection only carries a 2-channel stereo signal, and is usually found on synthesizers or CD players.

Sometimes audio interfaces are marketed as having 10 inputs, while only two mic inputs are visible. That’s because the manufacturer is also counting the 8 digital inputs via an optical connection.

Monitoring options

All interfaces have an option for zero-latency monitoring. Generally, if you want to include a reverb in software for your vocalist while recording, you’ll introduce latency while the computer processes the signal, applies the reverb, and sends it back out to be heard. A zero-latency switch (often called direct monitor, or input) allows you to hear the input in real time, without latency, along with the computer playback. Some interfaces allow you to adjust the relative levels of input and playback material.

On-board DSP

Some interfaces include on-board digital effects processing (DSP). This allows you to record with very low latency and still apply a reverb or other effect to the monitored input. In my opinion, on-board DSP only matters with older or very budget-level computers. With most modern systems and non-DSP audio interfaces, you can get latency down to a few milliseconds and use a software reverb. It’s best to use a plug-in that’s light on CPU for this. It’s important to note that the reverb in this case won’t be recorded; it’s only used for monitoring. You can still use a better reverb plug-in after the recording is complete. Many singers prefer a bit of reverb in their headphones while recording.
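The latency itself is mostly simple arithmetic: buffer size divided by sample rate, roughly doubled to cover both the input and output buffers. A rough sketch (real-world figures add some driver and converter overhead):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100, round_trips=2):
    # Approximate round-trip monitoring latency through the computer
    return 1000.0 * buffer_samples * round_trips / sample_rate

print(f"{buffer_latency_ms(64):.1f} ms")    # ~2.9 ms  -- effectively imperceptible
print(f"{buffer_latency_ms(1024):.1f} ms")  # ~46.4 ms -- a clearly audible delay
```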

Power supply

Many of the two-channel interfaces are powered by their USB connection to your computer, making them ideal for a mobile recording studio. The larger interfaces, with four or more inputs, usually require a separate power supply in addition to the USB connection, and a power switch. This is something to consider if you’re planning to go mobile with your studio.

Bundled software

The fact is, there are a lot of interface options out there, especially in the two-channel range. One way to decide which one to buy is to look at their bundled software. Many manufacturers include a light version of recording software, like Cubase or Ableton Live. If you prefer one software choice over another, but you haven’t invested in it yet, sometimes getting the light version with your interface gives you a discount when upgrading to the full version. You can check with the software companies to find out more, or download trial versions if you haven’t settled on one yet.

Conclusion

Ultimately, I can’t tell you what interface to buy. If you want to know which one I have, and which ones I’ve had in the past, check out my blog post about the history of my home studio. You have to assess your needs and come to your own conclusion. Hopefully, this article has armed you with the knowledge to make a good choice.

Do you see anything on an interface that I haven’t covered here, or have any questions? Comment below and let me know, and I’ll get back to you. I also accept heaps of praise and accolades.

Do you write songs from the heart or from the head?

Often on Song Talk Radio, this question arises.  Sometimes, it’s fun for the hosts to try and guess.  “Your song sounds very cerebral,” or “Your song sounds very intuitive.”  The guests themselves tell us how well considered every decision in their songwriting process is, or tell us “It just came to me.”  This question of process in creative endeavour is as old as the creative endeavours themselves. On Blair Packham’s show, he talked about his own journey on both the intuitive and the cerebral roads.

Most songwriters and musicians know the history of the Beatles.  In the early 60’s, before they were famous, they played for hours every night in clubs in Hamburg, Germany.  They learned their chops, got better at harmonizing together and playing tightly together.  Author Malcolm Gladwell, in his excellent book Outliers, describes this as the 10,000 hours rule: practice anything for 10,000 hours and you’ll be an expert.  The Beatles played more shows in a few short years than many contemporary bands play in their entire career.  Gladwell uses evidence-based examples to show that the most successful people are those who put in the time.

In another book, Blink, Gladwell champions the subconscious mind as a powerful decision maker, and how little information can be beneficial in making positive, snap decisions.  He cites such examples as fine art experts who can spot a forgery at a glance (and can’t explain how they know they’re looking at a forgery) and orchestras who hold blind auditions to reduce conscious biases.

So let’s bring this back to our central question. It may be that songwriters who feel they channel their songs from some outward source are in fact so well practiced that they make decisions in a “blink” and rely more heavily on their subconscious experience to guide their songwriting decisions. “That chord progression just felt right.” On the other hand, some songwriters are deliberate and conscious in their writing, and know the reasons their songs work the way they do.

I recall clearly learning to play the drums many years ago.  I started with simple rhythms on a single drum, and practiced many hours to coordinate my hands and feet on a drumkit.  The moment I could successfully coordinate kick drum and snare hits with a running cymbal rhythm, something in me clicked and I’ve never forgotten how to do it, no matter how long it’s been since I’ve last played a drumkit.  These days, I don’t think about it – I just follow my subconscious to feel the beat and play along.  If I’m playing in an unusual time signature, like 5/4 or 7/4, I need to engage more of my conscious mind.

I think the same applies to songwriting.  As songwriters, we can rely on our ability to “blink” and know if a songwriting or performance decision is the right one.  However, we can also study more conscious tools of songwriting to change things up, overcome writer’s block, and think outside the boxes we have created ourselves through our experience.

For myself, how do I answer the question of heart or head? Historically, I’ve been a head-dominated writer, but lately I’ve been “consciously” relying more on my snap judgements, and perhaps surprisingly, they’re mostly right. So, like everyone else, I’m somewhere in the middle.

Let us know how you look at your own process.  Do you write from the heart or the head, or both?

Song Talk Radio articles

I’m a regular contributor to Song Talk Radio’s blog and newsletter, writing original content on topics of interest to our songwriter audience.

Check out my articles at the Song Talk Radio website.

What does it mean to be an “amateur” songwriter?

On Song Talk Radio we have a wonderful variety of guests and songwriters, and one way to group them is whether they are professional or amateur songwriters. Often, when we refer to amateur, there’s a negative connotation that implies a less polished, unsophisticated, or otherwise lesser craft. When we talk about being professional, it implies a polished, well-considered, or elevated craft.

However, if we consider the word amateur and its inherent meaning, there’s a better way to look at it.  Amateur is derived from the Latin amatorem, which means “lover of.” So, if you love writing songs, you’re an amateur. This doesn’t say anything about the quality of your writing. Surely, many guests on Song Talk Radio, both amateur and professional, are superb songwriters.

Of course, there’s a caveat. Those songwriters who have devoted their careers, either full-time or part-time, to songwriting and performing, tend to have more polished and carefully considered songs. But consider if this is because they are “professionals” and earn money from their songs, or because they have made a decision to approach their craft with commitment, seriousness, and time.

Also consider the advantages of being an amateur writer. You don’t have to answer to anyone, or consider if your songs are “radio-friendly.” You can take risks, be experimental, and pretty much do as you please. (Another caveat – yes, there are commercial songwriters who can and do pretty much as they please and still sell records.)

The bottom line is if you love what you’re doing, you’re an amateur. You can still put in the time and commitment to polish your craft, and above all, embrace your amateur status with passion, integrity and creativity. Keep on writing.

Evolution of my home recording studio

I recently purchased a new audio interface for my studio, after the mic inputs on my old one started giving me static or no signal at all. I thought it would be interesting for others to see how I went about choosing which interface to buy, within the framework of the evolution of my studio.

So, even though I started making MIDI-based instrumental music on my computer back in 1988 on a Commodore Amiga 2000, I didn’t really get into audio recording until around 2002 on my first Windows-based PC. One of the first purchases to be made was an audio interface. At the time, I had a Roland D70 synthesizer, a drum machine, and (potentially) a microphone. I knew that I only needed to be able to record one track at a time into the software on my PC. The first thing I learned was the difference between consumer-level “audio cards” (e.g. Soundblaster) vs. professional interfaces. Primarily, the professional ones allow you to more effectively record audio while playing back audio at the same time. This is essential for any multi-tracking studio.

I opted for an M-Audio 24/96 interface, which was really just a PCI card with 2 RCA inputs, 2 RCA outputs, and MIDI in and out jacks. I fronted the interface with a 12-channel Behringer mixer with an Alt-bus (or submix bus). This allowed me to send only the keyboard, or only the mic signal, for example, into the M-Audio to be recorded on the computer, while still using the mixer to listen to playback from the computer and my keyboard.

Understanding this flow of signals, both MIDI and audio, was essential to making the purchasing decisions. Suffice to say, I figured out exactly how I was going to connect everything before ever laying down a dime. (Incidentally, this process also allowed me to know and purchase only the cables I needed.)

Over the next few years, I slowly expanded my studio to include monitors (speakers) and a couple of guitars. The extra inputs on the mixer made it easy to patch any of these extras in and use the alt-bus to send each one to be recorded on the PC.

The home studio, circa 2006

Of course, as with most budget gear, the Behringer mixer started crackling and hissing with static after a few years of use. At this point, I figured a multi-input interface would be a good idea. This would allow me to eliminate the hardware mixer entirely, thereby simplifying gain staging and improving signal path quality. I opted for an E-MU 1820 in 2007, which had a digital PCI card and a “break-out box” with 2 mic inputs and a few analogue inputs. Of course, it also had MIDI I/O.

By this time, I had sold off the old Roland D70 synth and got a CME U7 keyboard controller and a Roland JV-1010 sound module on eBay. This, together with the plethora of virtual synthesizers on the computer, made my old Roland seem very quaint and limiting. I also gave up on the guitars (instead investing in killer guitar software).

The E-MU interface served me very well for several years. I attached the Roland JV-1010 and another drum synth to the other inputs. Theoretically, I could record up to 8 tracks at once, but the opportunity to do so never came up.

The one limitation with the E-MU was the lack of a dedicated output knob. I ended up picking up a Mackie 402-VLZ3 mixer, which was used as a glorified volume control and mute button for my monitors, and in a pinch, I could also use its mic inputs if I needed to.

Finally, on to my latest setup. In July, I purchased a Focusrite Scarlett 2i4 interface. While my old E-MU allowed me to record up to 8 tracks at once, the Focusrite only allows 2. I figure if I haven’t needed more than 2 inputs in the last few years, chances are I’m never going to. It has a dedicated volume knob, so I no longer need the Mackie mixer. However, my Roland JV-1010 now has nothing to plug into (previously it plugged into line inputs on the E-MU). This is not such a big loss, as I haven’t used the Roland in several months, since software synthesizers and samples are getting better and better.

The Focusrite is wonderfully simple. The only driver interface is to set the latency buffer. No software mixer panel, no built-in effects suite, just pure input and output. I picked the 2i4 model over the 2i2 to have variable control over input vs. playback monitoring, and I thought I could use the extra outputs to feed into my Mackie, but I didn’t end up using them. Plus, I prefer to have real, old-school MIDI connections rather than USB for my keyboard.

I did also check out the Presonus 44VSL interface, which has 4 mic inputs, for a possible future when I might actually need to record more than 2 tracks at once. The Presonus was more costly, but quality-wise felt and sounded about the same as the Focusrite. However, I was unable to get the latency for software synths to work – it was quite bad, in fact. I chose to shop at Long & McQuade, who offer a 30-day no-questions-asked return policy, so the Presonus went back and I kept the Focusrite.

So I think the take-away message here is to really examine your needs, do your research, and make informed purchases. You don’t need to spend a fortune to get good quality sound. If there’s one truth to the evolution of my studio, it’s this: the longer I do this, the simpler my system becomes – in other words, fewer parts. Part of this is the fact that newer computers can handle more of the workload, so your outboard gear can be pretty minimal, but part of it is also understanding signal flow and boiling your setup down to the essentials.

The studio earlier in 2013