I recently purchased a new medium telephoto lens for my camera and put it to the test photographing a couple of live music events. I enjoyed a “songwriters in the round” event at 120 Diner in Toronto. I already knew three of the performers, and got to meet and hear several new ones.
Beige Shelter was playing a show at Sneaky Dee’s, and I took photos of the bands The Thick and The Cashews, and singer-songwriter David Dino White. For this show, the stage was bathed in a harsh blue light, so I converted the photos to black and white in post-production.
For another Beige Shelter show, I took photos of supporting acts Brian Sasaki and Wilson & The Castaways. The show was a great success at the Amsterdam Bicycle Club in Toronto.
The new lens is great for capturing sharp photos in low light. I find the keys to great stage photos are using spot metering and adjusting the focus point as you shoot. I like to capture emotionally high moments in the performances and, where possible, catch the performers with their eyes open. Framing with odd angles also adds a cool dynamic.
If you’re just starting out your home studio, or looking to upgrade your audio interface, there are many factors to consider in order to make an informed decision that gets you the best bang for your buck. An audio interface is the traffic cop of your home studio, controlling all the physical inputs and outputs.
Using an audio interface is always better for a home recording studio than the built-in soundcard on your computer. An audio interface:
will allow you to connect guitars, synthesizers, and professional microphones
can achieve lower latency, so you don’t hear a delay while recording or playing a software synthesizer
is designed to record and play back at the same time; a soundcard, not so much
With the right feature set for your home studio, you can improve your workflow and focus on the creative rather than the technical. Don’t get me wrong, though – you still have to understand the technical, so here we go.
Number of inputs
The first and potentially most important thing to consider is the number of inputs. You need as many inputs as things you’ll be recording at the same time. Interfaces generally come with 2, 4, or 8 analogue inputs. Manufacturers usually state the number of inputs in the model name, and almost all of the time it’s the first number. For example, a Focusrite 2i4 has 2 inputs (and 4 outputs, but we’ll get to that in a bit). A PreSonus AudioBox 44VSL has 4 inputs and 4 outputs.
What’s crucial to understand here is that you only need enough inputs for one recording pass. For example, if you record guitar first, then vocals, then bass, you really only need one input. If you’re recording all three at the same time, you’ll need three inputs. Simultaneous recording not only captures the magic of musicians playing off one another, but recording them on discrete channels also gives you isolated tracks in your software for better mixing. (Note, mic bleed is a topic unto itself.)
Types of inputs
Inputs for microphones are XLR inputs – an XLR mic cable has three large pins in a circle on one end. Often, audio interfaces feature “combo jacks” which can take an XLR (mic) cable, or a standard ¼” cable, like a guitar patch cord. Other inputs may only take a ¼” cable.
Typical XLR mic cable
XLR combo jack
1/4″ guitar patch cable
Inputs are usually designed for one or more impedance levels. The precise definition of impedance doesn’t matter here – just note that microphones, guitars (or any stringed instrument with a ¼” output), and synthesizers all have different types of output, and require three different input settings. Mic level is the weakest of the three, instrument level (for guitars) is higher, and line level (synthesizers, CD players, your mobile music device) is the strongest signal. While most combo jacks automatically detect a mic or line level signal, only some feature a switch or option for instrument level. Activating the instrument level switch (sometimes called Hi-Z), if it’s available on your interface, ensures you get a good signal level from the instrument. This is roughly equivalent to patching your guitar through a DI box.
Input channels may also feature a 20dB pad. This switch cuts the signal by 20dB, which is a significant drop. It’s useful if you’re recording anything particularly loud, like a guitar amp or a drum, and ensures you won’t distort the signal. This feature is usually not found on the least expensive interfaces.
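To put that drop in perspective, here’s a quick sketch of the standard decibel arithmetic (plain Python, purely illustrative – the function name is mine, not anything from an interface’s manual):

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a decibel change to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A -20dB pad scales the signal's amplitude to one tenth...
print(f"Amplitude ratio: {db_to_amplitude_ratio(-20):.2f}")  # 0.10

# ...which is a factor of 100 in power terms (power uses dB/10).
print(f"Power ratio: {10 ** (-20 / 10):.2f}")                # 0.01
```

In other words, a pad isn’t a subtle trim – it knocks the signal down to a tenth of its amplitude, which is why it’s reserved for very loud sources.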
Some interfaces also include 5-pin MIDI input and output. While not strictly part of the audio system of your home studio, this can save you from investing in an additional USB MIDI interface if you have older synthesizers you want to use. Most modern synthesizers and MIDI controllers connect directly to your computer via USB.
When you plug in a microphone, the gain knob controls the level of the input and engages the preamp. Ideally, these knobs are laid out beside each input, so it’s easy to know which knob controls which input. For most home studio setups, the preamps in modern audio interfaces are low-noise and transparent sounding. While some interfaces feature premium quality preamps (at a premium price), keep in mind you’d also need a premium microphone and an acoustically treated recording environment to really take advantage of them.
Phantom Power, or 48V
Every audio interface will have a switch or button for 48V power, also known as Phantom Power. This is required for condenser microphones. Just remember to switch on Phantom Power only after plugging in your microphone, and to switch it off before unplugging your mic. Phantom Power will not affect your dynamic microphones.
Some interfaces put the switch for 48V on the back of the interface. Ideally, the switch is on the front and has a light to indicate that it’s on. Some may even have the switch as part of the software interface, which in my opinion, is the least desirable place for it.
Most audio interfaces will have balanced TRS (tip, ring, and sleeve) connections for outputs. A TRS cable looks similar to a ¼” patch cord (an unbalanced TS cable), but it has an additional ring on the connector pin, indicating that it can carry a balanced mono signal or a stereo signal, like your headphone cable. Generally, balanced TRS connections are less susceptible to picking up hum or noise over longer cable runs.
Outputs are normally reserved to connect your studio monitors. This takes two outputs – one for the left speaker, one for the right. Interfaces with more than one output pair can be used to connect additional speakers, or connect to a desktop mixer. Connecting to a second set of speakers can be useful in testing your mixes.
Don’t discount the value of a big honking volume knob. Some interfaces feature this, and personally, I think it’s a great value add. Volume of your playback is one of the most frequently used controls in your home studio, and sometimes you need to adjust it quickly; you don’t want to be mousing around to find the control. Some interfaces also feature a mute button, which is ideal (i.e. you don’t want your monitors sounding while you’re recording from a microphone).
Digital inputs and outputs
Some interfaces also include digital inputs and outputs. These are used if you have a device with a corresponding output (S/PDIF or optical). The optical (sometimes called ADAT) signal can carry 8 discrete channels. For example, you could expand your two-input setup with an 8-channel preamp with an optical output, for a total of 10 microphone or instrument inputs. The S/PDIF connection only carries a 2-channel stereo signal, and is usually found on synthesizers or CD players.
Sometimes audio interfaces are marketed as having 10 inputs, while only two mic inputs are visible. That’s because the manufacturer is also counting the 8 digital inputs via an optical connection.
All interfaces have an option for zero-latency monitoring. Generally, if you want to include a reverb in software for your vocalist while recording, you’ll introduce latency while the computer processes the signal, applies the reverb, and sends it back out to be heard. A zero-latency switch (often called direct monitor, or input) allows you to hear the input in real time, without latency, along with the computer playback. Some interfaces allow you to adjust the relative levels of input and playback material.
Some interfaces include on-board digital effects processing (DSP). This allows you to record with very low latency and still apply a reverb or other effect to the monitored input. In my opinion, this is only necessary with older or very budget-level computers. With most modern systems and non-DSP audio interfaces, you can get latency down to a few milliseconds and use a software reverb – ideally a plug-in that’s light on CPU. It’s important to note that the reverb in this case won’t be recorded; it’s only used for monitoring. You can still use a better reverb plug-in after the recording is complete. Many singers prefer a bit of reverb in their headphones while recording.
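Those “few milliseconds” come directly from your audio buffer size and sample rate. A rough sketch of the arithmetic (Python, illustrative only – real round-trip latency also includes converter and driver overhead):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# At 48 kHz, a 128-sample buffer adds under 3 ms each way;
# monitoring through software roughly doubles that.
for buf in (64, 128, 256, 512):
    print(f"{buf:>4} samples @ 48 kHz = {buffer_latency_ms(buf, 48000):.2f} ms one way")
```

This is why shrinking the buffer in your recording software lowers latency – at the cost of working your CPU harder, which is where older machines start to glitch.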
Many of the two-channel interfaces are powered by their USB connection to your computer, making them ideal for a mobile recording studio. The larger interfaces, with four or more inputs, usually require a separate power supply in addition to the USB connection, and a power switch. This is something to consider if you’re planning to go mobile with your studio.
The fact is, there are a lot of interface options out there, especially in the two-channel range. One way to decide which one to buy is to look at the bundled software. Many manufacturers include a light version of recording software, like Cubase or Ableton Live. If you prefer one software package over another but haven’t invested in it yet, sometimes getting the light version with your interface gives you a discount when upgrading to the full version. You can check with the software companies to find out more, or download trial versions if you haven’t settled on one yet.
Ultimately, I can’t tell you what interface to buy. If you want to know which one I have, and which ones I’ve had in the past, check out my blog post about the history of my home studio. You have to assess your needs and come to your own conclusion. Hopefully, this article has armed you with the knowledge to make a good choice.
Do you see anything on an interface that I haven’t covered here, or have any questions? Comment below and let me know, and I’ll get back to you. I also accept heaps of praise and accolades.
Last year (late 2015), Adi contacted me with a request to have his songs produced as an album. We had a brief meeting during which I got to know Adi a bit more, and really saw his personality as a generous, people-loving individual, and how that shone through in his songs. We came to an agreement, and began work shortly thereafter. The plan was to produce 12 songs for an album.
Adi would send me his demo recordings, along with lyric and chord sheets. For most recordings, I would set up a session in Sonar with a simple drum loop (of my own creation, of course, since I am a drummer). He would then come over and record guitar and vocals to the beat. Often he would ask for extra “guitar licks” tracks and/or vocal doubles.
The producer brain
For some of the songs, I would make melodic suggestions for the guitar licks, or arrangement ideas for when to include instrumental breaks. I also added drums, bass, piano, strings, and other instruments using my keyboard and MIDI. Of all the aspects of producing, I enjoy this arranging process the most. It takes a careful listen to each song, finding creative ways to supplement the original performance, and at the same time, taking it up a notch. My piano and bass parts were often quite understated, providing a foundation for Adi’s performance without overpowering it. I think this is a key point for any successful production.
For one song, Midnight, Adi had written a lovely arpeggio pattern on the guitar for the intro. The rest of the song rocked out. I suggested a break in the middle where he would repeat the intro pattern at tempo. This served to open the song up and provide a breath before the final chorus.
Adi had a neat riff and chord progression for a song, but no lyric. We worked together as I made chord suggestions (on piano) and a key shift for the bridge. Adi worked out lyrics about racial diversity and inclusion, with some tweaks from me. We share the songwriting credit for Colours.
Adi had written Who I Am as a medium-tempo guitar rocker with harmonica. He wanted to try it out as a piano ballad, so I took his chords and developed a piano, strings, and drums arrangement. We had to re-record his vocals, as the rocker delivery didn’t really fit the more ballad-esque piano arrangement. We also forwent the harmonica in favour of a cello solo. I think this song helps open up the variety on the album.
Adi wasn’t entirely happy with his song Eden. I suggested chord changes in the chorus, which opened up the song to sound bigger. Interestingly, this song is made up almost entirely of major chords (only one minor chord). In some ways, it’s my favourite track on the album, as it has elements of progressive rock.
Mixing, mastering and fine-tuning
I spent a lot of time going through each song with a fine-tooth comb, fixing notes in the MIDI tracks and tightening up the timing. For some, I used a fixed tempo grid to quantize all the tracks, and for others, I used Adi’s guitar recording as a tempo map. Since they were mostly recorded to a fixed drum loop, they were fairly consistent, but minor tempo variations still occur, and sometimes it’s better to embrace them rather than forcing them to fit a fixed tempo.
I also mixed and mastered the songs. I wanted punchy, clear drums and bass, and forward vocals to ensure all the lyrics were well heard. My new best friend became Native Instrument’s Transient Master.
Counterintuitively, the sonically simplest song, She Now Flies, presented the greatest mixing challenge. It’s actually easier to mix a song with six or more instruments – guitars, piano, bass, and drums – than one with only guitar and vocals.
For the mastering process, I suggested to Adi that we each come up with a sequence for the album, then compare notes. He then arrived at a sequence that was a combination of my list and his. I made minor tweaks to the EQ of some songs, and applied the final volumes. There’s some finesse here too, as I didn’t want the softer ballads mastered to the same volume as the rockers. Hopefully someone out there still listens to complete albums!
It’s been an absolute joy working with Adi on this record. He had a very balanced approach to owning his songs while remaining open to suggestions for changes. As the producer, I always gave Adi veto power to reject any suggestion I made. As it turns out, he took most of them. You can’t be too precious about your ideas; the vision for the record should be the artist’s, not the producer’s.
With the band Beige Shelter (I play drums and percussion), we played our biggest show yet at the historic Lee’s Palace. The crowd was receptive, enthusiastic and supportive. Of course, since this was a “real” concert venue, the stage lights made it almost impossible to see anyone in the audience. But we know what we heard.
The Beige Shelter line-up is: Adi Aman (songs, guitar, uke, vocals), Neel Modi (drums, percussion), Tom Kuczynski (bass guitar), and Karan Sabharwal (lead guitar).
I’m thankful to be playing with such talented musicians and Adi’s songs are passionate, heartfelt, and even spiritual. This is music in fine form.
I’ve been playing drums and percussion for a while for singer-songwriter Adi Aman, aka Beige Shelter. The band name comes from Adi’s feeling that the colour beige is neutral and can express a wide range of emotions, and music being the place of shelter where he can best express them.
I started my collaboration with Adi as his producer. We’ve worked through twelve songs for his debut album release later this year. He expressed an interest in performing more to promote the album and get his music heard. I volunteered to back him up on percussion. So far I’ve played a drumkit when it’s available, and also cajon and shaker. In the future, we hope to rope in a bassist and lead guitarist.
Our gig at Page One cafe in downtown Toronto felt like our first “real” show. We had a great turnout, thanks to our friends and to FXRRVST and Madison Galloway, our supporting performers. I’ve branded my look, always performing in a pressed shirt and bowtie.
So far, performing with Adi has been a high. His songs are very well written (we even co-wrote one of them) and his performances are passionate and energetic. Together I think we make a good team, as I also give him tips on improving his performance and creating interesting song arrangements.
There’s nothing quite like getting on stage and putting on a show of great original songs. You get into a “zone” where the world around you fades into the background and, for a moment, it’s all about the music.
I had previously heard the idea that life on earth may have originated from some organic goo deposited on our humble little planet by a meteor or comet. Recently I found a YouTube video explaining that, while indeed speculative, the theory is given the perfect name: “Panspermia.”
It’s one thing to write a song simply about a speculative theory, but that could come across as a high school essay or research paper. To be a good song, I’d have to inject my own commentary or reaction to it. I did this in a Facebook post, positing that every living thing in the universe is united by the same goo, and that makes us all Gooians.
A short while later, I jotted down the lyrics for the chorus. A few months later, I conducted additional research online to generate keywords, making sure I captured proper terms, scientifically speaking. I also happened to attend a public lecture at the University of Toronto on the topic of Planetary Habitation on the day I finished the lyrics. At the lecture’s reception, I approached the speaker, Dr. John E. Moores, Assistant Professor of Space Engineering at York University, who agreed to review my lyrics for any scientific faux-pas. He followed through, and suggested only one minor change, which I took. Dr. Moores also introduced me to the “nerdcore” genre.
I wrote the music bed (using only piano) and melody in one day. The verse melody suggested lines of lyric that lasted a little more than two measures of 4/4 time. So, I introduced a two-measure loop for the verse that was made up of one 4/4 measure and one 5/4 measure. This introduced a very quirky and offbeat rhythm to the song. I then proceeded to layer on the bass sound, the synth pads, and other sounds to fill in the music bed. The original piano track was archived.
I presented the song at the monthly Songwriter’s Meetup Group, and it was generally liked. One group member commented that the song is a good, clear explanation of the theory, and has educational value. Many people in the group, however, felt that the 4/4 + 5/4 pattern was jarring, with no particularly good reason for it. So I revisited the pattern and tried out a 4/4 + 6/4 pattern, which was still a little offbeat, but easier to digest due to its symmetry. I decided it did in fact work better for the song.
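Part of the difference in feel is simple arithmetic: the two-measure loops have different beat counts, and one total is odd while the other is even. A quick sketch (Python; the 120 BPM tempo is an assumed value for illustration, not the song’s actual tempo):

```python
def loop_seconds(beats_per_loop: int, bpm: float) -> float:
    """Duration of a loop of quarter-note beats at a given tempo."""
    return beats_per_loop * 60 / bpm

# Original verse loop: one bar of 4/4 plus one bar of 5/4 = 9 beats (odd total).
# Revised loop: one bar of 4/4 plus one bar of 6/4 = 10 beats (even total,
# so it divides cleanly in half, which the ear parses more easily).
for label, beats in (("4/4 + 5/4", 4 + 5), ("4/4 + 6/4", 4 + 6)):
    print(f"{label}: {beats} beats = {loop_seconds(beats, 120):.1f} s at 120 BPM")
```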
In addition to being a co-organizer and co-host of the Toronto meetup group The Songwriter’s Cafe, I also participate by presenting my own songs to the group and getting their feedback. There’s nothing quite like constructive feedback from fellow songwriters that’s always supportive and encouraging. The group experience never fails to inspire me to improve my own songwriting, and I get to meet the extraordinarily talented songwriters in the room. There are also great opportunities for collaborations between members beyond the meetups.
The group recently surpassed 1,000 members and I’m proud to be part of its growth over the last few years.
The photo was taken by one of the members, who, like me, is a budding photographer as well as a songwriter. Thanks Alexander for the great capture!
The latest Song Talk Radio backup band jam session was for my own song, I Never Write Her a Song. We rehearsed for a couple of hours, and then performed about three takes for the video recording. Thanks to the guys who put their hearts and souls into the performance, especially David St Bernard who took on the vocal part with great verve! Phil (bass) also took us through an exercise to ensure all four instrumentalists knew each other’s parts well, and worked together to create a unified groove.
David St Bernard – vocals
Neel Modi – piano, songwriter
Joe Romasanta – guitar
Phil Emery – bass
Gary Duke – drums
Bruce Harrott – consultant
I captured the room audio and the vocal audio on separate tracks so I could mix them in post-production, getting the best sound quality and ensuring the vocal sat nicely above the mix.
After discovering the fascinating podcast Song Exploder, where a song is deconstructed and examined in its separate parts, I stumbled upon an episode featuring composer Ramin Djawadi and a breakdown of the Game of Thrones theme music. Hearing and learning the individual parts prompted me to arrange my own cover version, similar to the original in its groove, but with electric guitar and synth strings as points of departure. I also ended up discovering several other great versions of the theme on Soundcloud, my favourites including industrial, prog rock, and smooth jazz versions.
Often on Song Talk Radio, this question arises. Sometimes, it’s fun for the hosts to try and guess. “Your song sounds very cerebral,” or “Your song sounds very intuitive.” The guests themselves tell us how well considered every decision in their songwriting process is, or tell us “It just came to me.” This question of process in creative endeavour is as old as the creative endeavours themselves. On Blair Packham’s show, he talked about his own journey on both the intuitive and the cerebral roads.
Most songwriters and musicians know the history of the Beatles. In the early ’60s, before they were famous, they played for hours every night in clubs in Hamburg, Germany. They learned their chops and got better at harmonizing and playing tightly together. Author Malcolm Gladwell, in his excellent book Outliers, describes this as the 10,000-hour rule: practice anything for 10,000 hours and you’ll be an expert. The Beatles played more shows in a few short years than many contemporary bands play in their entire career. Gladwell uses evidence-based examples to show that the most successful people are those who put in the time.
In another book, Blink, Gladwell champions the subconscious mind as a powerful decision maker, and how little information can be beneficial in making positive, snap decisions. He cites such examples as fine art experts who can spot a forgery at a glance (and can’t explain how they know they’re looking at a forgery) and orchestras who hold blind auditions to reduce conscious biases.
So let’s bring this back to our central question. Songwriters who feel they channel their songs from some outward source may in fact be so well practiced that they make decisions in a “blink,” relying heavily on their subconscious experience to guide their songwriting. “That chord progression just felt right.” On the other hand, some songwriters are deliberate and conscious in their writing, and know the reasons their songs work the way they do.
I recall clearly learning to play the drums many years ago. I started with simple rhythms on a single drum, and practiced many hours to coordinate my hands and feet on a drumkit. The moment I could successfully coordinate kick drum and snare hits with a running cymbal rhythm, something in me clicked and I’ve never forgotten how to do it, no matter how long it’s been since I’ve last played a drumkit. These days, I don’t think about it – I just follow my subconscious to feel the beat and play along. If I’m playing in an unusual time signature, like 5/4 or 7/4, I need to engage more of my conscious mind.
I think the same applies to songwriting. As songwriters, we can rely on our ability to “blink” and know if a songwriting or performance decision is the right one. However, we can also study more conscious tools of songwriting to change things up, overcome writer’s block, and think outside the boxes we have created ourselves through our experience.
So how do I answer the question myself: do I write from the heart or the head? Historically, I’ve been a head-dominated writer, but lately I’ve been “consciously” relying more on my snap judgements, and perhaps surprisingly, they’re mostly right. So, like everyone else, I’m somewhere in the middle.
Let us know how you look at your own process. Do you write from the heart or the head, or both?