Basics: Five Questions about Effects Placement

By Craig Anderton

There are plenty of places in SONAR where you can process the audio signal, but you need to know how to choose the right one.

What’s an “insert” effect? Don’t you always “insert” an effect? You indeed “insert” effects, but there’s a specific effect type usually called an Insert effect that inserts into an individual mixer channel. In SONAR, this inserts into a channel’s FX Bin or the ProChannel (Fig. 1).

Fig. 1: The FX bins for two channels contain insert effects, as does the ProChannel for the Vocals channel.

Insert effects affect only the channel into which they are inserted. Typical insert effects include dynamics processors, distortion, EQ (because of EQ’s importance, it’s a permanent ProChannel insert effect), flanging, and other effects that apply to a specific sound in a specific channel.

Then what’s a “send” effect? Also called an (more…)


Basics: Five Questions About Panning Laws

By Craig Anderton

It’s not just a good idea, it’s the law…panning law, that is. Let’s dispel the confusion surrounding this often-misunderstood topic.

What does a panning law govern? When a mono input feeds a stereo bus, the panning law determines the apparent and actual sound level as you sweep from one side of the stereo field to the other.

But why is a “law” needed? Doesn’t the level just stay the same as you pan? Not necessarily. Panning laws date back to analog consoles. If a pan control had a linear taper (in other words, a constant rate of resistance change as you turned it), then the sound was louder when panned to center. To compensate, hardware mixers used non-linear resistance tapers to drop the level, typically by -3 dB RMS, at the center. This gave an apparent level that was constant as you panned across the stereo soundstage. If that doesn’t make sense…just take my word for it, and keep reading.
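To make the arithmetic concrete, here’s a short Python sketch (a generic illustration of the math, not SONAR’s actual code) comparing an uncompensated taper, which leaves both channels at full gain when centered, with a sin/cos constant-power taper that drops each channel 3 dB at center:

    import numpy as np

    def db(gain):
        # Convert an amplitude gain to decibels (the floor avoids log of zero)
        return 20 * np.log10(max(gain, 1e-9))

    def naive_pan(pan):
        # Uncompensated taper: both channels sit at full gain when centered.
        # pan runs from 0.0 (hard left) to 1.0 (hard right).
        return min(1.0, 2 * (1 - pan)), min(1.0, 2 * pan)

    def sincos_pan(pan):
        # -3 dB center, sin/cos taper, constant power
        theta = pan * np.pi / 2
        return np.cos(theta), np.sin(theta)

    for pan in (0.0, 0.5, 1.0):
        for name, law in (("naive", naive_pan), ("sin/cos", sincos_pan)):
            left, right = law(pan)
            power = np.sqrt(left**2 + right**2)  # acoustic (RMS) sum of both speakers
            print(f"{name:7s} pan={pan:.1f}  L={db(left):+7.1f} dB  "
                  f"R={db(right):+7.1f} dB  power={db(power):+5.1f} dB")

The uncompensated taper measures +3 dB hotter at center than at either side; the sin/cos taper holds the acoustic sum at 0 dB across the whole sweep.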

Okay, then there’s a law. Isn’t that the end of it? Well, it wasn’t really a “law,” or a standard. Come to think of it, it wasn’t a specification or even a “recommendation.” Some engineers dropped the center level a little more to let the sides “pop” more, or to have mixes seem less “monoized” and therefore create more space for vocalists who were panned to center. Some didn’t drop the center level at all, and some did custom tweaks.

Why does this matter to a DAW like SONAR, which doesn’t have a hardware mixer? Different DAWs default to different panning laws. This is why duplicating a mix on different DAWs can yield different results, and lead to foolish online discussions about how one DAW sounds “punchier” or “wimpier” than another if someone brings in straight audio files and sets the panning and faders identically.

A mono signal of the same level feeds each fader pair, and each pair is subject to different SONAR panning laws. Note the difference in levels with the panpot panned to one side or centered. The tracks are in the same order as the descriptions in SONAR’s panning laws documentation and the listing in preferences. Although the sin/cos and square root versions may seem to produce the same results, the taper differs across the soundstage between the hard pans and center.

This sounds complicated, and is making my head explode—can you just tell me what I need to do so I can go back to making music? SONAR provides six different panning law options under Preferences, so not only can you choose the law you want, but the odds of matching a different DAW’s law are also excellent. The online help describes how each panning law affects the sound. So there are really only two crucial concepts:

  • The pan law you choose can affect a mix’s overall sound if you have a lot of mono sound sources (panpots with stereo channels are balance controls, which is a whole other topic). So try mixes with different laws, choose a law you like, and stick with it. I prefer -3 dB center, sin/cos taper, and constant power; the signal level stays at 0 dB when panned right or left, but drops by -3 dB in each channel when centered. This is how I built hardware mixers, so it’s familiar territory. It’s also available in many DAWs. But use what you like…after all, I’m not choosing what’s “right,” I’m simply choosing what I like.
  • If you import an OMF file from another DAW or need to duplicate a mix from another DAW, ask what panning law was used in creating the file. One of SONAR’s many cool features is that it will likely be able to match it.

There, that wasn’t so bad. Ignorance of the law is no excuse, and now you have answers to five questions about panning laws.

 


Basics: Five Questions About Using Stompboxes with SONAR

By Craig Anderton

Plug-in signal processors are a great feature of computer-based recording programs like SONAR, but you may have some favorite stompboxes with no plug-in equivalents—like that cool fuzz pedal you love, or the ancient analog delay you scored on eBay. Fortunately, with just a little bit of effort you can make SONAR think external hardware effects are actually plug-ins.

1. What do I need to interface stompboxes with SONAR? You’ll need a low-latency audio interface with an unused analog output and unused analog input (or two of each for stereo effects), and cords to patch these audio interface connections to the stompbox. We’ll use the TASCAM US-4×4 interface because it has extra I/O and low latency, but the same principles apply to other audio interfaces.

2. How do I hook up the effect and the interface? SONAR’s External Insert plug-in inserts in an FX bin and diverts the signal to the assigned audio interface output. You patch the audio interface output to the hardware effect’s input, then patch the hardware effect’s output to the assigned audio interface input. The signal returns through this input to the External Insert plug-in, and continues on its way through the mixer. For this example, we’ll assume a stompbox with a mono input and stereo output.

3. What are correct settings for the External Insert plug-in parameters? When you insert the External Insert into the FX bin, a window appears that provides all the controls needed to set up the external hardware; the code sketch after this list models the same signal flow.

  • Send. This section’s drop-down menu assigns the send output to the audio interface. In this example, the send feeds the US-4×4’s output 3. Patch this audio interface output to your effect’s input. (Note that if an output is already assigned, it won’t appear in the drop-down menu.)
  • Output level control. The level coming out of the computer will be much higher than what most stompboxes want, so in this example the output level control cuts the signal by about 12 dB to avoid overloading the effect.
  • Return. Assign this section’s drop-down menu to the audio interface input through which the stompbox signal returns (in this example, the US-4×4’s stereo inputs 3 and 4). Patch the hardware effect’s output(s) to the corresponding input(s).
  • Return level control. Because the stompbox will usually have a low-level output, this slider brings the gain back up for compatibility with the rest of the system. In this example, the slider shows about +10 dB of gain. (Note: You can invert the signal phase in the Return section if needed.)
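For the curious, here’s a rough offline analogy of the same send/return loop using Python’s sounddevice library. The filename, channel numbers, and gain values are placeholders that mirror the example above, and the sketch assumes the US-4×4 (or a similar interface) is the default audio device:

    import sounddevice as sd
    import soundfile as sf

    # Hypothetical dry source; in SONAR the send comes from the FX bin instead
    dry, fs = sf.read("dry_guitar.wav", dtype="float32")

    send_trim_db = -12.0   # pad the computer's hot output for the stompbox
    return_gain_db = 10.0  # make up the stompbox's low output level
    send = dry * 10 ** (send_trim_db / 20)

    # Output 3 feeds the pedal; the stereo return arrives on inputs 3 and 4
    wet = sd.playrec(send, samplerate=fs, output_mapping=[3], input_mapping=[3, 4])
    sd.wait()  # block until the pass finishes

    wet = wet * 10 ** (return_gain_db / 20)
    sf.write("wet_return.wav", wet, fs)

Note that the recorded return also contains the round-trip delay, which is exactly what question 4 addresses.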

4. Is it necessary to compensate for the delay caused (more…)


Mixing Heavy Metal with the ProChannel & Softube Mix Bundle

The Softube Mix Bundle is a strong and creative addition to SONAR’s ProChannel strip. This bundle adds five solid effects, great for any mix, to the Softube Saturation Knob already included in SONAR X3 Producer.

For this article I’ve mixed a Heavy Metal track from the group Dark Ride using mostly Softube ProChannel effects. You can download the project here and follow along if you have the Softube Mix Bundle. If not, the screenshots in this article should suffice.

Setting up the Mix

Listen & add Markers

On first listen, I put Markers throughout the entire project to make navigating and looping sections much easier. Using the shortcut M, it’s pretty easy to drop in a Marker wherever your Now Time Marker resides; after that, you can name them accordingly. This particular song was relatively short and included an introduction, two verses, three choruses, a bridge, a solo section, and a breakdown.

Routing, grouping, and track folders

While you’re mixing, it’s easy to become slightly overwhelmed by larger projects. What I do in this instance is make a stereo bus for every group of instruments in the project. This allows me to apply mixing effects to the instrument groups as a whole before they hit my main mix bus. The tracks route directly to the buses, and the buses route directly to the 2-bus. I also assigned each instrument group a color category and a track folder to make things a bit easier to manage within the Track View.

Levels & panning

Metal in general consists of wide, abrasive rhythm guitars, huge, punchy drums, (more…)


DAW Best Practices: How to get a bigger drum sound with reverb

The Biggest, Baddest Drum Reverb Sound Ever

[Originally posted as a daily tip on the SONAR forums; reposted here on the blog.]

By Craig Anderton

You want big-sounding drums? Want your metal drum tracks to sound like the Drums of Doom? Keep reading. This technique transposes a copy of the reverb and pans the two reverb tracks oppositely. It works best with unpitched sounds like percussion.

1. Insert a reverb send.

Insert a send in your drum track, then insert your reverb of choice in the Send bus.

 

2. Render the reverb, isolated from the drum track. (more…)
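Here’s a minimal offline sketch of where the technique ends up, in Python with librosa. It assumes you’ve already rendered the reverb as a 100% wet mono file (the filename is a placeholder), and the transposition interval is simply a starting point to tweak by ear:

    import numpy as np
    import librosa
    import soundfile as sf

    # Load the rendered, 100% wet reverb track
    wet, sr = librosa.load("drum_reverb_wet.wav", sr=None, mono=True)

    # Transpose a copy of the reverb
    shifted = librosa.effects.pitch_shift(wet, sr=sr, n_steps=-5.0)

    # Pan the two reverb tracks oppositely: original left, transposed right
    n = min(len(wet), len(shifted))
    stereo = np.column_stack([wet[:n], shifted[:n]])
    sf.write("drum_reverb_wide.wav", stereo, sr)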


Basics: Five Questions about Filter Response

By Craig Anderton 

You can think of filters as combining amplification and attenuation—they make some frequencies louder, and some frequencies softer. Filters are the primary elements in equalizers, the most common signal processors used in recording. Equalization can make dull sounds bright, tighten up “muddy” sounds by reducing the bass frequencies, reduce vocal or instrument resonances, and more. 

Too many people adjust equalization with their eyes, not their ears. For example, once after doing a mix I noticed the client writing down all the EQ settings I’d used. When I asked why, he said it was because he liked the EQ and wanted to use the same settings on these instruments in future mixes.

While certain EQ settings can certainly be a good point of departure, EQ is a part of the mixing process. Just as levels, panning, and reverb are different for each mix, EQ should be custom-tailored for each mix as well. Part of this involves knowing how to find the magic EQ frequencies for particular types of musical material, and that requires knowing the various types of filter responses used in equalizers. 

What’s a lowpass response? A filter with a lowpass response passes all frequencies below a certain frequency (called the cutoff or rolloff frequency), while rejecting frequencies above the cutoff frequency (Fig. 1). In real-world filters, this rejection is not total. Instead, past the cutoff frequency, the high frequency response rolls off gently. The rate at which it rolls off is called the slope. The slope’s spec represents how much the response drops per octave; higher slopes mean a steeper drop past the cutoff. Sometimes a lowpass filter is called a high cut filter.

Fig. 1: This lowpass filter response has a cutoff of 1100 Hz, and a moderate 24 dB per octave slope.
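For a numerical feel for cutoff and slope, here’s a short Python/SciPy sketch of a comparable response. A 4th-order Butterworth filter (one common design, not necessarily what any given EQ uses) rolls off at 24 dB per octave, since each filter order contributes 6 dB per octave:

    import numpy as np
    from scipy import signal

    fs = 44100
    # 4th-order lowpass with an 1100 Hz cutoff, as second-order sections
    sos = signal.butter(4, 1100, btype="lowpass", fs=fs, output="sos")

    freqs, response = signal.sosfreqz(sos, worN=4096, fs=fs)
    for f in (550, 1100, 2200, 4400):  # an octave below the cutoff through two above
        idx = np.argmin(np.abs(freqs - f))
        print(f"{f:5d} Hz: {20 * np.log10(np.abs(response[idx])):+6.1f} dB")

The printout shows the response essentially flat an octave below the cutoff, down 3 dB at the cutoff itself, and dropping roughly 24 dB for each octave above it.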

What’s a highpass response? This is the inverse of a lowpass response. It passes frequencies above the cutoff frequency, while rejecting frequencies below the cutoff (Fig. 2). It also (more…)


Basics: Five Questions about Audio Specs

By Craig Anderton 

Specifications don’t have to be the domain of geeks—they’re not that hard to understand, and can guide you when choosing audio gear. Let’s look at five important specs, and provide a real-world context by referencing them to TASCAM’s new US-2×2 and US-4×4 audio interfaces. 

First, we need to understand the decibel (dB). This is a unit of measurement for audio levels (like an inch or meter is a unit of measurement for length). A 1 dB change is approximately the smallest audio level difference a human can hear. A dB spec can also have a – or + sign. For example, a signal with a level of -20 dB sounds softer than one with a level of -10 dB, but both are softer than one with a level of +2 dB. 
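The arithmetic behind those numbers is simple: for amplitude ratios, dB = 20 × log10(ratio). A quick Python illustration:

    import math

    def ratio_to_db(ratio):
        # Convert an amplitude ratio to decibels
        return 20 * math.log10(ratio)

    print(ratio_to_db(0.5))  # halving the amplitude: about -6.0 dB
    print(ratio_to_db(2.0))  # doubling the amplitude: about +6.0 dB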

1. What’s frequency response? Ideally, audio gear designed for maximum accuracy should reproduce all audible frequencies equally—bass shouldn’t be louder than treble, or vice-versa. A frequency response graph shows what happens when you feed test frequencies of the same level into a device’s input and then measure the output for any variations. You want a response that’s flat (even) from 20 Hz to 20 kHz, because that’s the audible range for humans with good hearing. Here’s the frequency response graph for TASCAM’s US-2×2 interface (in all examples, the US-4×4 has the same specs).

This shows the response is essentially “flat” from 50 Hz to 20 kHz, and down 1 dB at 20 Hz. Response typically goes down even further below 20 Hz; this is deliberate, because there’s no need to reproduce signals we can’t really hear. The bottom line is this graph shows that the interface reproduces everything from the lowest note on a bass guitar to a cymbal’s high frequencies equally well. 

2. What’s Signal-to-Noise Ratio? All electronic circuits generate (more…)


Basics: Five Questions about Latency and Computer Recording

Get the lowdown on low latency, and what it means to you

By Craig Anderton 

Recording with computers has brought incredible power to musicians at amazing prices. However, there are some compromises—such as latency. Let’s find out what causes it, how it affects you, and how to minimize it.  

1. What is latency? When recording, a computer is often busy doing other tasks and may ignore the incoming audio for short amounts of time. This can result in audio dropouts, clicks, excessive distortion, and sometimes program crashes. To compensate, recording software like SONAR dedicates some memory (called a sample buffer) to store incoming audio temporarily—sort of like an “audio savings account.” If needed, your recording program can make a “withdrawal” from the buffer to keep the audio stream flowing. 

Latency is “geek speak” for the delay that occurs between when you play or sing a note, and what you hear when you monitor your playing through your computer’s output. Latency has three main causes: 

  • The sample buffer. For example, storing 5 milliseconds (abbreviated ms, which equals 1/1000th of a second) of audio adds 5 ms of latency (Fig. 1). Most buffer sizes are specified in samples, although some specify this in ms; see the arithmetic sketch after this list.

Fig. 1: The control panel for TASCAM’s US-2×2 and US-4×4 audio interfaces shows the sample buffer set to 64 samples.

  • Other hardware. The audio interface that connects to your computer converts audio signals into digital signals your computer can understand (and vice-versa—it also converts computer data back into audio). These conversions take some time, and the USB port that connects to your interface adds buffers of its own.
  • Delays within the recording software itself. A full explanation would require another article, but in short, this usually involves inserting certain types of processors within your recording software. 
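As promised, the sample-buffer arithmetic from the first bullet is easy to check yourself:

    # Convert a buffer size in samples to milliseconds of latency
    def buffer_latency_ms(samples, sample_rate):
        return 1000.0 * samples / sample_rate

    # The 64-sample buffer from Fig. 1 at common sample rates
    for rate in (44100, 48000, 96000):
        print(f"64 samples @ {rate} Hz = {buffer_latency_ms(64, rate):.2f} ms")
    # Prints about 1.45 ms, 1.33 ms, and 0.67 ms. Total round-trip latency
    # also includes the converter, USB, and plug-in delays described above.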

2. Why does latency matter? (more…)


Optimizing Vocals with DSP

Optimizing tracks with DSP, then applying the DSP-laden VX-64 Vocal Strip judiciously, offers very flexible vocal processing.

By Craig Anderton 

This is kind of a “twofer” article about DSP—first we’ll look at some DSP menu items, then apply some signal processing courtesy of the VX-64—all with the intention of creating some great vocal sounds.

PREPPING A VOCAL WITH “MENU” DSP 

“Prepping” a vocal with DSP before processing can make the processing more effective. For example, if you want to compress your vocal and there are significant level variations, you may end up adding lots of compression to accommodate quiet parts. But then when loud parts kick in, the compression starts pumping. 

Here’s another example. A lot of people use low-cut filters to banish rogue plosives (e.g., a popping “b” or “p” sound). However, it’s often better to add a fade-in to get rid of the plosive; this retains some of the plosive sound, and avoids affecting frequency response. 

Adding a fade-in to a plosive can get rid of the objectionable section while leaving the vocal timbre untouched. 

Also check if any levels need to be evened out, because there will usually be some places where the peaks are considerably higher than the rest of the vocal, and you don’t want these pumping the compressor either. The easiest fix is to select the track, drag in the timeline above the area you want to edit, then choose Process > Apply Effect > Gain and drop the level by a dB or two.

This peak is considerably louder than the rest of the vocal, but reducing it a few dB will bring it into line. 

Also note that if you have Melodyne Editor, you can use the Percussive algorithm with the volume tool to level out words visually. This is really fast and effective. 

While you’re playing around with DSP, this is also a good time to cut out silences, then add fade-outs into silence and fade-ins up from silence. Do this with the vocal soloed, so you can hear any little issues that might come back to haunt you later. Also, it’s sometimes a good idea to normalize individual vocal clips up to –3 dB or so (leave some headroom) so that the compressor sees a more consistent signal.

The clip on the left has been normalized and faded out. The silence between clips has been cut away. The clip on the right fades in, but has not been normalized. 
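The normalization math itself is trivial. Here’s a minimal Python sketch of bringing a clip up to a –3 dB peak (the filename is a placeholder; inside SONAR you’d simply use the Normalize command):

    import numpy as np
    import soundfile as sf

    clip, sr = sf.read("vocal_clip.wav")

    target_db = -3.0  # leave some headroom
    peak = np.max(np.abs(clip))
    if peak > 0:
        clip = clip * (10 ** (target_db / 20) / peak)  # scale the peak to -3 dBFS
    sf.write("vocal_clip_normalized.wav", clip, sr)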

With DSP processing, it’s good practice to work on a copy of the vocal, and make the changes permanent as you do them. The simplest way to apply (more…)


The Art of Transient Shaping with the TS-64

Understand this often-misunderstood processor, and your tracks will benefit greatly 

By Craig Anderton 

Transient Shapers are interesting plug-ins. I don’t see them mentioned a lot, but that might be because they’re not necessarily intuitive to use. Nor are they bundled with a lot of DAWs, although SONAR is a welcome exception. 

I’ve used transient shaping on everything from a tom-based drum part to make each hit “pop” a little more, to bass to bring out the attacks and add “weight” to the decay, to acoustic guitar to tame overly aggressive attacks. The TS-64 has some pretty sophisticated DSP, so let’s find out how to take advantage of its talents.

But first, a warning: transient shaping requires a “look-ahead” function, as it has to know when transients are coming, analyze them, filter them, and then calculate when and how to apply particular amounts of gain so it can act on the transients as soon as they occur. As a result, simply inserting the TS-64 will increase latency. If this is a problem, either leave it bypassed until it’s time to mix, or render the audio track once you get the sound you want. Keep an original of the audio track in case you end up deciding to change the shaping later on. 

TS-64 TRANSIENT SHAPER BASICS

A Transient Shaper is a dynamics processor that modifies only a signal’s attack characteristics. If there’s no defined transient, the TS-64 won’t do much, or worse yet, may add unpleasant effects.

Transient shapers are not just for drums—guitars, electric pianos, bass, and even some program material are all suitable for TS-64 processing if they have sharp, defined transients. And it’s not just about making transients more percussive; you can also use the TS-64 to “soften” transients, which gives a less percussive effect so a sound can sit further back in a track.

There are two main elements to transient shaping. The first is (more…)
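To give a feel for one generic way this kind of processing can work (a bare-bones sketch, not the TS-64’s actual algorithm, which adds the look-ahead and filtering described earlier), here’s a differential-envelope shaper in Python:

    import numpy as np

    def envelope(x, sr, attack_ms, release_ms):
        # One-pole envelope follower with separate attack and release times
        a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = np.zeros(len(x))
        level = 0.0
        for i, s in enumerate(np.abs(x)):
            coeff = a_att if s > level else a_rel
            level = coeff * level + (1.0 - coeff) * s
            env[i] = level
        return env

    def shape_transients(x, sr, amount=2.0):
        # amount > 0 emphasizes attacks; amount < 0 softens them
        fast = envelope(x, sr, attack_ms=1.0, release_ms=100.0)   # hugs the attacks
        slow = envelope(x, sr, attack_ms=30.0, release_ms=100.0)  # lags behind them
        # The fast envelope exceeds the slow one only while a transient is rising,
        # so the gain change lands on the attack and leaves the rest untouched
        gain = 1.0 + amount * np.clip(fast - slow, 0.0, None)
        return x * gain

Because this naive version has no look-ahead, it always reacts a hair late, which is exactly the problem the TS-64’s look-ahead (and its attendant latency) solves.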
