Mining Gold from PA Recordings with SONAR


by Craig Parmerlee – SONAR user since SONAR 7

Many people use SONAR and other DAWs to produce high-quality studio recordings, while others use SONAR as part of a compositional process. Most of my SONAR usage is a little different: I process live recordings tracked in a concert or club setting. That usage presents problems that aren’t as apparent in a controlled studio environment. This blog presents a workflow and the SONAR features I have found valuable when processing live recordings.

Objectives

  1. In most cases, my primary objective is to produce a recording that the musicians can study in order to improve their performance.
  2. In some cases, the performance and production quality will be high enough to serve as demo material to promote the group.
  3. I try to deliver a mixed and mastered copy to the musicians within 48 hours, while the event is still fresh in mind, so speed and efficiency are very important.
  4. Often a musician will ask for a further edit on one of the songs, for example, to include in their personal résumé.  Flexibility and ability to recall settings are important.

Changing Expectations

Tascam DR-40 Field Recorder

Years ago, I did such projects using Audacity, which seemed adequate at the time.  However, expectations have changed radically.

Today many musicians have a low-cost stereo field recorder such as the TASCAM DR-40. These recorders are the audio equivalent of point-and-shoot cameras: for around $100, they can produce remarkably good quality under ideal circumstances.

This has become the baseline against which many musicians judge other live recordings.  Even though I want to produce quick results, if I can’t do substantially better than a TASCAM DR-40, for example, then I am wasting my time (I should note I love those small field recorders and often use them too, but that is not the subject of this blog).

Fortunately, with SONAR I have found a workflow and a set of “go-to” features that let me do much better than a stereo field recorder almost every time, using only the microphones that are already placed for the live PA system.

A Word About My Background

(more…)


panup: Studio Session & LANDR Test

by Panu Pentikäinen (panup at Cakewalk forums)

Alex ja Armottomat (Alex) visited my recording studio in February. We had five days total to do a fully mastered CD, make promo photos of the band, and record live video footage in the studio for later editing. I’ll describe here how one of the six songs was recorded and mixed.

Drums, bass and the electric guitar were recorded live with one to three takes. Acoustic guitar and demo vocals were recorded, too, but they were re-recorded later over the backing tracks. The drummer was the only one to hear the metronome (standard SONAR audio metronome, time signature set to 1/4); the others had eye contact with the drummer. Although the guitar amp was in another room (the bass was recorded direct), there was no spill other than a faint demo vocal in the drum room mics.

Time is always the enemy when you have to record many songs in a limited session. I decided to make decisions before pressing the R (record) button rather than leaving everything to the mixing phase. I applied EQ to the kick drum, the drum room mics, and the acoustic guitar before A/D conversion. One of the phrases I hate is: “This sounds like crap now, but it hasn’t been mixed yet.” Some people really think that everything can be fixed in the mix! (Although to be fair, you often can, because in SONAR we have VocalSync, built-in Melodyne, a built-in drum trigger, and AudioSnap.)

And although it sounds incredible, now it’s even possible to upload songs from SONAR to the LANDR online mastering service and instantly hear a preview of how the song would sound as mastered. Hearing the demo master may help you to improve the project’s mix. (more…)


How Jerry Gerber Creates Incredible Compositions Without Ever Using the PRV

The art of “making music” in this digital age… When you really think about it, how incredible is it that as music-creators we can take something from our minds, and sculpt it into something tangible?  No matter how novice or professional you are, no matter what others think or say about the music YOU create, there’s no denying that we are living in an incredible time of opportunity for crafting music.

A while back I was introduced to a gentleman and composer working in SONAR out of Northern California by the name of Jerry Gerber.  I knew he was a great composer from his accomplished list of credentials, but what I wasn’t prepared for was being absolutely fascinated by the sonic depth of “his sound,” the detail and integrity of his tracks, and moreover—how he accomplishes all of the above.  When you listen to his work, and then hear his theoretical viewpoint on how to correctly compose and produce music, you quickly realize that this guy has tapped into something a bit deeper than most musicians.

What really made an impression on me was that without ever using the Piano Roll View (PRV), Jerry Gerber has composed and produced for some very high-profile films, television shows, computer games, concerts, dance and interactive media, and back in the day wrote all of the original music for the remake of the popular children’s television show, The Adventures of Gumby.  His approach to all this is an expert level of “MIDI Sequencing,” which he explains in the newest edition of the SONAR Newburyport eZine.

I was intrigued and beyond impressed by his words in the eZine, so I decided to [self-indulgently] dig a bit deeper by reaching out to Jerry to get some insight into the method behind the madness on his new record.  His words of musical wisdom make a lot of sense for anyone creating music in any genre, and I highly recommend reading the interview, and then applying what you learn by analyzing and enjoying his new full-length composition.

[Cakewalk]:       You talked a lot about the “programming” aspect of the new record, but what was the “writing” process like for you? (more…)


Basics: Five Questions about Effects Placement

By Craig Anderton

There are plenty of places in SONAR where you can process the audio signal, but you need to know how to choose the right one.

What’s an “insert” effect? Don’t you always “insert” an effect? You indeed “insert” effects, but there’s a specific effect type usually called an Insert effect that inserts into an individual mixer channel. In SONAR, this inserts into a channel’s FX Bin or the ProChannel (Fig. 1).

Fig. 1: The FX bins for two channels have insert effects, as does the ProChannel for the Vocals channel.

Insert effects affect only the channel into which they are inserted. Typical insert effects include dynamics processors, distortion, EQ (because of EQ’s importance, it’s a permanent ProChannel insert effect), flanging, and other effects that apply to a specific sound in a specific channel.

Then what’s a “send” effect? Also called an (more…)


Basics: Five Questions About Panning Laws

By Craig Anderton

It’s not just a good idea, it’s the law…panning law, that is. Let’s dispel the confusion surrounding this sometimes confusing topic.

What does a panning law govern? When a mono input feeds a stereo bus, the panning law determines the apparent and actual sound level as you sweep from one side of the stereo field to the other.

But why is a “law” needed? Doesn’t the level just stay the same as you pan? Not necessarily. Panning laws date back to analog consoles. If a pan control had a linear taper (in other words, a constant rate of resistance change as you turned it), then the sound was louder when panned to center. To compensate, hardware mixers used non-linear resistance tapers to drop the level, typically by -3 dB RMS, at the center. This gave an apparent level that was constant as you panned across the stereo soundstage. If that doesn’t make sense…just take my word for it, and keep reading.
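To make that center drop concrete, here is a minimal Python sketch of a constant-power (sin/cos) pan law. It is only an illustration of the math, not SONAR’s internal code, and the way it maps pan position onto the sin/cos curve is an assumption of the sketch.

```python
import math

def constant_power_pan(pan):
    """Constant-power (sin/cos) pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain) as linear amplitude factors."""
    angle = (pan + 1.0) * math.pi / 4.0  # map -1..+1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)

def to_db(gain):
    return 20.0 * math.log10(gain) if gain > 0 else float("-inf")

for p in (-1.0, 0.0, 1.0):
    left, right = constant_power_pan(p)
    print(f"pan {p:+.1f}: L = {to_db(left):.2f} dB, R = {to_db(right):.2f} dB")
# Hard pans print 0 dB in one channel; center prints about -3.01 dB in each channel.
```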

Okay, then there’s a law. Isn’t that the end of it? Well, it wasn’t really a “law,” or a standard. Come to think of it, it wasn’t a specification or even a “recommendation.” Some engineers dropped the center level a little more to let the sides “pop” more, or to have mixes seem less “monoized” and therefore create more space for vocalists who were panned to center. Some didn’t drop the center level at all, and some did custom tweaks.

Why does this matter to a DAW like SONAR, which doesn’t have a hardware mixer? Different DAWs default to different panning laws. This is why duplicating a mix on different DAWs can yield different results, and lead to foolish online discussions about how one DAW sounds “punchier” or “wimpier” than another if someone brings in straight audio files and sets the panning and faders identically.

A mono signal of the same level feeds each fader pair, and each pair is subject to different SONAR panning laws. Note the difference in levels with the panpot panned to one side or centered. The tracks are in the same order as the descriptions in SONAR’s panning laws documentation and the listing in preferences. Although the sin/cos and square root versions may seem to produce the same results, the taper differs across the soundstage between the hard pans and center.

This sounds complicated, and is making my head explode—can you just tell me what I need to do so I can go back to making music? SONAR provides six different panning law options under Preferences, so not only can you choose the law you want, the odds of being able to match a different DAW’s law are excellent. The online help describes how the panning laws affect the sound. So there are really only two crucial concepts:

  • The pan law you choose can affect a mix’s overall sound if you have a lot of mono sound sources (panpots on stereo channels are balance controls, which is a whole other topic). So try mixes with different laws, choose a law you like, and stick with it. I prefer -3 dB center, sin/cos taper, constant power; the signal level stays at 0 dB when panned hard right or left, but drops by 3 dB in each channel when centered. This is how I built hardware mixers, so it’s familiar territory. It’s also available in many DAWs. But use what you like…after all, I’m not choosing what’s “right,” I’m simply choosing what I like.
  • If you import an OMF file from another DAW or need to duplicate a mix from another DAW, ask what panning law was used in creating the file. One of SONAR’s many cool features is that it will likely be able to match it.

There, that wasn’t so bad. Ignorance of the law is no excuse, and now you have answers to five questions about panning laws.

 


Basics: Five Questions About Using Stompboxes with SONAR

by Craig Anderton

Plug-in signal processors are a great feature of computer-based recording programs like SONAR, but you may have some favorite stompboxes with no plug-in equivalents—like that cool fuzz pedal you love, or the ancient analog delay you scored on eBay. Fortunately, with just a little bit of effort you can make SONAR think external hardware effects are actually plug-ins.

1. What do I need to interface stompboxes with SONAR? You’ll need a low-latency audio interface with an unused analog output and an unused analog input (or two of each for stereo effects), and cords to patch these audio interface connections to the stompbox. We’ll use the TASCAM US-4×4 interface because it has extra I/O and low latency, but the same principles apply to other audio interfaces.

2. How do I hook up the effect and the interface? SONAR’s External Insert plug-in inserts in an FX bin and diverts the signal to the assigned audio interface output. You patch the audio interface output to a hardware effect’s input, then patch the hardware effect’s output to the assigned audio interface input. This input returns the signal to the External Insert plug-in, and it continues on its way through the mixer. For this example, we’ll assume a stompbox with a mono input and stereo output.

3. What are correct settings for the External Insert plug-in parameters? When you insert the External Insert into the FX bin, a window appears that provides all the controls needed to set up the external hardware.

  • Send. This section’s drop-down menu assigns the send output to the audio interface. In this example, the send feeds the US-4×4’s output 3. Patch this audio interface output to your effect’s input. (Note that if an output is already assigned, it won’t appear in the drop-down menu.)
  • Output level control. The level coming out of the computer will be much higher than most stompboxes want, so in this example the output level control cuts the signal by about 12 dB to avoid overloading the effect.
  • Return. Assign this section’s drop-down menu to the audio interface input through which the stompbox signal returns (in this example, the US-4×4’s stereo inputs 3 and 4). Patch the hardware effect output(s) to this input or inputs.
  • Return level control. Because the stompbox will usually have a low-level output, this slider brings the gain back up for compatibility with the rest of the system. In this example, the slider shows about +10 dB of gain. (Note: You can invert the signal phase in the Return section if needed.) A rough sketch of the resulting gain staging follows this list.
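As a rough sketch of the gain staging described above (the -12 dB and +10 dB figures are just this article’s example values, not fixed rules), here is how those trims translate into linear factors and a net round-trip gain:

```python
def db_to_gain(db):
    """Convert a dB trim to a linear amplitude factor (20*log10 convention)."""
    return 10.0 ** (db / 20.0)

send_trim_db = -12.0    # pad the hot interface output going to the stompbox
return_trim_db = +10.0  # make-up gain on the stompbox's low-level return

print(f"send gain   x{db_to_gain(send_trim_db):.2f}")    # about 0.25
print(f"return gain x{db_to_gain(return_trim_db):.2f}")  # about 3.16
print(f"net trim    {send_trim_db + return_trim_db:+.1f} dB (ignoring the pedal's own gain)")
```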

4. Is it necessary to compensate for the delay caused (more…)


Basics: Five Questions about Filter Response

By Craig Anderton 

You can think of filters as combining amplification and attenuation—they make some frequencies louder, and some frequencies softer. Filters are the primary elements in equalizers, the most common signal processors used in recording. Equalization can make dull sounds bright, tighten up “muddy” sounds by reducing the bass frequencies, reduce vocal or instrument resonances, and more. 

Too many people adjust equalization with their eyes, not their ears. For example, once after doing a mix I noticed the client writing down all the EQ settings I’d done. When I asked why, he said it was because he liked the EQ and wanted to use the same settings on these instruments in future mixes. 

While certain EQ settings can certainly be a good point of departure, EQ is a part of the mixing process. Just as levels, panning, and reverb are different for each mix, EQ should be custom-tailored for each mix as well. Part of this involves knowing how to find the magic EQ frequencies for particular types of musical material, and that requires knowing the various types of filter responses used in equalizers. 

What’s a lowpass response? A filter with a lowpass response passes all frequencies below a certain frequency (called the cutoff or rolloff frequency), while rejecting frequencies above the cutoff frequency (Fig. 1). In real world filters, this rejection is not total. Instead, past the cutoff frequency, the high frequency response rolls off gently. The rate at which it rolls off is called the slope. The slope’s spec represents how much the response drops per octave; higher slopes mean a steeper drop past the cutoff. Sometimes a lowpass filter is called a high cut filter.

Fig. 1: This lowpass filter response has a cutoff of 1100 Hz, and a moderate 24 dB per octave slope.
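As a back-of-the-envelope sketch of what that slope means (an idealized constant-slope rolloff, not a model of any particular EQ), the attenuation above the cutoff is simply the slope multiplied by the number of octaves above the cutoff:

```python
import math

def rolloff_db(freq_hz, cutoff_hz=1100.0, slope_db_per_octave=24.0):
    """Idealized lowpass rolloff: 0 dB up to the cutoff, then
    slope_db_per_octave of attenuation for every octave above it."""
    if freq_hz <= cutoff_hz:
        return 0.0
    return -slope_db_per_octave * math.log2(freq_hz / cutoff_hz)

for f in (1100, 2200, 4400, 8800):
    print(f"{f:>5} Hz: {rolloff_db(f):+.1f} dB")
# One octave above the 1100 Hz cutoff (2200 Hz) is down about 24 dB, two octaves about 48 dB.
```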

What’s a highpass response? This is the inverse of a lowpass response. It passes frequencies above the cutoff frequency, while rejecting frequencies below the cutoff (Fig. 2). It also (more…)


Basics: Five Questions about Audio Specs

By Craig Anderton 

Specifications don’t have to be the domain of geeks—they’re not that hard to understand, and can guide you when choosing audio gear. Let’s look at five important specs, and provide a real-world context by referencing them to TASCAM’s new US-2×2 and US-4×4 audio interfaces. 

First, we need to understand the decibel (dB). This is a unit of measurement for audio levels (like an inch or meter is a unit of measurement for length). A 1 dB change is approximately the smallest audio level difference a human can hear. A dB spec can also have a – or + sign. For example, a signal with a level of -20 dB sounds softer than one with a level of -10 dB, but both are softer than one with a level of +2 dB. 
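To see what those dB figures mean as actual amplitude ratios, here is a small Python sketch using the standard 20·log10 amplitude convention (the -20, -10, and +2 dB values are simply the ones from the paragraph above):

```python
def db_to_amplitude_ratio(db):
    """Relative amplitude for a level expressed in dB (20*log10 convention)."""
    return 10.0 ** (db / 20.0)

for level_db in (-20, -10, 2):
    print(f"{level_db:+d} dB -> {db_to_amplitude_ratio(level_db):.2f}x amplitude")
# -20 dB is 0.10x, -10 dB is about 0.32x, and +2 dB is about 1.26x the 0 dB reference.
```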

1. What’s frequency response? Ideally, audio gear designed for maximum accuracy should reproduce all audible frequencies equally—bass shouldn’t be louder than treble, or vice-versa. A frequency response graph shows what happens when you feed test frequencies of the same level into a device’s input and then measure the output for any variations. You want a response that’s flat (even) from 20 Hz to 20 kHz, because that’s the audible range for humans with good hearing. Here’s the frequency response graph for TASCAM’s US-2×2 interface (in all examples, the US-4×4 has the same specs).

This shows the response is essentially “flat” from 50 Hz to 20 kHz, and down 1 dB at 20 Hz. Response typically goes down even further below 20 Hz; this is deliberate, because there’s no need to reproduce signals we can’t really hear. The bottom line is this graph shows that the interface reproduces everything from the lowest note on a bass guitar to a cymbal’s high frequencies equally well. 
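To illustrate the idea behind such a measurement (a simplified sketch, not how TASCAM produces its published graphs; the pass-through device_under_test function is a stand-in for real hardware), here is Python code that feeds equal-level test tones through a device and reports the output level relative to 1 kHz:

```python
import math

SAMPLE_RATE = 44100  # Hz

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def test_tone(freq_hz, seconds=0.5, amplitude=0.5):
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def device_under_test(samples):
    # Stand-in for the gear being measured; a real test would play the tone
    # through the interface and capture what comes back.
    return samples

reference = rms(device_under_test(test_tone(1000)))
for freq in (20, 50, 100, 1000, 10000, 20000):
    level = rms(device_under_test(test_tone(freq)))
    print(f"{freq:>5} Hz: {20 * math.log10(level / reference):+.2f} dB relative to 1 kHz")
# A flat response prints values near 0 dB across the whole band.
```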

2. What’s Signal-to-Noise Ratio? All electronic circuits generate (more…)


Basics: Five Questions about Latency and Computer Recording

Get the lowdown on low latency, and what it means to you

By Craig Anderton 

Recording with computers has brought incredible power to musicians at amazing prices. However, there are some compromises—such as latency. Let’s find out what causes it, how it affects you, and how to minimize it.  

1. What is latency? When recording, a computer is often busy doing other tasks and may ignore the incoming audio for short amounts of time. This can result in audio dropouts, clicks, excessive distortion, and sometimes program crashes. To compensate, recording software like SONAR dedicates some memory (called a sample buffer) to store incoming audio temporarily—sort of like an “audio savings account.” If needed, your recording program can make a “withdrawal” from the buffer to keep the audio stream flowing. 

Latency is “geek speak” for the delay that occurs between when you play or sing a note, and what you hear when you monitor your playing through your computer’s output. Latency has three main causes: 

  • The sample buffer. For example, storing 5 milliseconds (abbreviated ms, which equals 1/1000th of a second) of audio adds 5 ms of latency (Fig. 1). Most buffer sizes are specified in samples, although some are specified in ms (see the sketch after this list for converting between samples and milliseconds).

Fig. 1: The control panel for TASCAM’s US-2×2 and US-4×4 audio interfaces, showing the sample buffer set to 64 samples.

  • Other hardware. The audio interface that connects to your computer converts analog audio into digital data the computer can understand, and converts it back into audio on the way out; those conversions take some time. The USB connection between the interface and the computer adds buffers of its own.
  • Delays within the recording software itself. A full explanation would require another article, but in short, this usually involves inserting certain types of processors within your recording software. 
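For the buffer math in the first bullet, here is a small Python sketch (the 64-sample buffer and 44.1 kHz sample rate are just example values) converting a buffer size in samples to milliseconds of latency:

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz=44100):
    """Milliseconds of delay contributed by one buffer of the given size."""
    return 1000.0 * buffer_samples / sample_rate_hz

for size in (64, 128, 256, 512):
    print(f"{size:>4} samples -> {buffer_latency_ms(size):.2f} ms per buffer")
# 64 samples at 44.1 kHz is about 1.45 ms; the full round trip (input and output
# buffers, plus converter and USB delays) adds more than a single buffer's worth.
```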

2. Why does latency matter? (more…)


Optimizing Vocals with DSP

Optimizing tracks with DSP, then adding some judicious use of the DSP-laden VX-64 Vocal Strip, offers very flexible vocal processing. 

By Craig Anderton 

This is kind of a “twofer” article about DSP—first we’ll look at some DSP menu items, then apply some signal processing courtesy of the VX-64—all with the intention of creating some great vocal sounds. 

PREPPING A VOCAL WITH “MENU” DSP 

“Prepping” a vocal with DSP before processing can make the processing more effective. For example, if you want to compress your vocal and there are significant level variations, you may end up adding lots of compression to accommodate quiet parts. But then when loud parts kick in, the compression starts pumping. 

Here’s another example. A lot of people use low-cut filters to banish rogue plosives (e.g., a popping “b” or “p” sound). However, it’s often better to add a fade-in to get rid of the plosive; this retains some of the plosive sound, and avoids affecting frequency response. 

Adding a fade-in to a plosive can get rid of the objectionable section while leaving the vocal timbre untouched. 
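As a minimal illustration of what that fade-in does to the samples (plain Python over a list of floats, not SONAR’s actual fade implementation; the 20 ms default length is just an assumption), a short linear ramp tames the plosive without touching the frequency response:

```python
def fade_in(samples, sample_rate_hz=44100, fade_ms=20.0):
    """Apply a linear fade-in over the first fade_ms milliseconds of a clip."""
    fade_samples = min(len(samples), int(sample_rate_hz * fade_ms / 1000.0))
    faded = list(samples)
    for i in range(fade_samples):
        faded[i] *= i / fade_samples  # ramp the gain from 0.0 up toward 1.0
    return faded

# Example: the loud burst at the start of a clip is scaled down smoothly.
clip = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
print(fade_in(clip, sample_rate_hz=1000, fade_ms=5.0))
```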

Also check if any levels need to be evened out, because there will usually be some places where the peaks are considerably higher than the rest of the vocal, and you don’t want these pumping the compressor either. The easiest fix is to select a track, drag in the timeline above the area you want to edit, then go Process > Apply Effect > Gain and drop the level by a dB or two. 

This peak is considerably louder than the rest of the vocal, but reducing it a few dB will bring it into line. 

Also note that if you have Melodyne Editor, you can use the Percussive algorithm with the volume tool to level out words visually. This is really fast and effective. 

While you’re playing around with DSP, this is also a good time to cut out silences, then add fade-outs into silence and fade-ins up from silence. Do this with the vocal soloed, so you can hear any little issues that might come back to haunt you later. Also, it’s sometimes a good idea to normalize individual vocal clips up to –3 dB or so (leave some headroom) so that the compressor sees a more consistent signal. 

The clip on the left has been normalized and faded out. The silence between clips has been cut away. The clip on the right fades in, but has not been normalized. 
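Here is a rough sketch of that normalize-with-headroom step in plain Python (a simple peak normalize to -3 dBFS; SONAR’s Normalize command and Melodyne’s volume tool differ in the details):

```python
def normalize_peak(samples, target_db=-3.0):
    """Scale a clip so its peak sits target_db below full scale (0 dBFS)."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)  # silent clip: nothing to do
    target_peak = 10.0 ** (target_db / 20.0)  # -3 dB is about 0.708 of full scale
    gain = target_peak / peak
    return [s * gain for s in samples]

quiet_clip = [0.05, -0.12, 0.25, -0.18, 0.10]
print(normalize_peak(quiet_clip))  # the 0.25 peak is scaled up to about 0.708
```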

With DSP processing, it’s good practice to work on a copy of the vocal, and make the changes permanent as you do them. The simplest way to apply (more…)
