Using Avid Artist Series Controllers with SONAR

by Craig Anderton

The Avid Artist Series Mix controller is compatible with SONAR. However, remember that this is a Pro Tools-centric controller, so not everything is implemented in SONAR (or in any other program for that matter). Regardless, the basics (and more) are there, but there are also some unique aspects you need to know.

There are horror stories all over the web of people not getting the Artist Series Mix to work, even with Pro Tools, and many refer to it as a “doorstop.” Others have found ways to get it to work, which often involve strange rituals of turning things on in an esoteric and specified order—but it doesn’t have to be that weird. It seems the only real issue occurs when the Artist Series Mix initializes before other elements are ready to work with it, so all you need to do is take control over when it initializes—here’s how.


  1. Install the latest EuCon software from Avid’s Artist Series web site. This is essential, because the Artist Series Mix talks to your computer over Ethernet (or to your router/switch, if a wired internet connection is already using your computer’s Ethernet port).
  2. You may be instructed to do a firmware update.
  3. If needed, install the SONAR EuCon plug-in; it’s required for the Artist Series Mix to show up as a control surface in SONAR.
  4. The next time you boot SONAR, select EuCon as a new control surface (in Edit > Preferences > MIDI > Control Surfaces). Do not select MIDI in or out for the Control Surface; that’s not what EuCon uses.


Using the following method, it doesn’t seem to make any difference when you turn on the Artist Series Mix. I usually wait until SONAR has booted, but I’ve also turned it on before anything else, after everything else, etc. The key is to keep the EuCon application from running before you want it to start.

  1. In Task Manager’s Startup tab, right-click on anything that says EuCon and disable it. I left anything that says MC Client enabled because it didn’t cause problems. After doing so, reboot. You only need to do this once, not every time you want to use the Artist Series Mix.

  2. Boot SONAR and open a project.
  3. Turn on power to the Artist Series Mix.
  4. Wait until the Avid logos turn off in the display, then open the EuControl application that shows up with your apps.
  5. After it’s loaded, locate the EuControl button in the System Tray (or in the Hidden Icons if needed). It may take a while for this button to show up.
  6. Right-click on it and choose Restart EUCON Applications.
  7. When a dialog box says Restart all EUCON applications?, click Yes.

It will take a little while (although well under a minute), but eventually everything will recognize everything else, the faders will mirror what’s in SONAR’s console view if you’d previously selected EuCon as the control surface, and you’ll be ready to go. Note that you’ll also want to use the EuCon system tray icon to access the various settings, but that’s all pretty straightforward if you read the documentation for Avid’s EUCON software.


The Artist Series Mix is a pretty slick controller, even if it has somewhat of a “Made in China” vibe (which, in fact, it is). It has bright yellow OLEDs and a small form factor that fits in crowded workspaces.

So…here’s what works.

  • Faders
  • Panpots
  • Solo
  • Mute
  • Record enable/disable
  • Sends (up to 8)
  • Gain Trim
  • Phase
  • Automation read/write
  • Bank Select
  • Strip nudge (i.e., move strips in a bank over one at a time)
  • Transport controls
  • It recognizes Aux Tracks, and buses are treated like tracks—no special switching is needed
  • Bank select by selecting a channel in SONAR. This is pretty cool if you’re focused more on SONAR than the controller. If you select a track that’s outside where the existing tracks fall, the faders will “scroll” so that the left-most fader is the selected track, and the other faders increment as you move right. For example, if the faders are on 1-8 and you select track 11, the faders will now go from 11-18.
  • Fader touch select. You don’t need to click anything to start controlling a fader…just touch and go
  • Footswitch jack for punch-in and punch-out
  • You can have up to four controllers if you want 32 channels of faders.
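The bank scrolling triggered by track selection (described above) amounts to a simple rule; here’s a sketch of the behavior as I observed it (the function name and logic are mine, purely for illustration—this is not SONAR’s or EuCon’s actual code):

```python
def bank_start_for_selection(current_start, selected, bank_size=8):
    """If the selected track falls outside the current fader bank,
    scroll so the selected track lands on the left-most fader."""
    if current_start <= selected < current_start + bank_size:
        return current_start  # track already visible: no scroll
    return selected

# Faders on tracks 1-8; selecting track 11 scrolls the bank to 11-18.
start = bank_start_for_selection(1, 11)
print(list(range(start, start + 8)))  # [11, 12, 13, 14, 15, 16, 17, 18]
```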

Here’s what sorta works.

  • Selecting a track in SONAR selects it in the control surface, but unfortunately, not the other way around.
  • Bank select by selecting a channel in SONAR doesn’t work with buses. You need to use standard bank switching and strip nudging to get to buses.
  • Input Echo works except on Track Folders; however, the corresponding control surface light (i.e., in the button you push) doesn’t illuminate when Input Echo is on.
  • Effects kind of work, sometimes. Maybe. I haven’t cracked the code on what makes them happen. I was able to get a Waves C1 compressor working, and for a fleeting moment it seemed like I had ACT figured out, but I wouldn’t go into the Artist Series Mix with the expectation of controlling plug-ins. But if you can figure it out, you’ll be pleasantly surprised.

Here’s what doesn’t work.

  • The timed dimming function. Given that the manual states dimming is to prolong the life of the OLEDs, it’s concerning they don’t dim as advertised.
  • I don’t really think the effects editing qualifies as working, although as noted above, sometimes it does.
  • As of the most recent Artist Series software update, the meters no longer work in the display.

These units aren’t exactly inexpensive, but they work as advertised (or at least they do if you’ve read this), and perform the standard functions you’d want in a control surface. However, not everyone is enamored of them; check out some of the user reviews on various sites. In any event, I have the Artist Series controllers working fine with SONAR now—so I know they definitely can do the job.

Note: This article is excerpted from “The Big Book of SONAR Tips,” which is available from the Cakewalk store.

The Miracle of Mid-Side EQ: Rock Your Mixes and Masters

by Craig Anderton

Sure, the LP EQ is a great linear-phase, stereo EQ. But it was designed for mid-side processing as well as conventional stereo, so let’s explore what mid-side processing is about, and why it’s so important.


You can add up to 20 nodes, and each can have one of the following responses:

  • Low shelf
  • High shelf
  • High pass
  • Low pass
  • Peak boost/cut


The LP EQ allows up to 20 nodes, each of which can use one of five responses.

However, there’s some intelligence when adding nodes; for example, if you double-click to enter a node close to the highest possible frequency, the LP EQ will insert a lowpass filter. At a somewhat lower frequency, the default is a shelving response (although you can of course change these default responses to whatever you like). Drag nodes horizontally to change the frequency or up/down to vary amplitude; a right-click + drag on a node alters the width, as does using the mouse scroll wheel on a selected node.

You can ctrl+click, or draw a marquee around, multiple nodes to select them, but there’s an interesting twist. Suppose a node is set to boost, and another to cut. If you select both, then click on the one that boosts and drag it downward, the amount of boost will decrease. However the one that’s cut will start boosting. This complementary motion allows increasing or decreasing the overall emphasis easily; for example, if you think you went too far with the amount of EQ and want to pull it back, this reduces all aspects equally.

If all the selected nodes either boost or cut, then their amplitudes vary together.

These basics give a flavor of the features, but there’s much more—so click on the UI to give the LP EQ the focus, then press F1 to call up the comprehensive documentation.


Mid-side processing encodes a stereo track into two separate components: the sum of the left and right channels (the center, or “mid” component) is carried in the left channel, while the difference between the left and right channels (the “side” component, i.e., the stereo-only material) is carried in the right channel. You can then process these components individually, with automatic decoding back into stereo.
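The encoding itself is just sum-and-difference arithmetic. Here’s a minimal sketch using plain lists of samples, purely to illustrate the math (the LP EQ does all of this for you internally):

```python
def ms_encode(left, right):
    """Encode stereo samples into mid (sum) and side (difference) components."""
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Decode mid/side components back into left/right stereo."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# A signal panned dead center has no side component at all:
left = [0.5, -0.25, 0.75]
mid, side = ms_encode(left, left)
print(side)  # [0.0, 0.0, 0.0]

# The round trip is lossless (to within floating-point precision):
l2, r2 = ms_decode(*ms_encode([0.1, 0.9], [-0.4, 0.2]))
print(all(abs(a - b) < 1e-12 for a, b in zip(l2, [0.1, 0.9])))  # True
```

This is also why phase shift or sample misalignment matters so much: the decode stage depends on the mid and side components lining up exactly.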

To get started with mid-side processing, click on the LP 64’s Expert button and under Mode, choose Mid/Side. For best results, set the precision to High. This results in the most latency but the highest accuracy, which is important because with mid-side processing, you don’t want any phase shift or sample misalignment—that will interfere with the decoded stereo imaging.

The LP EQ’s Expert Mode is the key to doing mid-side processing with EQ. Also note the Mix control for parallel processing.

Processing can be independent for the mid and side components (as it is for the left and right channels in conventional stereo applications). You assign a node to the appropriate component by clicking on the node, and then clicking on M or S (toward the LP EQ’s upper right corner). Here are a few possible applications.

  • With mastering, you can get “inside the file” to do pseudo-remixing on a stereo track. One typical application is giving a slight boost to the higher-frequency side components to provide a bit more “air” and a wider stereo image.
  • If you’ve been seduced by vinyl’s comeback, remember that it’s crucial to center the bass and minimize bass excursions in the sides. With mid-side EQ processing, you can reduce the bass in the sides, and if needed, increase bass a bit in the center. Even if you’re not mastering for vinyl, taking this technique further can give a super-anchored, “center-channel” bass sound.
  • Drums with lots of room ambience can benefit from a bit of upper mids in the sides for extra definition, and a little bit of lower mids in the center to accent the kick.
  • If a synth bass has a wide image that “steps on” other instruments, you can bring down the bass in the sides.
  • For taming reverb, set a node to Mid, select the high pass curve, and slide it all the way to the right to take out essentially everything. Then you can shape the remaining reverb with the side EQ, while keeping the reverb away from the center, where it can muddy the bass and kick.


But…how do you know whether you’re really making an improvement to the sound or not? The LP EQ includes a Mix control (accessed in the Expert section) so you can vary the mix from full EQ to no EQ. Yes, parallel processing for EQ…very handy, and even better, the Mix control can be automated (like virtually all other parameters, including display characteristics and bypass).

You can also switch quickly between two different EQ settings with the A/B comparison function.

Granted, there’s no shortage of EQ plug-ins, but the LP EQ truly brings something new to the party. If you’re not familiar with what mid-side processing can do with EQ, there’s no better way to find out than with the LP EQ.

Basics: Five Questions about Effects Placement

By Craig Anderton

There are plenty of places in SONAR where you can process the audio signal, but you need to know how to choose the right one.

What’s an “insert” effect? Don’t you always “insert” an effect? You indeed “insert” effects, but there’s a specific effect type usually called an Insert effect that inserts into an individual mixer channel. In SONAR, this inserts into a channel’s FX Bin or the ProChannel (Fig. 1).

Fig. 1: The FX bins for two channels have insert effects, as does the ProChannel for the Vocals channel.

Insert effects affect only the channel into which they are inserted. Typical insert effects include dynamics processors, distortion, EQ (because of EQ’s importance, it’s a permanent ProChannel insert effect), flanging, and other effects that apply to a specific sound in a specific channel.

Then what’s a “send” effect? Also called an Continue reading Basics: Five Questions about Effects Placement

Basics: Five Questions About Panning Laws

By Craig Anderton

It’s not just a good idea, it’s the law…panning law, that is. Let’s dispel the confusion surrounding this sometimes confusing topic.

What does a panning law govern? When a mono input feeds a stereo bus, the panning law determines the apparent and actual sound level as you sweep from one side of the stereo field to the other.

But why is a “law” needed? Doesn’t the level just stay the same as you pan? Not necessarily. Panning laws date back to analog consoles. If a pan control had a linear taper (in other words, a constant rate of resistance change as you turned it), then the sound was louder when panned to center. To compensate, hardware mixers used non-linear resistance tapers to drop the level at the center, typically by 3 dB RMS. This gave an apparent level that was constant as you panned across the stereo soundstage. If that doesn’t make sense…just take my word for it, and keep reading.

Okay, then there’s a law. Isn’t that the end of it? Well, it wasn’t really a “law,” or a standard. Come to think of it, it wasn’t a specification or even a “recommendation.” Some engineers dropped the center level a little more to let the sides “pop” more, or to have mixes seem less “monoized” and therefore create more space for vocalists who were panned to center. Some didn’t drop the center level at all, and some did custom tweaks.

Why does this matter to a DAW like SONAR, which doesn’t have a hardware mixer? Different DAWs default to different panning laws. This is why duplicating a mix on different DAWs can yield different results, and lead to foolish online discussions about how one DAW sounds “punchier” or “wimpier” than another if someone brings in straight audio files and sets the panning and faders identically.

A mono signal of the same level feeds each fader pair, and each pair is subject to different SONAR panning laws. Note the difference in levels with the panpot panned to one side or centered. The tracks are in the same order as the descriptions in SONAR’s panning laws documentation and the listing in preferences. Although the sin/cos and square root versions may seem to produce the same results, the taper differs across the soundstage between the hard pans and center.

This sounds complicated, and is making my head explode—can you just tell me what I need to do so I can go back to making music? SONAR provides six different panning law options under Preferences, so not only can you choose the law you want, the odds of being able to match a different DAW’s law are excellent. The online help describes how the panning laws affect the sound. So there are really only two crucial concepts:

  • The pan law you choose can affect a mix’s overall sound if you have a lot of mono sound sources (panpots with stereo channels are balance controls, which is a whole other topic). So try mixes with different laws, choose a law you like, and stick with it. I prefer -3 dB center, sin/cos taper, and constant power; the signal level stays at 0 dB when panned right or left, but drops by 3 dB in each channel when centered. This is how I built hardware mixers, so it’s familiar territory. It’s also available in many DAWs. But use what you like…after all, I’m not choosing what’s “right,” I’m simply choosing what I like.
  • If you import an OMF file from another DAW or need to duplicate a mix from another DAW, ask what panning law was used in creating the file. One of SONAR’s many cool features is that it will likely be able to match it.
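For the curious, the -3 dB center, sin/cos constant-power law mentioned above boils down to a little trigonometry. This sketch (function names are mine, not SONAR’s) shows the channel gains it produces:

```python
import math

def constant_power_pan(pos):
    """-3 dB center, sin/cos taper constant-power pan law.
    pos: 0.0 = hard left, 0.5 = center, 1.0 = hard right.
    Returns (left_gain, right_gain) as linear amplitude factors."""
    theta = pos * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

def to_db(gain):
    """Convert a linear amplitude gain to decibels."""
    return 20.0 * math.log10(gain) if gain > 0 else float('-inf')

for pos in (0.0, 0.5, 1.0):
    lg, rg = constant_power_pan(pos)
    print(f"pos={pos}: L={to_db(lg):+7.2f} dB  R={to_db(rg):+7.2f} dB")
# Hard pans hit 0 dB in one channel; at center, each channel sits about 3 dB down.
```

“Constant power” here means left_gain² + right_gain² is always 1 (since sin² + cos² = 1), which is what keeps the apparent loudness even across the soundstage.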

There, that wasn’t so bad. Ignorance of the law is no excuse, and now you have answers to five questions about panning laws.


Basics: Five Questions About Using Stompboxes with SONAR

by Craig Anderton

Plug-in signal processors are a great feature of computer-based recording programs like SONAR, but you may have some favorite stompboxes with no plug-in equivalents—like that cool fuzz pedal you love, or the ancient analog delay you scored on eBay. Fortunately, with just a little bit of effort you can make SONAR think external hardware effects are actually plug-ins.

1. What do I need to interface stompboxes with SONAR? You’ll need a low-latency audio interface with an unused analog output and unused analog input (or two of each for stereo effects), and cords to patch these audio interface connections to the stompbox. We’ll use the TASCAM US-4×4 interface because it has extra I/O and low latency, but the same principles apply to other audio interfaces.

2. How do I hook up the effect and the interface? SONAR’s External Insert plug-in inserts in an FX bin, and diverts the signal to the assigned audio interface output. You patch the audio interface output to a hardware effect’s input, then patch the hardware effect’s output to the assigned audio interface input. This input returns to the External Insert plug-in, and the signal continues on its way through the mixer. For this example, we’ll assume a stompbox with a mono input and stereo output.

3. What are correct settings for the External Insert plug-in parameters? When you insert the External Insert into the FX bin, a window appears that provides all the controls needed to set up the external hardware.

  • Send. This section’s drop-down menu assigns the send output to the audio interface. In this example, the send feeds the US-4×4’s output 3. Patch this audio interface output to your effect’s input. (Note that if an output is already assigned, it won’t appear in the drop-down menu.)
  • Output level control. The level coming out of the computer will be much higher than what most stompboxes want, so in this example the output level control is cutting the signal down by about 12 dB to avoid overloading the effect.
  • Return. Assign this section’s drop-down menu to the audio interface input through which the stompbox signal returns (in this example, the US-4×4’s stereo inputs 3 and 4). Patch the hardware effect output(s) to this input or inputs.
  • Return level control. Because the stompbox will usually have a low-level output, this slider brings the gain back up for compatibility with the rest of the system. In this example, the slider shows about +10 dB of gain. (Note: You can invert the signal phase in the Return section if needed.)
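Those two level controls are just dB-to-linear gain conversions. Here’s a quick sketch of the arithmetic, using this example’s -12 dB send trim and +10 dB return gain (the function name is mine, for illustration only):

```python
import math

def db_to_gain(db):
    """Convert a dB change to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

send_trim = db_to_gain(-12.0)   # ~0.251: pad the hot interface output
return_gain = db_to_gain(10.0)  # ~3.162: make up the stompbox's low level
print(round(send_trim, 3), round(return_gain, 3))  # 0.251 3.162

# Net round-trip level change (ignoring the pedal itself): -2 dB
print(round(20.0 * math.log10(send_trim * return_gain), 1))  # -2.0
```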

4. Is it necessary to compensate for the delay caused Continue reading Basics: Five Questions About Using Stompboxes with SONAR

DAW Best Practices: How to get a bigger drum sound with reverb

The Biggest, Baddest Drum Reverb Sound Ever

[Originally posted as a daily tip on the SONAR forums and reposted for viewers here on the blog.]

by Craig Anderton

You want big-sounding drums? Want your metal drum tracks to sound like the Drums of Doom? Keep reading. This technique transposes a copy of the reverb and pans the two reverb tracks oppositely. It works best with unpitched sounds like percussion.

1. Insert a reverb send.

Insert a send in your drum track, then insert your reverb of choice in the Send bus.


2. Render the reverb, isolated from the drum track. Continue reading DAW Best Practices: How to get a bigger drum sound with reverb

Basics: Five Questions about Filter Response

By Craig Anderton 

You can think of filters as combining amplification and attenuation—they make some frequencies louder, and some frequencies softer. Filters are the primary elements in equalizers, the most common signal processors used in recording. Equalization can make dull sounds bright, tighten up “muddy” sounds by reducing the bass frequencies, reduce vocal or instrument resonances, and more. 

Too many people adjust equalization with their eyes, not their ears. For example, once after doing a mix I noticed the client writing down all the EQ settings I’d done. When I asked why, he said it was because he liked the EQ and wanted to use the same settings on these instruments in future mixes. 

While certain EQ settings can certainly be a good point of departure, EQ is a part of the mixing process. Just as levels, panning, and reverb are different for each mix, EQ should be custom-tailored for each mix as well. Part of this involves knowing how to find the magic EQ frequencies for particular types of musical material, and that requires knowing the various types of filter responses used in equalizers. 

What’s a lowpass response? A filter with a lowpass response passes all frequencies below a certain frequency (called the cutoff or rolloff frequency), while rejecting frequencies above the cutoff frequency (Fig. 1). In real world filters, this rejection is not total. Instead, past the cutoff frequency, the high frequency response rolls off gently. The rate at which it rolls off is called the slope. The slope’s spec represents how much the response drops per octave; higher slopes mean a steeper drop past the cutoff. Sometimes a lowpass filter is called a high cut filter.

 Fig. 1: This lowpass filter response has a cutoff of 1100 Hz, and a moderate 24 dB-per-octave slope.
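The relationship between slope and rolloff can be put into numbers. This sketch uses an idealized Butterworth-style magnitude response (an approximation for illustration, not the exact curve of any particular EQ) with the 1100 Hz cutoff from Fig. 1; a 4th-order filter gives the 24 dB-per-octave slope:

```python
import math

def lowpass_mag_db(f, cutoff, order):
    """Magnitude response (dB) of an idealized Butterworth-style lowpass.
    Well past the cutoff, the slope approaches order * 6 dB per octave."""
    mag = 1.0 / math.sqrt(1.0 + (f / cutoff) ** (2 * order))
    return 20.0 * math.log10(mag)

fc = 1100.0  # the cutoff frequency from Fig. 1
for f in (fc, 8 * fc, 16 * fc):
    print(f"{f:8.0f} Hz: {lowpass_mag_db(f, fc, order=4):7.1f} dB")
# At the cutoff the response is 3 dB down; each additional octave
# past the cutoff drops the level by roughly another 24 dB.
```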

What’s a highpass response? This is the inverse of a lowpass response. It passes frequencies above the cutoff frequency, while rejecting frequencies below the cutoff (Fig. 2). It also Continue reading Basics: Five Questions about Filter Response

Basics: Five Questions about Audio Specs

By Craig Anderton 

Specifications don’t have to be the domain of geeks—they’re not that hard to understand, and can guide you when choosing audio gear. Let’s look at five important specs, and provide a real-world context by referencing them to TASCAM’s new US-2×2 and US-4×4 audio interfaces. 

First, we need to understand the decibel (dB). This is a unit of measurement for audio levels (like an inch or meter is a unit of measurement for length). A 1 dB change is approximately the smallest audio level difference a human can hear. A dB spec can also have a – or + sign. For example, a signal with a level of -20 dB sounds softer than one with a level of -10 dB, but both are softer than one with a level of +2 dB. 
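If you want to put numbers on it, the conversion between amplitude ratios and decibels is straightforward (for amplitude or voltage, it’s 20 times the base-10 logarithm of the ratio):

```python
import math

def ratio_to_db(value, reference):
    """Convert an amplitude ratio to decibels (20 * log10 for amplitude/voltage)."""
    return 20.0 * math.log10(value / reference)

def db_to_ratio(db):
    """Convert decibels back to a linear amplitude ratio."""
    return 10.0 ** (db / 20.0)

# -6 dB roughly halves the amplitude; doubling it is roughly +6 dB:
print(round(db_to_ratio(-6.0), 3))        # 0.501
print(round(ratio_to_db(2.0, 1.0), 2))    # 6.02
```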

1. What’s frequency response? Ideally, audio gear designed for maximum accuracy should reproduce all audible frequencies equally—bass shouldn’t be louder than treble, or vice-versa. A frequency response graph measures what happens if you feed test frequencies with the same level into a device’s input, then measure the output to see if there are any variations. You want a response that’s flat (even) from 20 Hz to 20 kHz, because that’s the audible range for humans with good hearing. Here’s the frequency response graph for TASCAM’s US-2×2 interface (in all examples, the US-4×4 has the same specs).

This shows the response is essentially “flat” from 50 Hz to 20 kHz, and down 1 dB at 20 Hz. Response typically goes down even further below 20 Hz; this is deliberate, because there’s no need to reproduce signals we can’t really hear. The bottom line is this graph shows that the interface reproduces everything from the lowest note on a bass guitar to a cymbal’s high frequencies equally well. 

2. What’s Signal-to-Noise Ratio? All electronic circuits generate Continue reading Basics: Five Questions about Audio Specs

Basics: Five Questions about Latency and Computer Recording

Get the lowdown on low latency, and what it means to you

By Craig Anderton 

Recording with computers has brought incredible power to musicians at amazing prices. However, there are some compromises—such as latency. Let’s find out what causes it, how it affects you, and how to minimize it.  

1. What is latency? When recording, a computer is often busy doing other tasks and may ignore the incoming audio for short amounts of time. This can result in audio dropouts, clicks, excessive distortion, and sometimes program crashes. To compensate, recording software like SONAR dedicates some memory (called a sample buffer) to store incoming audio temporarily—sort of like an “audio savings account.” If needed, your recording program can make a “withdrawal” from the buffer to keep the audio stream flowing. 

Latency is “geek speak” for the delay that occurs between when you play or sing a note, and what you hear when you monitor your playing through your computer’s output. Latency has three main causes: 

  • The sample buffer. For example, storing 5 milliseconds (abbreviated ms, which equals 1/1000th of a second) of audio adds 5 ms of latency (Fig. 1). Most buffer sizes are specified in samples, although some specify this in ms. 

 Fig. 1: The control panel for TASCAM’s US-2×2 and US-4×4 audio interfaces shows the sample buffer set to 64 samples. 

  • Other hardware. Your audio interface converts analog audio into digital data your computer can understand (and vice-versa, converting computer data back into audio), and this conversion takes some time. The USB port that connects your interface to the computer also adds buffers of its own.
  • Delays within the recording software itself. A full explanation would require another article, but in short, this usually involves inserting certain types of processors within your recording software. 
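The buffer’s contribution to latency is simple arithmetic: buffer size divided by sample rate. A quick sketch (44.1 kHz is an assumed project sample rate; yours may differ):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Latency contributed by one sample buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# 64 samples at 44.1 kHz -- the buffer setting shown in Fig. 1:
print(round(buffer_latency_ms(64, 44100), 2))   # 1.45
# A larger 256-sample buffer adds proportionally more delay:
print(round(buffer_latency_ms(256, 44100), 2))  # 5.8
```

This is why lowering the buffer size reduces latency, at the cost of giving the computer less of an “audio savings account” to draw from.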

2. Why does latency matter? Continue reading Basics: Five Questions about Latency and Computer Recording

Optimizing Vocals with DSP

Optimizing tracks with DSP, then adding some judicious use of the DSP-laden VX-64 Vocal Strip, offers very flexible vocal processing. 

By Craig Anderton 

This is kind of a “twofer” article about DSP—first we’ll look at some DSP menu items, then apply some signal processing courtesy of the VX64—all with the intention of creating some great vocal sounds. 


“Prepping” a vocal with DSP before processing can make the processing more effective. For example, if you want to compress your vocal and there are significant level variations, you may end up adding lots of compression to accommodate quiet parts. But then when loud parts kick in, the compression starts pumping. 

Here’s another example. A lot of people use low-cut filters to banish rogue plosives (e.g., a popping “b” or “p” sound). However, it’s often better to add a fade-in to get rid of the plosive; this retains some of the plosive sound, and avoids affecting frequency response. 

Adding a fade-in to a plosive can get rid of the objectionable section while leaving the vocal timbre untouched. 

Also check if any levels need to be evened out, because there will usually be some places where the peaks are considerably higher than the rest of the vocal, and you don’t want these pumping the compressor either. The easiest fix is to select a track, drag in the timeline above the area you want to edit, then go Process > Apply Effect > Gain and drop the level by a dB or two. 

This peak is considerably louder than the rest of the vocal, but reducing it a few dB will bring it into line. 

Also note that if you have Melodyne Editor, you can use the Percussive algorithm with the volume tool to level out words visually. This is really fast and effective. 

While you’re playing around with DSP, this is also a good time to cut out silences, then add fade-outs into silence and fade-ins up from silence. Do this with the vocal soloed, so you can hear any little issues that might come back to haunt you later. Also, sometimes it’s a good idea to normalize individual vocal clips up to -3 dB or so (leave some headroom) so that the compressor sees a more consistent signal. 
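Normalizing to a peak target like -3 dB is itself just a gain calculation: find the clip’s peak, then scale everything so the peak lands at the target. SONAR’s Normalize command does this for you; this sketch only shows the underlying math (sample values assumed to be floats in the -1.0 to 1.0 range):

```python
def normalize(samples, target_dbfs=-3.0):
    """Scale a clip so its peak sits at target_dbfs, leaving some headroom."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)  # silence: nothing to scale
    target_peak = 10.0 ** (target_dbfs / 20.0)  # -3 dBFS is ~0.708 linear
    gain = target_peak / peak
    return [s * gain for s in samples]

clip = [0.1, -0.25, 0.2]
out = normalize(clip)
print(round(max(abs(s) for s in out), 3))  # 0.708
```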

The clip on the left has been normalized and faded out. The silence between clips has been cut away. The clip on the right fades in, but has not been normalized. 

With DSP processing, it’s good practice to work on a copy of the vocal, and make the changes permanent as you do them. The simplest way to apply Continue reading Optimizing Vocals with DSP