Recording Virtual Synthesizers: The Art of Imperfection

Synths can make perfect sounds…but is that always a perfect solution?

by Craig Anderton

Recording a virtual instrument is simple…you just insert it, hit a few keys, and mix it in with the other tracks. Right?

Well…no. Synthesizers are musical instruments, and you wouldn’t mic a drum set by grabbing the first mic you found and pointing it in the general direction of the drummer, nor would you record an electric guitar by just plugging it into a mixing console. A little extra effort spent on issues such as avoiding an unnatural sound when mixing synths with acoustic instruments, improving expressiveness, and tightening timing inconsistencies can help you get the most out of your virtual instruments.

But first, remember that “rules” were made to be broken. There is no “right” or “wrong” way to record, only ways that satisfy you to a greater or lesser degree. Sometimes doing the exact opposite of what’s expected gives the best results. So take the following as suggestions, not rules, that may be just what the doctor ordered when you want to spice up an otherwise ordinary synth sound.

THE SYNTHESIZER’S SECRET IDENTITY

The paramount aspect of recording a synth is to define the desired results as completely as possible. Using synths to reinforce guitars on a heavy metal track is a completely different musical task from creating an all-synthesized 30-second spot. Sometimes you want synths to sound warm and organic, but if you’re doing techno, you’ll probably want a robotic, machine-like vibe (with trance music, you might want to combine both possibilities).

So, analyze your synth’s “sonic signature”—is it bright, dark, gritty, clean, warm, metallic, or…? Whereas some people attach value judgements to these different characteristics, veteran synthesists understand that different synthesizers have different general sound qualities, and choose the right sound for the right application. For example, although Cakewalk’s Z3TA+ is highly versatile, to my ears its natural “character” is defined, present, and detailed.

Regarding sonic signatures, perhaps one of the reasons for the resurgence of analog synth sounds is digital recording. Analog synths tended to use low-pass filters that lacked the “edgy” sound of digital sound generation. Recording the darker analog sounds on analog tape sometimes resulted in a muddy sound; but when recorded digitally, analog sounded comparatively sweet. Digital also captured all the little hisses, grunts, and burps that characterized analog synths. This is a case where the “imperfections” of analog and the “perfection” of digital recording complemented each other.

Another thought: look at guitars, voices, pianos, etc. on a spectrum analyzer, and you’ll note there is little natural high end. If you’re trying to blend a virtual instrument in with physical instruments, remember that a virtual synth has no problem generating a solid high end. Using the ProChannel’s LP filter set to 48 dB/octave and lowering the frequency just a little bit can introduce the “imperfection” that matches the spectral characteristics of “real” acoustic and electric instruments more closely, so the synth seems to blend in better with the other tracks (Fig. 1).

Fig. 1: The ProChannel QuadCurve EQ’s lowpass filter can help digital synths sit better in tracks that use multiple physical or acoustic instruments.
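The ProChannel filter itself isn’t something you script, but the idea is easy to sketch in code. A 48 dB/octave slope is equivalent to eight cascaded 6 dB/octave (one-pole) stages; the cutoff frequency and test tones below are hypothetical values chosen just to show the effect.

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate):
    """One-pole lowpass: a gentle 6 dB/octave rolloff above the cutoff."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, prev = [], 0.0
    for s in signal:
        prev = (1.0 - a) * s + a * prev
        out.append(prev)
    return out

def lowpass_48db(signal, cutoff_hz, sample_rate=44100):
    """Cascade eight one-pole stages for a steep ~48 dB/octave slope."""
    for _ in range(8):
        signal = one_pole_lowpass(signal, cutoff_hz, sample_rate)
    return signal

def rms(signal):
    return math.sqrt(sum(s * s for s in signal) / len(signal))

# A tone well below the cutoff sails through; one well above is crushed.
sr, cutoff = 44100, 5000  # hypothetical settings
low  = [math.sin(2 * math.pi * 200   * n / sr) for n in range(sr)]
high = [math.sin(2 * math.pi * 15000 * n / sr) for n in range(sr)]
print(rms(lowpass_48db(low, cutoff, sr)), rms(lowpass_48db(high, cutoff, sr)))
```

Run on a synth track, only the filter’s cutoff setting matters to the ear: set it just above the highest content you want to keep, and the steep slope removes the “too perfect” top end without dulling the mids.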

In a similar vein, using Rapture’s lo-fi options (like Tube; see Fig. 2) can add a taste of “grunge” that helps its sounds fit in a little better with rock material.

Fig. 2: Lo-fi options can help perfect synths co-exist better in an imperfect world.
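Rapture’s Tube algorithm is proprietary, but the general flavor of this kind of “grunge” can be approximated with a soft-clipping waveshaper blended against the dry signal. The drive and mix values below are illustrative guesses, not any plug-in’s actual parameters.

```python
import math

def add_grunge(signal, drive=3.0, mix=0.3):
    """Blend a tanh-saturated copy with the dry signal for mild 'tube' dirt.

    drive: how hard the waveshaper is pushed; mix: wet/dry blend (0..1).
    Both values are illustrative, not any particular plug-in's settings.
    """
    norm = math.tanh(drive)  # rescale so a full-scale input stays full-scale
    return [(1.0 - mix) * s + mix * (math.tanh(drive * s) / norm)
            for s in signal]

# Soft clipping squashes peaks more than quiet material, which is
# exactly the gentle "rounding off" that helps a synth sit in a rock mix.
quiet = add_grunge([0.1])
loud  = add_grunge([1.0])
print(quiet[0], loud[0])
```

Because the curve compresses loud samples more than quiet ones, the ratio between the processed loud and quiet values is smaller than the 10:1 ratio of the inputs.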

On the other hand, for background music tailored to commercial videos, a bright sound can give more of an edge at lower volumes; furthermore, a “clean” quality can leave space for narration, effects, and other important sonic elements.

The point of all this is to start with a synth whose character already approximates the desired result. But even if you don’t have an arsenal of synths, keep your final goal in mind. There’s a lot you can do to influence the overall timbre of a synthesizer to achieve that goal.

SPACE: THE FINAL FRONT EAR

We have two ears, and listen through air. The sound we hear is influenced by the weather, the distance to the sound source, whether we’ve listened to too much loud music on headphones, the shape of our ears, and many other factors. The sound of a virtual synth need never reach air until we hear the final mix, but that’s not always a good thing.

Compared to acoustic instruments, synth sounds are relatively static (especially with the rise of sample-based synths). Yet our ears are accustomed to hearing evolving, complex acoustical waveforms that are very much unlike synth waveforms. What allows sample playback synths to sound satisfying is that the ear cares mostly about a sound’s attack, and identifies the instrument based on the attack (sort of like a computer servicing an interrupt), then moves on to listening to more music.

Sample-based gear is very good at producing convincing attacks, but the decay characteristics are generally poor; the ear notices this static quality. Creating a simple acoustic environment for the synth is one way to create a more interesting sound. This can also help synths blend in with tracks that include lots of miked instruments, because the latter usually include some degree of room ambience (even with fairly “dead” rooms).

THE VIRTUAL ARCHITECT

One technique is to synthesize an acoustic environment using signal processors. Try this: during recording, insert a reverb set to the sound of a small, dark room with very few (if any) first reflection components. This should be just enough to give the synthesized sound a bit of acoustic depth. When the synth and other instruments go through a main hall reverb during mixdown, they’ll mesh together a lot better. Another trick is to add three or four short, prime number delays (e.g., 19, 23, 29, and 31 ms with no feedback) mixed fairly far down (Fig. 3). Delays this short can add virtual reflections that emulate how a real room affects sound.

Fig. 3: The Sonitus delay allows setting different delays in the two channels, so you need only two buses to construct a “room” with four reflections.
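The multi-tap trick above can be sketched as a few lines of code. The tap times come straight from the text; the tap level is an arbitrary stand-in for “mixed fairly far down.” The prime-number delays share no common factor, so the “reflections” don’t pile up on a single comb-filter resonance.

```python
def fake_room(signal, sample_rate=44100, taps_ms=(19, 23, 29, 31),
              tap_gain=0.12):
    """Add four short, non-repeating 'reflections' to a dry signal.

    tap_gain is an arbitrary 'mixed fairly far down' level.
    No feedback, so each tap fires exactly once per input sample.
    """
    delays = [round(ms * sample_rate / 1000) for ms in taps_ms]
    out = [0.0] * (len(signal) + max(delays))
    for n, s in enumerate(signal):
        out[n] += s                     # dry signal
        for d in delays:
            out[n + d] += tap_gain * s  # one quiet echo per tap
    return out

# A single impulse comes back as itself plus four quiet reflections.
room = fake_room([1.0])
print([i for i, v in enumerate(room) if v != 0.0])
```

Feeding in an impulse makes the behavior obvious: the dry click at sample 0, then four quiet copies at roughly 19, 23, 29, and 31 ms, which is the virtual “room” the article describes.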

You may want to create a different type of acoustic environment than a room, such as a guitar amp for electric guitar patches. Amps generally add distortion, equalization, limiting, and speaker simulation. Feeding the synth through one of TH2 Producer’s cleaner amps can create a sound with much more character; sometimes even a tube preamp is all you need.

A second way to create an acoustic environment is to use the Real Thing. A vintage tube guitar amp is a truly amazing signal processor, even when it’s not adding distortion: use the external insert plug-in, send its output to a guitar amp, mic the amp, and return the mic through a preamp to the insert input. The sound is very, very different compared to going direct.

Another way to add the feel of an acoustic space to a synth is to mix in a bit of the miked sound of you playing your keyboard controller’s keys. Mix this very subtly in the background, just noticeable enough to give a low-level aural “cue.” You may be surprised at how much natural quality this adds to synthesized sounds.

MAKING TRACKS

Remember, machines don’t kill music—people do. If your synths sound sterile on playback, roll up your sleeves and get to the source of the problem. Like most acoustic instruments, the human experience is fraught with complexity, imperfection, and magic. Introduce some of that spirit to your synth recordings, and they’ll ring truer to your heart, as well as to your music.


Published by

Craig Anderton [Gibson]

Author/musician Craig Anderton has played on, mixed, or produced over 20 major-label releases, authored dozens of books, and lectured on technology and the arts in 38 states, 10 countries, and 3 languages. Check out his latest music videos at http://www.youtube.com/thecraiganderton.