Two leading experts on live mixing share secrets to improve audio performance in any venue

The live performer’s best friend is his or her mixer. That’s the person whose skills and savvy can make artists sound their best—or even better. It’s the live mixer who makes certain that the soaring vocals or searing solos coming off the stage sound as perfect as possible.
We tapped two top experts—Jeremiah Hamilton and Eddie Mapp—to help us explore the modern state of live audio mixing. Mapp has worked as the front-of-house engineer for acts like Stone Temple Pilots and Evanescence, while Hamilton is the senior engineer at Houston production company LD Systems and has worked with acts ranging from Tower of Power to Jimmie Vaughan. They spoke with us about collaborating with artists from behind the board, as well as the latest technologies and techniques for elevating the audio experience.

Tell me about your approach to translating the sound of an artist’s album to the stage.

Hamilton: You study the album to get a feel for the application of effects, the kind of reverbs you hear. You learn through experience whether something sounds like a plate, a hall, or a room. Vocal tricks like doubling can all be pretty clearly heard by an experienced ear, and you want to mimic that stuff live as best you can. In the really big leagues there’s some interaction between the studio people and the live engineers from time to time, where they actually discuss the kinds of machines that were used. In the digital age, with certain software and digital boards being interactive, you can take almost exactly the same plug-ins and effects devices from the studio into the live board out in the field. It used to be that a studio engineer was a studio engineer and a live engineer was a live engineer—with a firm line between the two. Nowadays, with digital being so interactive, that’s changing. Studio engineers are finding an outlet in the live environment and vice versa.
Mapp: That’s actually one of the things I enjoy. I have a small studio back home where I work on little projects when I’m not on the road, and that gives me a good opportunity to sit down, try techniques that I use live, and see how they translate to the studio. For the past four years I’ve been using a Digidesign VENUE console, and with the last two or three bands I’ve been with we’ve taken a Pro Tools HD rig out with us. That helps because I can bring the rig home, analyze what I’ve been doing on the road, and make adjustments there. Whether it’s trying new plug-ins or finding new miking techniques that might alleviate some problems or speed things up, it’s been fun. It’s become the normal way I work.

Do you use a lot of channel effects during a live mix?

Mapp: Especially with this console, all my effects are internal; I don’t use anything outboard. All my EQ, compression, gating, delays—all that’s already set up in the console. I have a handful of plug-ins that I carry with me, just for consistency. As far as going between the studio and live, I’ve got some different URS plug-ins that emulate API EQs and Neve EQs so I can use what the artist would have access to in the studio.

So Eddie’s obviously a Pro Tools guy. Jeremiah, what’s your preferred software and hardware for multitracking off the board?

Hamilton: You’re stuck in some situations because Digidesign products are proprietary to Pro Tools. Personally I—and engineers that I’ve worked with—prefer the sound of Cubase Studio 4 and 5 and Nuendo. But the predominant software in the industry is Pro Tools.

What tips can you offer bands trying to make live multitrack recordings without a digital console?

Hamilton: There are many ways to do it, and you don’t have to have all the fancy tools. A laptop along with some sort of I/O device will do. I own a small rig of PreSonus [digital interfaces] myself. You can take direct outs off an analog console for the important stuff like vocals, bass, and lead guitar, plus a submix, and create some very fine recordings very economically.
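For readers who want to try this with a laptop, here’s a minimal sketch of how such a capture might be scripted. This is our illustration, not Hamilton’s actual rig; it assumes the free python-sounddevice and soundfile libraries and a basic multichannel USB interface, and the channel count, sample rate, and file name are hypothetical examples.

    # Minimal sketch: record a few console direct outs on a laptop.
    # Assumes a USB interface exposing at least 4 inputs; all values
    # below are illustrative, not a recommended production setup.
    import sounddevice as sd
    import soundfile as sf

    SAMPLE_RATE = 48000   # a common rate for live capture
    CHANNELS = 4          # e.g. vocal, bass, lead guitar, submix
    DURATION = 300        # seconds to record (five minutes)

    # Capture all channels from the default input device into memory.
    # For a full-length show, stream to disk with sd.InputStream instead.
    take = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                  channels=CHANNELS, dtype='float32')
    sd.wait()  # block until the recording finishes

    # Write one multichannel WAV; split the tracks out later in a DAW.
    sf.write('live_multitrack.wav', take, SAMPLE_RATE)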

How do you overcome the challenges of in-ear monitors?

Mapp: The tricky thing is if you have a vocalist who isn’t a very strong singer, or if there are a lot of delicate parts. Sometimes it’s hard to get those to translate if you’ve got the rest of the band overpowering them. A lot of guitar players have big, loud amps and they want to feel that. With in-ears you can lose a little of that push you get from the cabinet. One thing I always try to do first is get everybody on stage at around the same volume level. Building a good balance there helps your mix as well as the people up front. When the level is consistent between the band members, that helps the vocal get over top of things, too.
Hamilton: Each time you develop an in-ear mix you’re gaining ground in the live environment by taking away a wedge mix. Very often an artist will want both—to pull out one earpiece and hear a wedge coming back at him so he gets a sense of belonging to the room. Sealed inside the earphones, you can feel sort of detached as the sound fills your head. You can’t hear the audience noise quite the same way, but of course we stick mics in the audience and blend that back into the mix so you have that realism in your head. But the whole point of in-ears, and why they’ve grown so fast, is to get rid of the stage volume so that a mixer has a fighting chance to produce a quality show in an ambient environment. I used to look at monitors as damage control, just trying to make the best of a situation. But with in-ear monitors you can have a quality engineer sit down at a board and create a mix inside the artist’s head that’s equal to what they had in the studio—and of course artists love that.

Without betraying confidences, can you tell us how you handle the monitors of artists who use pitch correction?

Hamilton: To be honest, after 35 years in the business I’ve only seen it a couple of times. I have seen the Antares [Auto-Tune] device used, but not in that application. I see it used as a deliberate attempt to create an effect: singing in one manner and letting the device act as sort of an automated doubler. But as far as “Milli-Vanilli-ing” and correcting things on the sly, I see very little of that. In a lot of ways I think it’s blown out of proportion. People make assumptions based on what they perceive people’s talent to be. I see vocal tracks hanging in the background in Pro Tools and people singing over them, but very seldom have I ever done a show where the artist is not singing and contributing to the performance.
Mapp: I’ve never actually worked with any artists that use live pitch correction. I guess I’ve been fortunate in that respect. I have worked with plenty of artists that use backing tracks. Each band is different in their vision of what they want the show to be. With Evanescence, yes, there are backing tracks. It’s mostly loop-based samples or strings, choir and different effect-type vocals. What I tried to do with them is to build the mix around the band: put everything up and just add the backing tracks in as a little spice here and there. I know there are certain artists who use pitch correction, and I just haven’t worked with any so far. I try to work with the artist as much as possible to find out where their strengths and weaknesses are, so maybe if I just need to help them a little with reverb or riding a delay to smooth some notes out, that’s what I prefer to do. I still like it to be a live thing. It’s live music.
–Dave Jones

