Charles Maynes Talks About The Symbiotic Relationship Of Audio In Movies & TV

Charles Maynes is a multifaceted sound designer, sound effects recordist, audio editor and re-recording mixer whose work spans film, television, video games, virtual reality and more. Having worked for a multitude of high-profile clients such as DreamWorks, Paramount Pictures, Skywalker Sound, Universal Pictures and Warner Brothers, to name a few, Charles has established himself as a versatile and indispensable talent, contributing to two Academy Award-winning Sound Editing efforts and winning two Emmy Awards and four Golden Reel Awards.

In this interview, Charles reflects on the paths that led him to where he is today, the artists who inspired his love of music, synths and sound design, and how a new generation of plugins is making a difference to his workflow…

Let’s rewind all the way back. Did you sit in school one day and think, ‘one day I'm going to work in the movie industry’?

I was a super giant fan of bands like Japan and Ultravox in the 80s. The first synthesizer I bought was a Korg Mono/Poly, which was pre-MIDI, and then I was playing in a band where we started integrating synthesizers and stuff – we were heading in the direction of bands like Japan, Ultravox, Roxy Music and Duran Duran.

I did audio production at college and then got into video production. We did some film work, but most of the stuff was in the context of live video, in a music sense – things like the classic BBC TV music show The Old Grey Whistle Test: multi-camera stuff, where you learned how to work a switcher, and you learned how to direct and run cameras and everything.

I remember very distinctly, a guy we were working with had a brother who played in a band in London for a while, called The Hitmakers, and he got a copy of a show called The South Bank Show which had a segment about Peter Gabriel making the Security album. I saw Gabriel explaining his process, and I distinctly remember a scene where they were at a junkyard breaking a television tube with a hammer; finally the tube imploded and they recorded it to a Nagra. They then showed him back in the studio, loading that into the Fairlight and triggering it on the music keyboard. At that point, it was like the world totally changed for me – the idea of being able to take these real sounds and manipulate them in that manner. I had been familiar with musique concrète techniques, with Stockhausen and Pierre Schaeffer and Pierre Boulez, but I had never really considered using those techniques in a pop music context like that.

I think a thing that has come up often in the conversation of the creative process is are we positioning ourselves as service providers or are we positioning ourselves as collaborators?

MIDI had happened by that point, and I was working at a sort of “general interest” music shop which sold band instruments – drums, keyboards, guitars and basses – but there was also a high-end music store in town which sold the Sequential Circuits stuff, the Oberheim things, and they were the E-mu dealer in San Diego. I went in and got to see the Emulator and just went, ‘wow, this is incredible’, and I remember pleading with my parents to co-sign on a loan for me to be able to get one. It was crazy money back then; I bought the base-model Emulator and the shop gave me a fairly substantial discount, but it was still $7,000.

I ended up getting hired by that place, and they were the E-mu service centre as well, so when bands would come into town, invariably their Emulator IIs would have issues, and we got used to essentially having to re-solder the power supplies on them to get the connections right. I went to the San Diego Sports Arena when Roger Waters was playing, and his Emulator II needed that service – I was literally on stage at the sports arena, took the machine apart, soldered the power supply and got it running before the show started. The thing that was great was being able to get access to those people; you would get to talk to them and you could trade sounds and stuff, so I started making connections that way. Andrew Robinson, New Order’s tour manager, was someone I met in a similar manner, when they were playing at the university where I was studying electronic music. We ended up trading sounds for years, and it was in that community where you get to have engagement with people who are established, or becoming established, pop artists. That got me into doing musical instrument stuff, and I ended up doing sound programming for the Emulator and having my work go out on commercial products.

Later, I had a friend who got hired at Digidesign and became Peter Gotcher’s assistant. He said, ‘hey, we’ve got some openings at Digidesign, would you like to come over and talk to us about possibly working here?’. So, I ended up doing that and got hired into their technical support department right when Pro Tools 1 was entering development. Roughly a year later, Pro Tools 1.0 shipped, and the promise of it was wonderful, but it was very chaotic. The development team for it was a little fragmented, in the sense that we had OSC developing Pro Deck, which was the four-channel software, and Mark Jeffrey, who had done Softsynth and Turbosynth for Digidesign, was working on the graphic editor, which was Pro Edit. You would have to switch back and forth between the apps, and it was entirely crash-prone. It was challenging, but we survived.

A bit later, I moved from the Tech Support department at Digidesign to the Software Engineering department and started doing more post production-specific testing for Pro Tools, and for a newer product, PostConform, which was designed for assembling sessions from video Edit Decision Lists (EDLs). I was also dealing with external beta testers in the film and TV world, and ended up being able to leverage that experience into a sound editing position here in Hollywood working for Universal Pictures. I moved here at the end of 1994 and did my first show at Universal in February of 1995. I was there for two films and then got laid off because there wasn’t a huge amount of work.

My first real kind of magic opportunity came when NAB happened at the beginning of ‘95 and Avid reached out to me to go and work in Las Vegas, essentially to present PostConform to the NAB community on the show floor. Through that I ended up meeting Steve Flick, who had just won the Academy Award for Speed. He was in the process of setting up a new post production company himself and he invited me to work for him. That allowed me to work on films like Twister, Starship Troopers, The Long Kiss Goodnight, Mystery Men, and others.

Would you say you’ve kind of done almost everything Post-related at some point in your career?

It’s interesting because I run the gamut on the film and TV side. I do sound effects recording and editing, as well as dialogue editing when need be. As a Supervising Sound Editor you are, for the most part, managing the entire post audio process. You have a team you’re working with, with Editors cutting in each of those disciplines, depending on the budget of the show, which can vary dramatically. There have been shows that I’ve done entirely by myself – it’s not ideal, but that’s what the budget allows for, so you do the work as you can. The thing that’s been so amazing, and one of the things I’ve commented on a lot, is that in filmmaking in general, and in our society, we have a poor appreciation of Moore’s law, in the sense that the technology is advancing at such a pace that our workflows aren’t necessarily keeping up with it.

I've seen some people comment online that studios have got wind of AI, or the kind of plugins that Accentize makes, and then think to themselves, well, they can do this for less money and in less time. Do you think that is the case? 

It would be nice to blame the tools for it, but we shouldn’t, because I think there’s a human element as well. A thing that has come up often in the conversation about the creative process is: are we positioning ourselves as service providers, or are we positioning ourselves as collaborators? When the filmmakers allow us to collaborate, we can bring a point of view. They oftentimes have a very firm idea of the aesthetic they’re trying to get across in their storytelling, but at the same time, when you look at all of the components of a film or television show, there are a lot of unknowns that still exist when you’re making that aesthetic. For instance, probably the biggest component in a film or television soundtrack, as far as really creating the vibe, is the music – and it’s extremely common for the music to be the last thing that anybody hears, so the last thing I get to hear when my sound effects are being mixed is how they’re going to interact with the music. So, in a very big way, I think the process we’ve grown accustomed to is actually pretty inefficient, in the sense that really the first thing we should have finished in a movie soundtrack is the music itself, because it’s going to determine so much of the direction of the final soundtrack.

If we can essentially support the music and the music support what we're doing, then it’s going to probably be more satisfying for the audience in the experience.

Because of your background as a musician, do you take a very musical approach to sound effects?

One thing that was very fascinating: at the end of last year I did my first composition gig on a feature documentary where I was doing the sound, and it was determined that I was going to write the music for it as well. The way I would approach it would be to make my dialogue all happen where I wanted it to be, then compose the score for a scene, and once the score was done, I would look at that and say, ‘okay, well, what sound effects can I put into this that will make sense?’. It’s not a matter of music or sound effects or dialogue competing with one another; it’s a matter of creating a soundtrack that is going to allow the audience to follow the story.

An important moment for me in that journey was with the film Twister in 1996. Mark Mancina was the film’s composer, and he was working at Media Ventures, which is Hans Zimmer’s cooperative [now Remote Control Productions]; another friend of mine, Jeff Rona, was also working there. When we were approaching it on the sound effects side, there was a real kind of competitive division between the sound effects and the music as to what could make it into the soundtrack. Even then I thought that was counterproductive, because ultimately somebody’s going to lose and it’s better if everybody wins. If we can essentially support the music and the music support what we’re doing, then it’s going to probably be more satisfying for the audience in the experience.

I think it’s become increasingly important for people doing sound effects to pay attention to whatever the temp score is doing, because the Music Editors are going to be cutting in a temp score, which is essentially the road map for the composer’s final score. It’s very fascinating in that realm, as we instinctively think in a kind of musical character: we might want a low frequency component to something, a high frequency component to something, and then something in the mid-range; we might want something that denotes the scale of the space. If it’s a big explosion or something, we might want reverberant effects that really impart the massiveness of a particular incident. So that’s why I say Sound Effects especially should be paying close attention to what the Score is doing; building something which stands on its own without the score, and then having the score force readjustments, is inefficient.

The film Oppenheimer was a really, really great example of that. There were scenes where, visually, you would have expected rich sound effects, but there were no sound effects at all – the music carried those moments entirely.

Sound effects typically tend to be prepared for the possibility that the decision could be reversed – they (the Director, Picture Editor or Producers) don’t like how the score is working in a moment, so now they want to put sound effects in. This necessarily means the Sound team has to prepare an incredible amount of extra material that is probably never going to be heard by an audience.

Let's talk about the tech these days, how are these new generation of plugins helping you in your day-to-day work?

I’ve been using the Accentize stuff for a good while now, and dxRevive is certainly awesome. There’s a whole generation of new, AI-based dialogue tools available, and I think the main thing is being able to use these tools to make otherwise imperfect production audio fit better, much faster than was possible in the past.

A great example would be, classically speaking, where the original sync sound had some issue that couldn’t be overcome. We can take another recorded take and essentially manipulate it into feeling as though it was the original sync take. I think the big thing with a lot of these tools is – and I really don’t mean this to sound judgmental – that they allow less skilled people to do these production audio jobs and still give us an acceptable output for release, more so than we once could. You have the best sound recordists who do production sound, who do amazing work; they put a lot of effort into providing the best production sound they can, but in so many instances they’re dealing with environmental factors that are beyond their control. There’s a plane flying over in a costume drama, or somebody drops a weight in the middle of a take off-screen – there are all sorts of things that happen, and certainly dxRevive has been really great because it provides a really rich sound compared to some other tools.

Chameleon is probably one of my favourite plugins in the world. I use that all the time for sound effects, especially.

What was it like the first time you pushed that dial up on dxRevive? 

I’d been using Supertone Clear, iZotope RX and Waves Clarity – I pretty much have nearly all of the denoising tools for dialogue, short of CEDAR, in my studio – and, prior to dxRevive, you would always tend to just accept it’s going to sound a little thin, it’s not going to be great, but we can get it by in a pinch.

I eventually ended up getting the Pro version of dxRevive and I was pretty much astounded. I still use it, and I use it and Clear quite a lot together. It’s really, really lovely for that. It can give a little better low end than any of the other tools, and it’s certainly great for removing background. All of these new things have subtle strengths and differences. I think Production Expert did a shootout between the various tools and all of them performed pretty well. I go back and think of what we were getting from DINR – the Digidesign noise reduction tool – which seems like a very long time ago (and it actually has been a VERY long time!), and the quality has certainly improved significantly.

Have you spent much time with Chameleon?

Chameleon is probably one of my favourite plugins in the world. I use that all the time, especially for sound effects. Let’s say you have dialogue happening in a particular reverberant room space: you basically take the IR from that, very conveniently, and put it on whatever effect or foley you want to add, and it matches the dialogue reverb almost perfectly. The thing that I love about Chameleon is that it’s so immediate.
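For readers curious about the underlying idea: Chameleon estimates the room’s impulse response directly from the dialogue recording itself, but the basic principle – convolving a dry foley or effect with a room IR so it sits in the same acoustic space as the dialogue – can be sketched in a few lines. The snippet below is a minimal illustration, not Chameleon’s implementation; the file names are hypothetical and a captured IR is simply assumed to exist.

```python
# Minimal sketch (not Chameleon's actual implementation): convolve a dry
# foley clip with a room impulse response so it sits in the same acoustic
# space as the dialogue. "room_ir.wav" and "foley_dry.wav" are hypothetical
# mono files at the same sample rate.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

ir, sr_ir = sf.read("room_ir.wav")      # impulse response of the dialogue space
foley, sr = sf.read("foley_dry.wav")    # dry foley or effect to place in that space
assert sr == sr_ir, "IR and foley must share a sample rate"

wet = fftconvolve(foley, ir, mode="full")[: len(foley)]  # apply the room
wet /= max(np.max(np.abs(wet)), 1e-9)                    # simple peak normalisation

# Blend dry and wet to taste, much like riding a reverb return against the dry signal.
mix = 0.6 * foley + 0.4 * wet
sf.write("foley_in_room.wav", mix, sr)
```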

So you take the production sound space that they've recorded in and sit everything together?

Exactly. Chameleon is probably one of my most-used effects plugins in production.

Do you find the tweakable options in Chameleon helpful? 

Yeah, I don’t use them as much as you might think. Often I can just take the IR and do a balance, or print the IR output as a second channel and then write it with a fader to whatever is required in the mix.

Sometimes you probably do a bit of a high cut, don't you, things like that, just to give it a bit more of a bed?

Sometimes, yeah. With most film dialogue you’re looking at, nominally speaking, rolling things off at about 60 to 80 Hz, which again kind of just puts it in a natural space. There are probably some instances where you could have ambience with a lot of low frequency content, but I don’t find that happens very often.
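For reference, a roll-off like the one Charles describes could be sketched as a simple high-pass filter. The example below is a minimal, hypothetical illustration – the file name and the gentle second-order slope are assumptions, not his actual processing chain:

```python
# Minimal sketch (an assumption, not the interview's actual chain): a gentle
# high-pass roll-off around the 60-80 Hz range mentioned above, applied to a
# hypothetical mono file "dialogue.wav".
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("dialogue.wav")
cutoff_hz = 80                          # pick a point in the 60-80 Hz range
sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")  # gentle 12 dB/oct slope
rolled_off = sosfiltfilt(sos, audio)    # zero-phase, so transients aren't smeared
sf.write("dialogue_hp80.wav", rolled_off, sr)
```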

So, to wrap this up, what one piece of advice would you give to your younger self that’s probably been the most valuable lesson you've learned doing this?

Well, when I started at Universal, Rick LeGrand, the head of the film sound editorial department, said something that was probably the most helpful bit of advice for an aspiring sound editor, which was that you’re only as good as your last reel. So the last thing you did is what you should be able to send to somebody to judge your career on. You never get to slack off.