I am chatting with Robert Kizer, who has most recently worked on Interstellar and The Amazing Spider-Man 2. He loves dialogue work, both editing and ADR. So let's get started:
My becoming a dialogue sound editor happened more by chance than by decision, and it was a gradual occurrence rather than a sudden plunge. I started out in New York as a negative cutter, then an assistant telecine operator, then an assistant film editor, and finally a film editor. In 1979 I relocated to California and managed to land film editing jobs at Roger Corman's New World Pictures. Connected with those jobs was a bit of sound editing (this was all on film and magnetic stock). Sometimes on a show, once the picture was locked, I would be tasked with cutting the ADR lines, building the music tracks, or even creating the music tracks out of library material. During the course of these shows, I became friendly with some of the sound editors, so when I didn't have a film editing job I would call them to get some work at their shop. Those were the days when a sound editor would be assigned a reel and be expected to cut everything for that reel (minus the music). So I got a crash course in cutting dialogue, sound effects, backgrounds, ADR, walla group, and foley sound effects and footsteps.

After the stock market crash of October 1987, all the film companies that made middle-budget movies went out of business. The companies remaining were those that made major motion pictures and those that made direct-to-video. The former would not hire me as a film editor because my screen credits were not good enough for them, and the latter I would not work for because their salaries were way too low. So I stepped sideways and pursued sound editing. These were the glory days of sound editing: 40-person crews, seven-day weeks (all paid overtime), long hours (also paid). I quickly got in with two high-profile independent sound houses and, after a couple of years of general sound editing, I found myself gravitating more and more toward the dialogue side of the process. This probably was because of my film editing background. At the same time, the workflow in sound had changed.
No longer was it one sound editor cuts all sound for one reel. Now sound editors were given specific tasks. One would cut all the backgrounds for the entire show. Another would cut all the foley for the show. Another would cut all the weapons for the show. And so on. By 1990, still on film and magnetic stock, the sound editing team would be subdivided into those who just cut dialogue, those who just cut sound effects, those who just cut backgrounds, and those who just cut foley. Dialogue was further subdivided into production dialogue and ADR. Starting in 1986, I found myself focusing more and more on cutting dialogue. Shortly thereafter, ADR was added to my plate. By 1990, I was pretty much a dialogue and ADR editor. So, I would never say that I had a "passion" for dialogue, but rather I had a skill at it and I consciously worked at improving my skill. One part of that self-education was attending dialogue pre-mixes as much as possible. I learned more about cutting dialogue during those times than from anyone or anything else.
Oh boy! That could be an entire book just by itself. Not only has the software undergone dramatic changes, but so has the hardware. My very first experience cutting digital audio was on the SSL ScreenSound, for "The Lion King." Shortly after that, I was cutting Walla Group on the Avid AudioVision for 20th Century Fox's "A Walk in the Clouds." On "Mighty Morphin Power Rangers: The Movie," I was cutting all the ADR and Walla Group on the Avid AudioVision. From that point onward, I was working in the digital world and never looked back. In 2001, on "Monkeybone," I switched over to Pro Tools. The switch happened largely because 24-bit audio editing was becoming popular, and Avid announced that they were not going to upgrade the AudioVision to handle 24-bit audio.
When I started in digital, production dialogue was being recorded to DAT at 44.1 kHz, 16-bit. It would be loaded into the editing systems in real time, but slowed down by 0.1% so that it would stay in sync with the NTSC video picture (which ran 0.1% slower than 60 Hz). Production dialogue had to be manually loaded into our systems from either the source DATs (common) or the source quarter-inch tapes (rare). The picture editing department would deliver EDLs to us, which in turn would be loaded into software that would build the film editor's cut dialogue tracks from the source material we had loaded. (It used the timecode on the audio tapes as the master guide.) These builds were always imperfect. So the first few days on a reel would be spent checking each region/clip of audio against the guide track and adjusting the build so that it was in dead sync with the guide. Eventually there was software to do this, but even that software was not perfect. There were always blank spots in the build, and much time would have to be spent tracking down the audio for each of those spots. In those days, we quickly learned that we could not use the OMF from the picture department. Frequently, the quality of the audio in the OMF was inferior to the original, or we needed access to multi-channel recordings and the OMF would have only one channel in it, or the OMF was delivered with only two-second handles at either end, which was inadequate for cutting the sound for the mix. (Remember, back in the early 90s, the Avid Film Composer only allowed 8 audio channels. Eventually it went to 16 and then to 24, and probably will go higher.)
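The 0.1% slowdown he describes is the standard NTSC pull-down, a 1000/1001 rate reduction. As a quick illustrative sketch (the function name is my own, not from any editing system):

```python
# NTSC video runs at 30000/1001 ≈ 29.97 fps, i.e. 0.1% slower than
# nominal 30 fps (and its 59.94 Hz field rate is 0.1% slower than 60 Hz).
# Audio loaded against NTSC picture was slowed by the same 1000/1001 factor.
def pulled_down(rate_hz: float) -> float:
    """Apply the NTSC 0.1% pull-down to a nominal sample rate."""
    return rate_hz * 1000 / 1001

print(round(pulled_down(44100), 3))  # 44.1 kHz DAT material
print(round(pulled_down(48000), 3))  # later 48 kHz material
```

The resulting "44.1k pulled down" and "48k pulled down" rates are why sample-rate bookkeeping was such a headache before 24p.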
When I started on the Avid AudioVision we had these huge “shoebox” hard-drives that only held 2 gigabytes of data. Data and memory management was a constant occupation. Shortly thereafter, the shoebox drives could hold 4 gigabytes each, but then we were moving to working with 48k audio, still at 16-bit. File size grew, so even though the drives could hold more, our files were much bigger so we didn’t gain much maneuvering room.
In 2000, we moved to SCSI drives. Our editing systems had a chassis that could hold maybe six of these SCSI drives. The drives themselves sat inside these compact sleds which could be inserted or removed very easily. I forget how big these drives were.
As I said earlier, by 2001 we were moving into audio files that were 48k and 24-bit. The SCSI drives were soon put out to pasture, and we entered the current world of 100 gigabyte, 200 gigabyte, 500 gigabyte, and 1 terabyte drives. There is talk of working with 96 kHz, 32-bit floating-point audio.
Since about 2009, with the adoption of 24p by the filmmaking community, we in sound post have no longer had to deal with the old thorny issues of pull-down or non-pull-down, drop-frame timecode, and all the hoary pains of yesteryear.
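Drop-frame timecode, one of those thorny issues, drops frame labels ;00 and ;01 at the start of every minute except each tenth minute, so that 30 fps-style numbering keeps pace with 29.97 fps video. A sketch of the standard conversion (my own illustration, not from the interview):

```python
def frames_to_df(total_frames: int) -> str:
    """Convert a frame count at 29.97 fps to drop-frame timecode (HH:MM:SS;FF)."""
    frames_per_min = 30 * 60 - 2        # 1798: each minute drops 2 frame labels
    frames_per_10min = frames_per_min * 10 + 2  # 17982: 10th minute keeps them
    d, m = divmod(total_frames, frames_per_10min)
    if m > 2:
        total_frames += 18 * d + 2 * ((m - 2) // frames_per_min)
    else:
        total_frames += 18 * d
    ff = total_frames % 30
    ss = (total_frames // 30) % 60
    mm = (total_frames // 1800) % 60
    hh = (total_frames // 108000) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(frames_to_df(1800))   # first frame after the 1-minute drop
print(frames_to_df(17982))  # 10th minute: nothing dropped
```

The semicolon separator is the conventional signal that a timecode is drop-frame rather than non-drop.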
Our dialogue sessions are much more involved and complicated given the growth of multi-channel recording on the set. Plus digital editing has made it easier for us to cut the material so that it is ready for the domestic sound mix as well as the foreign M&E mix (music and effects). And the dialogue editors will often go through and present alternate production readings from cleaner angles when they see that the line is on the list to be looped. Often the director will opt to go with a production alternate rather than an ADR line, simply because the production alt has that “production grit.”
But sync is still a problem. Something that was solved quite well back in 1929 still, shockingly, eludes us in this modern day.
For a good part of my career, mixing was a closed door to me. In Hollywood, on union shows, editing and mixing were handled by two different locals: IATSE Local 776 covered editing (film and sound), and IATSE Local 695 covered sound recording (both production and post-production). Simply put, if you sat behind a Moviola you were in 776, and if you sat behind a tape recorder or a mixing console you were in 695. One could not do the job of the other. Those were the rules. The growth of non-union sound houses allowed people to move between the two worlds. But because I was working in the union world of sound editing, I had no opportunities to mix or even pre-mix my material. Eventually, in 1996, there was a big reshuffling of jurisdictions in Hollywood, which led to the creation of IATSE Local 700. In it were all the editors, the lab technicians, and all the sound personnel who worked in post-production. Local 695 continued on, largely representing the on-set production sound mixers and recordists. Gradually, with improvements in Pro Tools and the admission into the union of many non-union sound editor/mixers, the once-firm line between editor and re-recording mixer became blurred. Skywalker Sound in northern California was probably the biggest driver of breaking down the old restrictions. Because they were so far removed from the Hollywood union offices, they had a lot more leeway to bend the rules than anyone in Los Angeles. So, as I said in my first statement, mixing was not an option open to me when I started working in dialogue editing. To date, I have not been given an option to sit down at a mixing console of any kind and work on my material.
If I had a spotting session with the director, then my notes of that meeting are typed up and passed out to all the dialogue editors and to the supervising sound editor. As my ADR programming progresses, each iteration gets duplicated and passed out to the same people. Once I have other ADR editors cutting material for me, they are included in the great paper trail.
Conforming is our friend. It represents job security. Picture changes only hurt when you are working on a very limited budget with no room for overtime. On a studio movie, changes are part of the world, and there is money to cover it. So one just dives in and conforms everything up. There are times, when changes are happening fast and furious, that the supervising sound editor and film editor will agree that the sound department may skip certain versions that come down the road. In such instances the film editing department then provides a "jump note" taking us from the last version we conformed and jumping us up to the latest version.
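In its simplest form, a conform shifts everything after a picture change by the length of that change. A toy model of that one case (real conform tools work from full change lists and also handle edits that land inside clips, overlaps, and multiple changes per reel):

```python
# Toy conform: a picture change inserts (delta > 0) or removes (delta < 0)
# `delta` frames at `change_point`; every clip starting at or after that
# point shifts by delta, earlier clips stay put.
def conform(clip_starts: list[int], change_point: int, delta: int) -> list[int]:
    return [s + delta if s >= change_point else s for s in clip_starts]

# A 48-frame (2 seconds at 24 fps) trim made at frame 400:
print(conform([100, 500, 900], change_point=400, delta=-48))
```

Applying a "jump note" is conceptually the same operation repeated for every edit between the version you last conformed and the latest one.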
Our sessions have to be kept up to date as much as possible. Especially on a show that is “virtual” (meaning, no physical pre-mix, just automation in the system). Plus, my ADR programming has to be always up to the latest version, because I never know when a principal actor is suddenly available for looping.
In dialogue editing, the performance in the cut is the performance in the film. That judgment has been made by the director and film editor. They have access to ALL the outtakes; I don’t.
In ADR editing, however, I sometimes will create Franken-lines (a reading stitched together from three or four different takes), with different pieces pitched up or down as needed. This is mainly to present a line reading more in keeping with the original production version, one which the actor was unable to match during the ADR session. There are also times when I feel the reading selected during the ADR session is inferior to an earlier take. I will always present the selected take, but then I will also present the other take as an alternate. And I make a note on the printed cue sheet (dubbing log) to remind me what I have done and the reason for it. (Yes, I like to have printed cue sheets for the pre-mix and the mix.)
Yes. As a dialogue editor, I often am called upon to make little adjustments in the tracks or find alternate readings. As a supervising ADR editor I am asked about choices of takes of ADR, and sometimes will have to present different alternates to those I had already prepared.
In general, I voice my opinion about how well the mix sounds. But I am very circumspect about that. I only participate when the discussion has been thrown open to all present. But if there is something I feel particularly passionate about, I am not shy about making my feelings known. Still, too many cooks can spoil the broth, so I tend to be a quiet observer.
The on-set mixers need to watch what is happening on the set. Being sequestered away from the action does not help in the capturing of good audio.
Record wild lines, the more the better. On “Interstellar,” there are substantial sections where McConaughey’s dialogue was all built and synched from wild tracks.
Never use wireless boom mikes. It’s one thing to use radio lavalier mikes, but the boom mike needs to be hard-wired to the recorder. We need that bandwidth.
If the actors' feet are not visible in the shot, either cover their shoes with slip-on cotton booties (like hospital workers use) or get them to take their shoes off! And if that can't be done, then throw some carpet down on the floor.
If the movie is one of those big tent-pole projects with tons and tons of CGI shots, and you have actors with big painted dots on their faces and clothes, then slap a headset mike on them and paint it green! Let the CGI folk remove the mike along with all the dots.
Try to talk directors out of doing two-camera shots where one camera is capturing a wide shot and the other is on a really long lens to get a close-up. And if you can't, try to figure out a way to hide a mike for the close-up angle. They will always use the close-up angle, and the sound for it always sounds like garbage because, of course, the mike had to be held far away to preserve the integrity of the wide shot.
Just know that directors today want to use as much of the production track as possible and then some. So it’s up to the on-set mixer to fight the good fight and get that sound.