I posted a query on a messageboard about something, but after some discussion in which people answered a question I wasn't asking, I have come to realise there may not actually be an answer.
I thought I would talk about it here, so I can clarify some things in my own head, and cover a little bit of technological development history at the same time.
The saying goes that if you build a better mousetrap, the world will beat a path to your door. The key word in that phrase is "better". The point is not to invent something out of whole cloth, but to improve what is already out there so that it works more efficiently, perhaps more humanely, probably more cheaply, and gets better results.
When George Lucas looked at the way visual effects and film editing were done (optically and physically, in a linear, laborious way), he knew it was time for improvement. Somehow he recognised that the technology was available, or coming very soon, to make considerable improvements to the workflow, which would increase productivity, reduce frustration with limitations, and ultimately lead to a new way of working.
He, or rather technicians within the company he headed, created EditDroid, a non-linear digital editing system that allowed editors to move clips and frames around the timeline in whatever combination they wanted, quickly, immediately, and without chopping up the physical film (with every cut of a strip of film, you lose at least two frames, one from each end of the splice). EditDroid itself never caught on commercially, and its technology was eventually sold to Avid, whose system became the industry standard for editing. Few film editors still cut physical film, and it's a rare one who doesn't use Avid as their NLE of choice. There are also low-budget, off-the-shelf amateur NLE applications that borrow some of what EditDroid and Avid pioneered, though they approached it from the digital video camera side, so they have distinct variations in interface and methodology. But effectively, it was Lucas's need to change the status quo that set a new standard and forever changed the way we edit film and TV.
James Cameron, in the same industry, is another pioneer who recognises when things are being done inefficiently and require fundamental improvements. He is excellent not only at motivating change, but at pinpointing exactly where the existing process is failing, and he often has insightful suggestions on how to fix it.
3D worked so well for Avatar because he knew where it had been going wrong and had the technical expertise to adjust it until it worked. He also saw where motion capture was consistently tripping up, and was familiar enough with visual effects and camera technology to provide solutions that addressed many of its main problem areas. I'm sure he has more improvements planned for his future films, especially as all of them will undoubtedly be 3D and use motion capture. He used the same uncanny skills in bringing the liquid-metal T-1000 to the screen in Terminator 2, and in pioneering motion-captured characters for the wide shots of Titanic.
This kind of innovation has been rare, but effective. To fundamentally change how everyone does things is a huge undertaking, but it has to be done, or we will remain stagnant, trapped in an increasingly limited pocket as other technologies pass us by.
So if digital editing and digital visual effects can undergo such a transformation, the next step has to be digital audio.
I think there are flaws in the current standard of digital audio editing and manipulation that are being overlooked, overshadowed, if you like, by the visual, and it is time for them to be addressed.
Here's my problem. When filming takes place on location, the sound is always at risk of being interrupted by real-world interference: traffic, wind and weather, creaky floorboards, aeroplanes, and a hundred thousand other things. I was once trying to film something, so I went hunting for a quiet, out-of-the-way spot where nobody was going to interfere, and as soon as I clicked "Record", a big truck pulled up with its radio blaring salsa music. It was absurd.
So the answer to that, often unavoidable, problem is to record the audio again later, and the most popular way is in a soundproof recording studio. This is called Looping, or ADR, which stands for Automatic Dialogue Replacement (or some think it's Additional Dialogue Recording; they're wrong, but whatever).
The problem with ADR is that a real location has an open-air quality that is distinct and unique. The sound bounces off nothing but the ground, or off a wall a particular distance away, adjacent to another wall made of a specific wood of a particular thickness, while one (or five, or two hundred) people stand nearby in clothing with its own dampening qualities, and so on. The list of things that make not only the location, but the way sound reverberates and is recorded there, unique never ends. Even the brand of boom microphone can make a difference. So when the actors go back into the studio and record the ADR, it does not sound the same as the live location. It sounds like what it is: a carefully dampened soundproof booth.
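To illustrate that the reverberation, at least, is something that can in principle be re-created digitally: if someone captured an impulse response at the location (a clap or a balloon pop recorded in the space), convolving the dry ADR with it superimposes the location's reflections onto the studio take. The sketch below is just that idea at its simplest; the file names are placeholders I've made up, and it ignores the noise floor, microphone colouration and everything else that makes real matching hard.

```
# A minimal sketch: convolve a dry ADR take with an impulse response captured
# at the location, so the studio recording picks up the location's reflections.
# File names are made up; assumes mono WAV files at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_adr, adr = wavfile.read("adr_take.wav")
rate_ir, impulse = wavfile.read("location_impulse.wav")
assert rate_adr == rate_ir, "resample first if the rates differ"

adr = adr.astype(np.float64)
impulse = impulse.astype(np.float64)

# Convolution stamps the location's reverberant "fingerprint" onto the dry take.
wet = fftconvolve(adr, impulse)
wet /= np.max(np.abs(wet)) + 1e-12          # normalise to avoid clipping

wavfile.write("adr_with_location_reverb.wav", rate_adr, wet.astype(np.float32))
```

Convolution reverb like this already exists in plugins, of course; the point is only that the maths is straightforward once you have the measurement.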
Actors are getting increasingly adept at re-creating the exact phrasing and timing of their original dialogue, so I cannot lay the blame on them. But dialogue audio editing needs a serious shake-up, because there is no longer any excuse for shoddy mismatches in audio quality between location sound and ADR.
We have recording equipment of astonishing quality, and digital manipulation tools with unparalleled fine-tuning controls. Surely those tools can analyse waveforms and re-create the acoustic character of a space. After hundreds of tests, it should be possible to put together a thousand macros and new filters that make the ADR sound the same as the audio around it.
What is holding them back? Why are they stuck with software so limited that it cannot achieve what, in my mind, is simply manoeuvring waveform peaks and troughs into whatever shape we want?
If we can have pixel control of the visual image, then surely we can have similar fine control of audio waveforms.
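Here's the kind of thing I mean, in rough sketch form: take the long-term spectrum of the location guide track and of the ADR take, work out the difference, and filter the ADR towards the location's spectral shape. The file names and numbers below are placeholders of my own, and real dialogue matching would also have to deal with reverb, noise floor and level; the sketch only shows that measuring and correcting the spectrum is plain arithmetic on the waveform data.

```
# A rough "match EQ" sketch: estimate the long-term spectrum of a location
# recording and an ADR take, then filter the ADR towards the location's
# spectral shape. Assumes mono WAV files at the same sample rate; names and
# parameters are illustrative only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch, firwin2, lfilter

def average_spectrum(signal, rate, nfft=4096):
    """Long-term average magnitude spectrum via Welch's method."""
    freqs, power = welch(signal.astype(np.float64), fs=rate, nperseg=nfft)
    return freqs, np.sqrt(power)

rate_loc, location = wavfile.read("location_guide.wav")
rate_adr, adr = wavfile.read("adr_take.wav")
assert rate_loc == rate_adr

freqs, mag_loc = average_spectrum(location, rate_loc)
_, mag_adr = average_spectrum(adr, rate_adr)

# Correction curve: push the ADR towards the location's spectral shape,
# clamped to +/-12 dB so narrow dips don't blow up into huge gains.
gain = np.clip(mag_loc / np.maximum(mag_adr, 1e-12),
               10 ** (-12 / 20), 10 ** (12 / 20))

# A linear-phase FIR filter that applies the correction curve to the ADR.
taps = firwin2(2049, freqs / (rate_loc / 2), gain)
matched = lfilter(taps, [1.0], adr.astype(np.float64))
```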
The technology is still stuck in an analogue mode of thinking, and it needs a kick up the arse to break free: a whole new approach, rebuilt from the ground up, to finally see the infinite possibilities it has at its disposal.
No more limitations!
5 Reasoned Responses:
Spielberg & Michael Kahn still cut on film.
re sound: your comments are ill informed & flawed. You should sit in on a session with a sound designer / dialogue editor, because nothing is "holding them back & they're not stuck with such useless software." For what it's worth, not all ADR is recorded in sound proof booths.
Welcome to the internet... the mis-information super highway.
It would be nice if I could sit in on a session with a professional sound designer. That's not very likely at this stage.
I am aware that ADR is sometimes recorded at the same location and edited in afterwards. I have done that myself. But that isn't my point.
The existence of bad ADR means we need better tools.
Yeah, I'm kinda torn on this one. What you're asking for is the audio equivalent of a button on the keyboard that says "create CG image" or "create entire CG environment": you press the button and presto, there it is, exactly as you want it.
The truth is that audio tools ARE very good these days; it's the mixing that is the cause of your concern. You are right that it would be great to have a button on the mixer that says "mix ADR as if it were the onsite location", but alas, just like the keyboard button for instant CG FX, it doesn't exist.
You can definitely argue that ADR is an out-of-date necessity, but so is the maxim that all shots in a live-action film are done one at a time. No matter how many cameras a film has, it's still a slow and tedious shot-by-shot process, even when different actors are used. This, to me, is where film making really gets bogged down.
In the end it's not a question of how long the audio takes but how good it turns out. Ultimately its quality lies squarely in the hands of the mixer, and like films themselves, sometimes you'll get a good one and sometimes a bad one.
One thing I DO know is that on short films in particular, audio is often the most overlooked facet, and that's what drags the overall quality down if it isn't done properly. So if anything, the argument shouldn't be about ways to improve audio creation techniques, but about getting short film makers in particular to pay more attention to it.
Having said all this, if there WAS a button on the mixer that said "mix ADR as per the location sound" then that would've made my life on PsyofK MUCH easier. :)
Dags
I daresay there is considerable room for improvement, though I wonder if the infinite complexities of nature can be captured in a series of easily applicable filters. With regard to indie and fan films, neglect of sound design is one of the most common and destructive problems. There's no point spending a fortune on props, sets and software if your sound is at the level of a compressed toilet flush.
You misunderstand. I'm not looking for a single button click that fixes everything. I know that's not going to happen. Adjusting audio is always going to be just as complicated as adjusting video in the colour grade.
But there needs to be a lossless and logical approach, with infinite control. In the best colour grading tools, there are graphs and scopes that measure and compare the colour levels throughout the image, at pixel level, so you can compare two images and better match them.
But audio doesn't seem to have that. It's trapped in judgement calls and comparisons that are failing at the human level, instead of being controllable and measurable at the digital level.
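To make that concrete, here is a rough sketch of the kind of "scope" I imagine for audio: measure both clips in a handful of frequency bands and print the difference in decibels, so the mismatch between location and ADR becomes a set of numbers you can read off rather than a judgement call. The band centres and file names are just my own illustrative choices, not anything a real tool uses.

```
# A sketch of an "audio scope": measure two clips in rough third-octave bands
# and report the per-band level difference in dB. File names and band centres
# are illustrative; assumes mono WAV files.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

BAND_CENTRES = [125, 250, 500, 1000, 2000, 4000, 8000]  # Hz

def band_levels(path, centres=BAND_CENTRES):
    """Per-band level in dB for a mono WAV file."""
    rate, data = wavfile.read(path)
    freqs, power = welch(data.astype(np.float64), fs=rate, nperseg=8192)
    levels = []
    for fc in centres:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)   # roughly a third-octave
        band = power[(freqs >= lo) & (freqs < hi)]
        levels.append(10 * np.log10(band.mean() + 1e-20))
    return np.array(levels)

location = band_levels("location_guide.wav")
adr = band_levels("adr_take.wav")

for fc, diff in zip(BAND_CENTRES, adr - location):
    print(f"{fc:>5d} Hz: ADR is {diff:+.1f} dB relative to location")
```

Once the difference is a number per band, matching it becomes a measurable target instead of an argument about what somebody's ears are telling them.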
It needs a reboot, from the ground up, that lets go of legacy and addresses these issues in an all-new way.