Get Ready to Kick-Start Your Post Production Career This Year

Audio post production is the field of audio engineering concerned with manipulating the sounds created for moving pictures. The well-known term "soundtrack" comes from this specific audio material, which has co-existed with visual programming since movies started to be produced with sound in 1926. A lot has changed since then. Technology has evolved and digital machines have taken over the process of capturing, editing, and fitting soundtracks to video. Now it’s 2017 and the demand for audio post production work is higher than ever: mass media companies produce content faster and faster, reaching into as many markets as they can through the Internet and traditional media outlets.

Essential Tips & Tricks Every Post Production Engineer Should Know

Getting started in the audio post production industry in 2017 means becoming familiar with noise reduction, techniques for proper level control, Automatic Dialogue Replacement (ADR) tools, and aural enhancement effects such as EQs and dynamic range processors.

 

The Importance of Audio Restoration for the Years to Come

Audio restoration covers all the processes involved in removing unwanted signal data from an audio recording: hisses, crackles, environmental noise, electrical hums, and so on. There is also a new giant in the audio restoration field, peeking around the corner and preparing for its show time in the years to come: immersive 360° Virtual Reality audio. Post production for this type of material requires extra effort, as the audio assets must be as clean and clear of audible noise as possible. The main reason is that companies with 360° video platforms, such as Facebook and Google, have already started to implement audio formats with spatial metadata included. This allows their video players to create a stereo mixdown from several audio tracks contained within a video file, each carrying a different source element of the scene, producing a dynamic binaural effect as the user moves their point of view. For any kind of spatial effect to work, the source signal should be as clean and dry as possible.
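To get a feel for why clean, dry sources matter here, the toy Python sketch below pans a single mono scene element into a stereo mixdown according to the viewer's head orientation. It is only an illustration of the idea: real 360° platforms use ambisonics and HRTF-based binaural rendering rather than a simple pan, and the function and parameter names are made up for this example.

```python
import numpy as np

def pan_to_stereo(source, source_azimuth, listener_yaw):
    """Toy stereo mixdown of one mono scene element.

    source         : mono samples (NumPy array)
    source_azimuth : angle of the element in the scene, in radians
    listener_yaw   : where the viewer is currently looking, in radians
    """
    # Angle of the element relative to the viewer's current view direction
    relative = source_azimuth - listener_yaw
    # Map +/- 90 degrees to a pan position between -1 (right) and +1 (left)
    pan = np.clip(relative / (np.pi / 2), -1.0, 1.0)
    # Constant-power pan law
    left_gain = np.cos((1.0 - pan) * np.pi / 4)
    right_gain = np.cos((1.0 + pan) * np.pi / 4)
    return np.stack([source * left_gain, source * right_gain], axis=-1)
```

Every track in the spatial mix gets repositioned like this as the viewer looks around, so any noise or reverberation baked into a source moves around with it - which is exactly why restoration matters so much for this material.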



There are many traditional tools that can aid you in removing unwanted noise from audio recordings. One of the most well-known is the Noise Gate. Noise Gates are processors commonly used to suppress unwanted noise that becomes audible when the audio signal is at a low level. You can use one to remove background noise, crosstalk from other signal sources, and low-level hums, among other things. A Noise Gate works by allowing the signal to pass unimpeded while it is above a threshold level and reducing it when its level falls below that threshold. This effectively removes the lower-level parts of the signal while letting the desired parts of the audio pass.
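As an illustration of that behavior, here is a minimal noise gate sketch in Python using NumPy; the parameter defaults are placeholders for the example. A production gate would add hysteresis, hold time, and gain smoothing, but the core idea is the same: follow the signal envelope and mute whatever falls below the threshold.

```python
import numpy as np

def noise_gate(signal, sample_rate, threshold_db=-50.0,
               attack_ms=5.0, release_ms=100.0):
    """Minimal downward noise gate: mute the signal while its envelope
    stays below the threshold, pass it unimpeded above the threshold."""
    # One-pole envelope follower with separate attack/release times
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    threshold = 10.0 ** (threshold_db / 20.0)

    envelope = 0.0
    gain = np.ones_like(signal)
    for i, x in enumerate(np.abs(signal)):
        coeff = attack if x > envelope else release
        envelope = coeff * envelope + (1.0 - coeff) * x
        # Hard gate: open above the threshold, closed below it
        gain[i] = 1.0 if envelope > threshold else 0.0
    return signal * gain
```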

Every location in the world has some kind of background noise (unless it’s an anechoic chamber). 

Anechoic chamber (source: Wikipedia)

This poses a challenge when dealing with noise removal, because most of the time this "bad" background noise includes reverberation from the room or field of the original recording. Add the machine noise from electrical appliances such as air conditioning units and refrigerators that blends with that reverberation, and it is easy to understand why audio restoration tools often remove frequencies you need to keep. Fortunately, recent research in advanced audio restoration has made it possible to identify and treat noise and reverberation in a recording separately, without the need to define a noise profile. This preserves important signal data in real time and lets you remove only as much noise as the material actually needs. Our own ERA-D is the industry-leading tool for this technique.
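For contrast, the sketch below shows the traditional profile-based approach those newer techniques improve on: classic spectral subtraction, which needs a noise-only section to build a noise profile and tends to attenuate wanted low-level detail along with the noise. This is a generic textbook illustration using SciPy, not a description of how ERA-D works.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, noise_only, sample_rate, reduction=1.0):
    """Classic profile-based denoising: estimate the noise spectrum from a
    noise-only section, then subtract it from every frame of the recording."""
    f, t, spec = stft(noisy, fs=sample_rate, nperseg=1024)
    _, _, noise_spec = stft(noise_only, fs=sample_rate, nperseg=1024)

    # The "noise profile": average magnitude of the noise-only material
    noise_profile = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

    # Subtract the profile from each frame; anything quieter than the
    # profile (including wanted low-level detail) gets attenuated too
    magnitude = np.maximum(np.abs(spec) - reduction * noise_profile, 0.0)
    cleaned = magnitude * np.exp(1j * np.angle(spec))

    _, out = istft(cleaned, fs=sample_rate, nperseg=1024)
    return out
```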

 

Becoming Familiar with Basic EQ Principles and Mixing Concepts

Mixing audio for post production relies on almost the same set of prerequisites as music mixing: frequency equalization, volume dynamics, and spatial placement. If you also consider the vast number of audio tracks that end up in a typical DAW session for a movie, it becomes clear that, without prior knowledge of mixing techniques, you will have a hard time making each element sit in the right place in the mix. Effectively mixing all these elements is more complex than merely adjusting volume levels and transitions between them. Dialogue, for example, carries little useful low frequency content, so its low end can be filtered out to prevent low frequency build-up against other tracks, such as your musical elements.
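A minimal example of that dialogue clean-up step, assuming SciPy is available: a high-pass (low-cut) filter applied to a dialogue track. The 80 Hz cutoff is just a common starting point, not a fixed rule.

```python
from scipy.signal import butter, sosfilt

def highpass_dialogue(dialogue, sample_rate, cutoff_hz=80.0):
    """Roll off everything below the cutoff on a dialogue track so it does
    not fight the low end of the music and effects tracks."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, dialogue)
```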

Proper EQ can make voices stand out, minimize extraneous rumble, and tame an overly bright segment, among other things. If you're in this for the long run, spend a little time researching and becoming more familiar with the audio frequency spectrum and equalization.

 

Artificially Adding Ambient Sound

Often, it’s your task as a Post Production Engineer to add an ambient soundtrack to the project you are working on. Ambient sounds carry the aural information our brains use to recognize certain places, such as a subway station or a fast food restaurant. Normally this consists of a broad bed of noise along with specific audio cues, and it helps immerse the viewer in the scene. Choosing the right ambient sound from a company’s sound effects library or your own personal library (more on that later) can be tricky and tiresome. A good final result is always worth the extra work, though, as a properly selected or constructed ambient soundscape can turn an inanimate scene into an immersive gateway for the viewer.

 

 

Creating Your Own Sound Library

It is important early in your career as a Post Production Engineer to build the habit of recording soundscapes, natural sounds, weird sounds, foley, and pretty much anything else you can get a microphone close to. Carrying a portable audio recorder, such as a Zoom, in your bag and bringing it out every time you hear something interesting and worth recording is perfectly fine. And no, it doesn’t make you a weirdo. If you make a habit of it, you will soon build a database of natural, real-world sounds that you can further sculpt and manipulate inside your DAW to create surreal soundscapes and unique sound effects. Cherry-picking the best results and storing them for later use creates a continuously growing library of sounds you can revisit anytime you want. Most important of all, these sounds will be unique and carry your own distinct aesthetic, which could be the deciding factor for clients choosing you in the future.

 


Understanding Loudness Standards

Variations in loudness across broadcast audio have been a problem in the industry for many years. They are troublesome for viewers, who must readjust the volume of the TV or radio every time a new song or a different commercial pops up. In recent years, broadcast organizations around the world came to an agreement on how loud audio in broadcast media should be and how to measure that loudness consistently. The result is widely adopted in many countries, especially in Europe, where the EBU R128 standard has been in effect since 2011. As a Post Production Engineer working with clients in these countries, you must follow the guidelines for the target loudness of each format and use the appropriate tools to measure the loudness of your final mix.
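As a rough sketch of how you might check a mix against that target, the snippet below uses the third-party pyloudnorm and soundfile Python packages to measure integrated loudness per ITU-R BS.1770 and compare it against the EBU R128 target of -23 LUFS. The file name is a placeholder.

```python
import soundfile as sf
import pyloudnorm as pyln

# Measure the integrated loudness of a final mix against the EBU R128
# target of -23 LUFS ("final_mix.wav" is a placeholder file name)
data, rate = sf.read("final_mix.wav")
meter = pyln.Meter(rate)                      # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)

print(f"Integrated loudness: {loudness:.1f} LUFS (target: -23.0 LUFS)")
print(f"Offset to target: {-23.0 - loudness:+.1f} dB")
```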

 

Finalizing for Different Formats

In the digital age of on-demand streaming services such as YouTube, Vimeo, and Netflix, it is important to apply different final optimizations to your mix for different delivery platforms. Creating a single final master is likely to cause problems arising from sample rate conversions (for example, YouTube uses 44.1 kHz while Vimeo uses 48 kHz) and from compression encoding artifacts. Always stay up to date on the latest specifications the streaming services use, and create a workflow that lets you apply the same encoder settings and audition how your final mix will sound through them.
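For example, one simple way to prepare a 44.1 kHz delivery version of a 48 kHz master is a polyphase resample with SciPy, as sketched below; the file names are placeholders, and the encoding step for a specific service would follow this.

```python
import soundfile as sf
from scipy.signal import resample_poly

# Resample a 48 kHz master to 44.1 kHz before encoding a delivery version
# (147/160 is the exact 48000 -> 44100 ratio)
master, rate = sf.read("master_48k.wav")
assert rate == 48000

delivery = resample_poly(master, up=147, down=160, axis=0)
sf.write("delivery_44k1.wav", delivery, 44100)
```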

 

 

Main picture source: Pinnacle_Collage on Flickr, under the Creative Commons Attribution 2.0 Generic license


Register now at accusonus and download a FREE Regroover sample pack


You can use Regroover to take traditional music sampling to a whole new level, unmix grooves and access previously unreachable sounds. Try to split your sound samples with Regroover and you’ll see that the new era of sampling is fun, intuitive and inspirational.
