AI in Audio Engineering

The dawn of generative AI at industry scale, with capabilities that genuinely start to blur the line between what humans and computers can do, brings a new wave of computational applications for audio engineering.

Consider the ideal, super-human audio engineer: unlimited scalability, and a complete command of audio system design, mixing theory, music theory, and plugins. What would you have this super-human servant do for you, your venue, or your church on a regular basis? Apple’s Logic Pro recently unveiled an AI mastering plugin that takes the unmastered music you give it, analyzes it, and then processes the mix to mastered quality. The product is very impressive for mastering applications, and this is just the beginning. Pretty soon the technology will handle mastering much more dynamically, with more control over the types of processing and the sonic flavor the engineer wants.

But what about other use cases? Think about any situation in which you want to take a mix and improve it. It could be live music and binaural audio for in-ear mixes, or it could be your broadcast and livestream audio feeds. Pretty soon we will have the technology to analyze these audio sources and produce outputs that are superior to the original mix. You could have dynamic control over groups within a mix, casually adjusting vocals and other elements of a large mix with a small number of controls, as AI reins in an absurd number of inputs into a simple set of controls for the user, or for the AI itself, to work with.

For what?

This is the great question that will shape the philosophical arguments about AI in audio. For what purpose, and toward what end result, are we excited to involve AI? I think the first introduction of such a capability was the volume-rider apps for live consoles: you set the level you want, and the system adjusts the mix in real time to match that target. At a far greater level of complexity, AI will take your parameters and fader levels, take your inputs, and after analysis produce a chain of processing and settings that governs the audio system. You will then become the slave to the technology, simply handling the muting and unmuting of channels. The end game of this game-changing AI will be to take over much of what makes a pro audio engineer worth their fee. AI is only capable because it is trained on a trove of data from musicians and techs who are themselves extremely capable. The same is true of every pro engineer: they are masters because they have learned from the masters.
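
To make the volume-rider idea concrete, here is a minimal sketch of the underlying loop, assuming a mono float signal in a NumPy array; the target level, block size, and smoothing constant are illustrative placeholders, not the settings of any real product.

```python
import numpy as np

def volume_rider(signal, sr, target_dbfs=-18.0, block_ms=50, max_gain_db=12.0, smooth=0.2):
    """Ride the gain block by block so the RMS level drifts toward a target (illustrative sketch)."""
    block = max(1, int(sr * block_ms / 1000))
    out = signal.astype(np.float64).copy()
    gain_db = 0.0
    for start in range(0, len(out), block):
        chunk = out[start:start + block]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12            # avoid log(0) on silent blocks
        error_db = target_dbfs - 20 * np.log10(rms)           # how far this block is from the target
        gain_db += smooth * (error_db - gain_db)              # one-pole smoothing so the gain never jumps
        gain_db = float(np.clip(gain_db, -max_gain_db, max_gain_db))
        out[start:start + block] = chunk * 10 ** (gain_db / 20)
    return out
```

A real rider would work per channel, respect attack and release times, and probably weight the measurement toward vocal frequencies, but the feedback loop above is the core idea.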

What will distinguish the pro audio engineer from the novice in the future of AI will be what we do with it. Will we allow AI to take over and literally run every aspect of our mixes, or will we use it as a tool that helps train our ears to better identify weaknesses in them?

Another use case is bringing master-class engineering to situations where an uncomfortable number of inputs must be controlled at once.

With this philosophical debate about end goals in mind, we should identify some of the things we hope AI can help us hear and adjust better in our mixes.

Good compression - We want AI to help us identify where compression is needed and the settings that would be ideal for the end goal (a rough sketch of that kind of analysis follows this list).

Dynamic EQ - Closely related to EQ and compression, a “smart” dynamic EQ that can actively target and adjust problem areas while highlighting desired tones would be ideal.

De-essing - Another flavor of compression that AI can help tackle.

Phase issues - AI would be a master technician, capable of analyzing the phase of every signal and identifying problematic summation across multiple channels with non-linear phase relationships (see the phase-check sketch after this list).

Uniform listening experience - AI could take one or more reference tracks and compare them against multiple live audio inputs. It could then compare the live audio with the broadcast feed and suggest changes that make the broadcast more uniform and comparable to the live mix experience. Much like mastering a song, the AI would master your live sound so the broadcast is more pleasant and reflects the live mix (a spectral-comparison sketch follows this list).

Binaural audio feeds - Expanding our understanding of the binaural experience, AI could be used to develop and deploy systems that better convert mono and stereo sources to binaural, and to offer tools for managing these experiences.

System tuning - It won’t be long before you can get a black box that takes all the inputs of conventional system-tuning software and, via links to the system controllers, analyzes an entire room within minutes, sets parameters for each speaker element, and aligns all speakers and levels to match target traces (see the delay-alignment sketch after this list).

Live backing-track modulation and rendering - Just as harmony generators were introduced to live sound not long ago, live backing-track software will soon take the outputs of multiple audio sources and condense their overall sonic character into an output that can be modulated like a synthesizer to fill gaps in your live music. It will be a MIDI-driven system that creates live backing tracks. Perhaps it will never be ready for full-time live shows, but the technology will make practice sessions more like recording sessions, producing variations of backing music that you can then choose from for your final live mix.
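
On the compression item above, one way an assistant could “identify where compression is needed” is to look at the crest factor (peak level versus average level) of a channel and propose starting settings when it is too wide. This is a minimal, hypothetical heuristic in NumPy, not the method any product actually uses; the threshold offset and ratio rule are placeholders.

```python
import numpy as np

def suggest_compression(signal, crest_target_db=12.0):
    """If the crest factor is well above a target, suggest rough compressor settings (hypothetical heuristic)."""
    rms = np.sqrt(np.mean(signal ** 2)) + 1e-12
    peak = np.max(np.abs(signal)) + 1e-12
    crest_db = 20 * np.log10(peak / rms)                  # dynamic range between peaks and average level
    if crest_db <= crest_target_db:
        return None                                       # dynamics already in range; no compression suggested
    return {
        "threshold_db": 20 * np.log10(rms) + 3.0,         # start a few dB above the average level
        "ratio": round(min(6.0, 2.0 + (crest_db - crest_target_db) / 3.0), 1),
        "reason": f"crest factor {crest_db:.1f} dB exceeds the {crest_target_db:.1f} dB target",
    }
```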
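
For the phase-issues item, the starting measurement is straightforward to sketch: correlate two time-aligned channels and check how their sum behaves. The function below is an illustration in NumPy, assuming two mono captures of the same source; it is nowhere near a full phase analysis across a whole console.

```python
import numpy as np

def phase_check(ch_a, ch_b):
    """Rough indication of how two channels combine: correlation plus the level change when they are summed."""
    n = min(len(ch_a), len(ch_b))
    a, b = ch_a[:n], ch_b[:n]
    corr = float(np.corrcoef(a, b)[0, 1])                 # +1 = in phase, -1 = polarity flipped
    summed = a + b
    sum_gain_db = 10 * np.log10((np.mean(summed ** 2) + 1e-12) /
                                (np.mean(a ** 2) + np.mean(b ** 2) + 1e-12))
    return corr, sum_gain_db                              # clearly negative sum_gain_db hints at cancellation
```

A strongly negative correlation, or a summed level well below the two channels on their own, is the kind of flag a smarter tool would then trace back to mic placement, delay, or polarity.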
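
For the uniform-listening-experience item, “compare the broadcast feed to a reference” can start as a long-term spectrum comparison: measure the average spectrum of each signal and report the per-band offsets. Here is a minimal sketch using SciPy’s Welch estimator; the band edges and argument names are arbitrary examples, and both signals are assumed to be mono arrays at the same sample rate.

```python
import numpy as np
from scipy.signal import welch

def band_offsets(broadcast, reference, sr, bands=((60, 250), (250, 2000), (2000, 8000))):
    """Average spectrum of each feed per band, returned as dB offsets (reference minus broadcast)."""
    freqs, p_bcast = welch(broadcast, fs=sr, nperseg=4096)
    _, p_ref = welch(reference, fs=sr, nperseg=4096)
    offsets = {}
    for lo, hi in bands:
        sel = (freqs >= lo) & (freqs < hi)
        bcast_db = 10 * np.log10(np.mean(p_bcast[sel]) + 1e-12)
        ref_db = 10 * np.log10(np.mean(p_ref[sel]) + 1e-12)
        offsets[(lo, hi)] = round(ref_db - bcast_db, 1)   # positive = broadcast could use a boost in this band
    return offsets
```

The offsets are only a hint; an engineer (or a more capable system) would still decide whether a difference is a flaw in the broadcast mix or a deliberate choice.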
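
And for the system-tuning item, one small piece of that black box is easy to illustrate: estimating the arrival-time offset between two measurement captures by cross-correlation, which is roughly what existing alignment tools automate. A sketch assuming two NumPy arrays captured from the same test signal at the same sample rate:

```python
import numpy as np

def delay_offset_ms(main_capture, fill_capture, sr):
    """Estimate how many milliseconds the fill speaker lags the main via cross-correlation."""
    corr = np.correlate(fill_capture, main_capture, mode="full")
    lag = int(np.argmax(corr)) - (len(main_capture) - 1)   # positive = fill arrives after the main
    return 1000.0 * lag / sr
```

For long captures an FFT-based correlation (for example scipy.signal.correlate) is far faster, and a real tuning system would also weigh level, phase response, and coverage, but the delay estimate above is the same basic trick.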


Novice audio techs will continue to devour more and more AI tools without understanding what is happening, looking for quick fixes and shortcuts to stellar mixing results. Pro engineers will use AI to help train their hearing for any mix situation and to suggest new paths toward mixes that promote the talent on stage and the message of the artists. Pro engineers will always be as capable as the AI, because they are the pool of talent on which the AI is trained. Just as they learn from other pro engineers, they will learn from AI and use it in their craft, but they will not be crushed by the burden of AI servanthood. Still, as the concern has mounted across every other industry, AI will have its day of reckoning with the jobs of many audio engineers.


Check out these interesting and amazing AI apps for audio. The future is here!


Voice enhancer, making cheap mics sound like studio mics: https://podcast.adobe.com/enhance

Fantastic AI tool to remove echoes and artifacts in recordings: https://crumplepop.com/

Auto mixing: https://www.roexaudio.com/

Mastering: https://www.bandlab.com/mastering?lang=en

Mastering: https://www.landr.com/en/

Mastering: https://www.unchainedmusic.io/aimixing

AI Audio morphing and FX AI tools: https://neutone.ai/fx

Synth Sampler: https://soniccharge.com/synplant