AIAndAudioEngineering

ThoughtStorms Wiki

Quora Answer : Will the profession of audio engineering be automated by AI in the near future, or will human audio engineers still be as needed as today?

Feb 11, 2020

Yes.

But as always when automation (and AI is just fancy automation) replaces humans, it won't do so by directly substituting for exactly what the human currently does.

Instead, automation works through:

a) replacing some tasks that humans do, thin slice by thin slice, and

b) reconfiguring the problem to be more suitable for automation.

So what I expect to happen ... at some point ... is that much "audio engineering" will move from the producer side of the musical equation to the consumer side.

It's impossible, when mixing and mastering music, to know exactly what kind of speakers the music is going to be played on, or what the audio characteristics of those speakers are. So skilled engineers mix and master something that sounds reasonable on most "typical" speakers. And if you want your music to be played in clubs with enormous sound systems, you have to master specifically for that. Etc.

As the range of speakers and places where we listen to music grows, this becomes increasingly difficult.

So eventually, I expect we'll get "smart speakers" which know their own profile in terms of frequency response etc. And music will get published in new, "smarter" formats that basically specify "this frequency range should be this loud relative to that frequency range". And "the overall level should be this number of decibels" or "it should be this loud relative to the maximum volume of sound that you can make".
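None of this exists yet, of course. But purely to make the idea concrete, here's a rough sketch in Python of what such an "intent" format might contain. Every field name and unit here is invented for illustration; it's not a proposal for a real standard.

```python
# Hypothetical sketch of a publish-time "intent" description.
# No such standard exists; all names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class BandIntent:
    low_hz: float             # lower edge of the frequency band
    high_hz: float            # upper edge of the frequency band
    relative_level_db: float  # how loud this band should be, relative to a 0 dB reference band

@dataclass
class MixIntent:
    bands: list               # list of BandIntent
    overall_lufs: float       # target integrated loudness for the whole piece
    headroom_db: float        # how far below the playback system's maximum level to sit

# "The sub-bass should sit 3 dB above the reference band,
#  and the whole track should play back at around -14 LUFS."
example = MixIntent(
    bands=[
        BandIntent(20, 60, +3.0),       # sub-bass
        BandIntent(60, 250, 0.0),       # bass / low mids (reference band)
        BandIntent(250, 4000, -1.0),    # mids
        BandIntent(4000, 20000, -2.0),  # highs
    ],
    overall_lufs=-14.0,
    headroom_db=1.0,
)
```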

And then the speakers themselves, having knowledge of their own responses, and being fed the intention of the music, will do the final calibration and adjustment to make sure that they give the most "true" rendering of that intention.
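Again, purely as an illustrative sketch (reusing the made-up band intents from above, and assuming a speaker that stores its own measured per-band response), the "final calibration" could be as simple as subtracting the speaker's own deviation from the published target, within safe limits:

```python
# Hypothetical sketch: a speaker that knows its own per-band response
# computes a simple correction EQ to honour the published band intents.
# The "measured" values below are assumed, not read from any real device API.

def correction_gains(intent_db, measured_db, max_boost_db=6.0):
    """Per-band gain (in dB) the speaker should apply so that its output
    matches the intended relative levels, clamped to a safe boost range."""
    gains = []
    for target, measured in zip(intent_db, measured_db):
        gain = target - measured
        # Don't demand more boost than the drivers can deliver cleanly.
        gains.append(max(-max_boost_db, min(max_boost_db, gain)))
    return gains

intent = [+3.0, 0.0, -1.0, -2.0]           # per-band targets from the example intent above
measured = [-9.0, -1.0, 0.0, +1.0]         # a small speaker that rolls off in the sub-bass
print(correction_gains(intent, measured))  # [6.0, 1.0, -1.0, -3.0]
```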

Once that happens, much of the work of mastering, and the art of the mastering engineer, will be redundant. Manufacturers will ensure that their speakers come with profiles that are as accurate as possible (the more accurate the profile, the higher the perceived quality of the speaker). Really high-end smart speakers might even be able to monitor their own output through external microphones, and recalibrate themselves accordingly.

Meanwhile, DAWs will know the typical frequency characteristics that particular styles of music have. In fact, you'll probably be able to get off-the-shelf packs of them. Do you want the Aerosmith or the Deadmau5 "sound"? Then here are the appropriate frequency definitions for drums, bass and guitars, or kick, hat, snare, sub and synth-pad tracks, predefined. Now when you compose you can apply those intentions to the appropriate tracks, and know that the smart speakers are going to do their best to honour your intention.
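To make that concrete too: a "style pack" could be little more than a table mapping track roles to per-band levels, which the DAW attaches to each track when it exports the file. Everything below is invented for illustration; no DAW works this way today.

```python
# Hypothetical sketch: an off-the-shelf "style pack" as a mapping from
# track role to typical per-band levels (dB, relative to a reference band).
# Nothing here corresponds to a real DAW feature or file format.

STYLE_PACKS = {
    "club_edm": {
        "kick":      {"20-60 Hz": +4.0,  "60-250 Hz": +1.0, "250 Hz-4 kHz": -2.0,  "4-20 kHz": -6.0},
        "sub":       {"20-60 Hz": +6.0,  "60-250 Hz":  0.0, "250 Hz-4 kHz": -12.0, "4-20 kHz": -24.0},
        "synth_pad": {"20-60 Hz": -12.0, "60-250 Hz": -3.0, "250 Hz-4 kHz": +1.0,  "4-20 kHz": -1.0},
    },
}

def intents_for_project(style, track_roles):
    """Attach the chosen pack's band intents to each matching track role."""
    pack = STYLE_PACKS[style]
    return {role: pack[role] for role in track_roles if role in pack}

print(intents_for_project("club_edm", ["kick", "sub", "synth_pad"]))
```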

And when that happens, there'll be no difference between the average bedroom producer and the top professionals. Both will simply be loading intentions into the music files, and relying on the consumer's sound-system to do the right thing with them.
