MusicAndAI

ThoughtStorms Wiki

Context : MusicalStuff, ArtificialIntelligence, TechnologyAndMusic

Quora Answer : Can AI eventually make music better than human beings?

Oct 4, 2019

The most interesting thing about AIs is that they challenge our conception of what "good" or "better" music is.

However we theorize music, whatever structural elements we attribute to it, sooner or later someone is going to put those theories into an algorithm, and computers will be able to generate more of that stuff.

And if we can't analyse the music and build a theory and an algorithm, then machine learning systems will almost certainly be able to do that for us. They'll listen to any class of music that we point them at, and discover the theory or the models to build more of it.

But will that make the music worth listening to?

It raises the question of why we listen to music. What's it for?

Why do we want to listen to Wagner and The Specials and Snarky Puppy? What is it about that series of notes, played in those timbres by those people, that makes us particularly value them?

What I think we are already discovering is that music is not about the notes or the sound-waves at all. It's already about the people and the story behind it. We already like pop songs that follow the most ordinary musical formulae you can imagine. Formulae that computers have been able to compose since the 1960s. Most pop music is just I, V, IV and vi chords.
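
To make that concrete, here's a minimal Python sketch (all names invented purely for illustration) of the famous I-V-vi-IV ordering of exactly those four chords, as MIDI note numbers:

```python
# A minimal sketch of how mechanically simple the pop formula is:
# build the I-V-vi-IV progression as MIDI note numbers in any major key.
# All names here are illustrative, not from any real library.

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def triad(key_root, degree, minor=False):
    """Stack a root, third and fifth on a scale degree (0-indexed)."""
    root = key_root + MAJOR_SCALE[degree]
    third = root + (3 if minor else 4)
    return [root, third, root + 7]

def four_chord_song(key_root=60):  # 60 = middle C
    # I, V, vi (minor), IV -- the ordering behind countless pop hits
    return [triad(key_root, 0),
            triad(key_root, 4),
            triad(key_root, 5, minor=True),
            triad(key_root, 3)]

print(four_chord_song())
# [[60, 64, 67], [67, 71, 74], [69, 72, 76], [65, 69, 72]]
```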

And increasingly we don't really care if our singers can't "sing" (as in hit notes), or that they need the help of autotune. Or that their backing bands are made in Fruity Loops. Or that the producer is using off-the-shelf loops and presets (with a few personalizing tweaks). Or that musical theory is available in a plugin. We won't care as neural networks, trained on millions of examples of a particular genre, find their way into the plugins in our DAWs.

What we'll care about is the story of the people behind it.

And we'll realize that that is ALWAYS what we cared about. Not the notes. Not the complexity of the harmonic series. Not Ling Ling's 40 hours of practice a day to be able to hit exactly the right tones at exactly the right time.

We care about the story and the people behind the music. If the music is technically a machine going "Bang Bang Bang Bang" then we'll start to ask for the story of the electronic engineers that designed the machine. And the programmer who decided that it should be Bang Bang Bang Bang and not Bang Bang Bong Bang. Or the DJs who played that sound in the Muzic Box to create a party vibe that conquered the world.

Music is always about the people behind it and their story. Machines will take over all the hard repetitive body contortions. Machines might take over the selection of notes and sequences and timbres.

But machines won't take over the narrative spinning. Machines won't have stories. And won't tell stories. Somewhere, we'll want a human about whom we can tell stories. A human we can admire. A human we can aspire to be like.

Increasingly, those stories will feature machines as participants. Instead of the story being about Ling Ling practising for 40 hours a day and therefore being able to tickle various taut catgut strings in a complex but controlled manner, the stories will be about Deadmau5's ability to tickle the knobs on his modular rack to create such a warm and wondrous sound.

And soon, the stories will be about the way that X's perfect sense of tune selection means they fed the recurrent neural networks just the right diet of musical history, in just the right order and proportion, so that the network generates generation-changing masterpieces.

And the billions of other AI-generated sequences of bits in MP3 format will be totally ignored.

Quora Answer : Would you say that AI is slowly killing the art of music creation? Software that can compose, produce & mix music is getting very popular among aspiring "musicians", and will most likely advance a lot in the coming years.

Oct 22, 2019

Well, in one sense, machines have been putting human musicians out of work since the invention of the steam-organ.

Then recording technology allowed us to dispense with live musicians on a mass scale.

No, AI is not "killing" the art of music creation. But as with recording technology, electrical amplification (the thing that made "rock music" possible by letting a quartet of musicians fill huge stadiums with sound), sampling, and digital sound manipulation, AI will change music.

Music after AI won't be the same as music before it.

Just as music after electrical amplification could never be the same as music before it.

You can label that "death" if you like, but most people, and certainly the happy listeners of music in the future, won't agree with you.

Quora Answer : Music: What are the biggest obstacles in procedurally generated music?

Nov 11, 2015

Obstacles to what? There are no obstacles to it being generated. That's very easy.

I guess you mean "obstacles" to it being accepted. I'd suggest that the main obstacle is that it hasn't passed through any human filter for being pleasurable.

There are awesome musicians out there using electronics and computers. But the main point is that between the computer, and the track being put on the internet and you being subjected to it, stands the composer / producer, who listens to what's being produced and then uses some kind of musical judgement to decide whether to pass it through to other people. The composer plays around with the machines until she / he likes what is being produced, and THEN presents it to you.

If you remove that stage of filtering, you are presenting an indifferent or potentially hostile audience with something that you have no guarantee any human will enjoy. Obviously the possibility of failure is significant. And you have to remember that audience attention is a very scarce resource. Listeners have relatively little time to spend evaluating unfamiliar music. And there's an awful lot of music available. Few people want to be the guinea-pigs trying out music that literally no-one else has ever given a "like" to.

OK. Now, I suspect that my answer so far isn't what you wanted, because it isn't addressing what you meant. I get that. And I'm now going to make the case that it IS, actually, the answer that you needed.

Because ... you are probably thinking something like this :

"When I said procedurally generated, I obviously meant that we'd fold the RULES for what people like (eg. harmonic theory etc.) INTO the procedure."

That's what most research into procedural generation does. It tries to identify rules that correspond to what people like, or that conjure up a particular atmosphere, and then makes them part of the procedure.

The problem is that our understanding of these rules is running decades, if not centuries, behind people's actual tastes. What a great many people are listening to today is extremely sophisticated (subtle, complex and rich in references) about timbre. Somewhat sophisticated (subtle and rich in references) about rhythm. And pretty simplistic about harmony and melody. As long as something is basically harmonic rather than aharmonic, people are happy. It just has to be one chord, or an alternation between two chords, and people will listen to it for hours. But people are very fussy about timbre and rhythm. Get those wrong and people know immediately. They know when sounds are trite, lame, boring etc. They know when rhythms make them move or make them sit in the corner. When a sound is current or overused or retro.

Now, to the best of my knowledge, there is almost no research into procedurally generating these elements. Because there's very little theory about what makes these elements compelling. Theorists have very little idea why people like guitar shredding and dubstep wobbles and what makes a good, rather than a so-so, drop. Why did that particular way of twiddling the knobs of a TB-303 become a worldwide phenomenon? Why the TB-303 and not a similarish monosynth of that era? Our theory of "great sounds" is ... I won't say non-existent because someone, somewhere must be thinking about this. But it's not well developed at all.

Now most "procedurally generated" music you hear - say in video-games - works like this : a composer chooses all the elements ... the timbre of the instruments, the rhythmic matrix etc. etc. that fit different scenes. And then the "procedural generation" just toggles them in or out, or, at most, widdles around choosing random notes or chords squashed within the harmonic template. In other words, its relegated to the least important, least difficult and least interesting (to contemporary listeners) part of the music. And the majority of the music, certainly the most important parts of the music : the rhythmic matrix and the sound-design, is just composed up-front. In other words, they aren't really doing procedurally generated music at all.
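
If it helps to see how thin that "procedural" layer really is, here's a toy sketch of the scheme just described, with entirely hypothetical stem names and values:

```python
import random

# Illustrative sketch of the scheme described above: the composer authors
# everything important up-front; the "procedure" merely toggles stems and
# picks notes constrained to a pre-chosen scale. All names are hypothetical.

STEMS = {"strings_pad": "calm", "war_drums": "combat", "flute_lead": "calm"}
A_MINOR = [57, 59, 60, 62, 64, 65, 67]  # the pre-chosen harmonic template

def active_stems(scene):
    """Toggle pre-composed layers in or out based on the game scene."""
    return [name for name, mood in STEMS.items() if mood == scene]

def noodle(bars=4, notes_per_bar=4):
    """'Generate' a melody: random notes squashed into the scale."""
    return [random.choice(A_MINOR) for _ in range(bars * notes_per_bar)]

print(active_stems("combat"))  # ['war_drums']
print(noodle())                # e.g. [60, 64, 57, ...]
```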

So THIS is the big obstacle to procedurally generated music. We now have over 100 years of recorded music in our culture. 100 years of playing and manipulating recordings. 100 years of generating sound mechanically, electrically and electronically. Listeners and actual music producers are immersed in this, and have a very sophisticated understanding of sound, of mixing, of mastering. But our overt theory of making good vs. bad sound is pathetically under-developed. And without a good overt theory, there's no way that we can start to invent algorithms to procedurally generate what's important in music today. And so we don't. All we do is put into our algorithms the rules that we DO have ... which are largely 300-year-old harmonic theory that no-one cares about. It's not a problem of actually synthesizing the sounds ... computers can do real-time synthesis just fine. But you wouldn't let an algorithm loose, selecting the parameters for your synths in the hope that it'll come up with something people like ... not without putting a human filter in the way.

Postscript : Obviously if someone actually IS doing this somewhere, I'd love to know. Tell me I'm wrong here. Please!

Quora Answer : Is there a potential for a flood of AI generated music to one day saturate the market given that it can create fairly good music already?

Aug 25

Well, look.

Firstly, the market is saturated already. So worrying about that is a bit pointless. There's already far too much music in the world for anyone to listen to even a fraction of it during their lifetime.

Furthermore, the internet has destroyed the scarcity of recording media like vinyl discs that created the illusion that music itself was a scarce, valuable resource. Now we know it's an infinitely copyable abundant supply which can be piped to you very cheaply. Which is why Spotify has almost everything and artists make almost nothing.

Secondly, yes, AI can create "good music".

But that misses the point. It misses what humans value about music and why we do it.

Music is a communication channel between musicians and listeners and dancers, connecting them all to each other.

Music is something humans do to socialize together.

People want musicians as figures they can relate to, or aspire to be like or to date.

Music is a medium for musicians to tell a story with. And no one wants that to change.

Of course machines and AI will help us make music. They'll play the instruments for us. Advise us on good chord combinations. Keep time for us. Etc. We used to think that people wanted to go and see a guitar god who was fantastically brilliant at widdling his fingers around the frets of the guitar very fast. Then it turned out we were happy with a DJ who just synchronized records as they faded from one to another. Now people will happily go and cheer someone who presses buttons on a computer which does all the difficult synchronizing, while it plays music by someone else altogether.

And that's fine.

Because the DJ is still telling the story. And it's the story that the crowd come for. The music should be "good" (for some value of good). But the story is why they make the effort and become fans and pay money and wear t-shirts etc.

AI will help people make more and "better" music. For any value of "better" which you think sounds like a particularly skilled instrumentalist or singer. But the quality of the playing or singing was never the point. The real point of the music was the expression of the artist and the story they told that touched and inspired the listeners.

The audience will ALWAYS want a story about a human behind the machine. Even if the human just presses a few buttons in the studio. Or selects the training data for the neural network. The crowd will project their need for a story and a human to relate to, onto that activity.

When you think about it objectively, the actual actions of the musicians, twanging bits of taut cat-gut or puffing into a metal tube, were never meaningful activities in themselves. They always got their meaning from the sound they produced and the sensibility that this sound conveyed. But we already knew that the sensibility was a learned response, which is why, say, Chinese classical music sounded so different from European. We conventionally associate sensibility with particular sound structures. And we appreciate whatever activity gives rise to those sound structures.

No-one wants music without sensibility that a machine invents all by itself. So we won't get that. We'll get new generations of musicians who have "the sensibility" who just happen to express that through pressing buttons on the machines; and things will go on as normal.

But because there's already an abundance of supply, there won't be any money in doing that. People will do it because it's fun.

What money there is to be made is not from "music" but from a viciously competitive market for celebrity. In other words, from succeeding at being the successful and famous artist who a lot of people aspire to be like or relate to in other ways. But this celebrity really has nothing to do with music per se. We project it onto actors. Onto sports players. Onto rich people and minor aristocrats. Etc. Techniques for achieving celebrity are as much about your ability to crash the right parties or your Instagram game as they are about your musical theory.

So computers will do the musical theory for us. And we'll concentrate on Instagram or whatever the next social platform for showing off is.

And honestly, that's how it should be. Humans are about telling stories together. Not twanging cat guts.

Quora Answer : What is the future of computer-generated music?

Aug 4, 2013

A lot of popular music today is "computer generated", if you just mean sequenced and recorded on a computer with a lot of computer-generated synths and samples.

Then there's the thing that people think of as "computer generated" music, which is a kind of academic world that seems strangely retro in focussing on analyzing and resynthesizing the rules of harmony and melody that governed "serious" classical music over 100 years ago.

But today, much of the energy / interest in popular music is coming precisely from new sounds, timbres and rhythms. So one avenue that I think we're going to have to explore in the future is computer analysis and resynthesis of timbre.

Computers already let composers play with an extraordinary space of sound. But I'm not sure how much they're helping us come to understand that space.

Can a computer figure out why a particular guitar riff / set of pedal effects pumps you up, while a melodically comparable one falls flat? What makes the sound of someone like, say, Burial so different and so much more emotional than a similar minimal lo-fi looped garage beat?

Computers can compose something that sounds like Chopin. When will they start to compose something that sounds like John Cage (with all the intellectual and spiritual implications)?

Quora Answer : How can man distinguish music produced by AI from that made by man?

Oct 14, 2019

Eventually you won't be able to.

Maybe today reasonably knowledgeable listeners can. Casual listeners already can't.

If you love Chopin you are probably screaming.

But for me, it sounds pretty much like Chopin does. And even if I were knowledgeable enough about Chopin's works to know it wasn't him, I certainly couldn't tell it wasn't by a human student or lesser composer trying to sound like him.

Quora Answer : Will the profession of audio engineering be automated by AI in the near future, or will human audio engineers still be as needed as today?

Feb 11, 2020

Yes.

But as always when automation (and AI is just fancy automation) replaces humans, it won't do it by directly substituting exactly what the human currently does.

Instead, automation works through

a) replacing some tasks that humans do, thin slice by thin slice

b) reconfiguring the problem to be more suitable for automation.

So what I expect to happen ... at some point ... is that much "audio engineering" will move from the producer side of the musical equation to the consumer side.

It's impossible, when mixing and mastering music, to know exactly what kind of speakers the music is going to be played on. And what the audio characteristics of those speakers are. So skilled engineers mix and master something that sounds reasonable on most "typical" speakers. And if you want your music to be played in clubs with enormous sound systems you have to master specifically for that. Etc.

As the range of speakers and places we listen to music increases, this becomes increasingly difficult.

So eventually, I expect we'll get "smart speakers" which know their own profile in terms of frequency responses etc. And music will get published in new, "smarter" formats that basically specify "this frequency range should be this loud relative to that frequency range". And "the overall level should be this number of decibels", or "it should be this volume relative to the maximum volume of sound that you can make".

And then the speakers themselves, having knowledge of their own responses, and being fed the intention of the music, will do the final calibration / adjustments to make sure that they give the most "true" rendering of the music's intention.
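
As a rough sketch of what such a scheme might look like (nothing here corresponds to any real standard; the band names, values and "intent" format are all my assumptions):

```python
# Sketch of the idea, with an invented "intent" format: the producer
# ships relative loudness targets per frequency band, and the speaker,
# knowing its own measured response, computes the correction to apply.

# Producer side: how loud each band SHOULD be, in dB relative to the mix.
track_intent = {"sub": +3.0, "bass": 0.0, "mids": -2.0, "highs": -1.0}

# Speaker side: this unit's measured deviation from flat, per band, in dB.
speaker_profile = {"sub": -6.0, "bass": -1.0, "mids": +0.5, "highs": +2.0}

def calibration_gains(intent, profile):
    """Per-band gain the speaker must add so its output matches the intent."""
    return {band: intent[band] - profile[band] for band in intent}

print(calibration_gains(track_intent, speaker_profile))
# {'sub': 9.0, 'bass': 1.0, 'mids': -2.5, 'highs': -3.0}
```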

Once that happens, much of the work of mastering, and the art of the mastering engineer, will be redundant. Manufacturers will ensure that their speakers come with profiles that are as accurate as possible (the more accurate the profile, the higher quality the speaker will be perceived as). Really high-end smart speakers might even be able to monitor their own output through external microphones, and recalibrate themselves accordingly.

Meanwhile, DAWs will know the typical frequency characteristics that particular styles of music have. In fact you'll probably be able to get off-the-shelf packs of them. Do you want the Aerosmith or the Deadmau5 "sound"? Then here are the appropriate frequency definitions for drums, bass and guitars, or kick, hat, snare, sub and synth-pad tracks, predefined. Now when you compose you can apply those intentions to the appropriate tracks, and know that the smart speakers are going to do their best to honour your intention.
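
Continuing the same hypothetical sketch, a genre "pack" would then be little more than a bundle of per-track intents the DAW attaches on export (all values invented for illustration):

```python
# A hypothetical off-the-shelf "genre pack": per-track loudness intents,
# in dB per band, which the DAW attaches as metadata to the exported file.

deadmau5_style_pack = {        # invented values, for illustration only
    "kick":      {"sub": +4.0,  "bass": +2.0,  "mids": -3.0, "highs": -6.0},
    "hat":       {"sub": -24.0, "bass": -12.0, "mids": 0.0,  "highs": +3.0},
    "synth_pad": {"sub": -12.0, "bass": -2.0,  "mids": +2.0, "highs": 0.0},
}

def tag_tracks(project_tracks, pack):
    """Attach the pack's intent metadata to each matching track."""
    return {name: pack[name] for name in project_tracks if name in pack}

print(tag_tracks(["kick", "hat", "synth_pad"], deadmau5_style_pack))
```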

And when that happens, there'll be no difference between the average bedroom producer and the top professionals. Both will simply be loading intentions into the music files, and relying on the consumer's sound-system to do the right thing with them.

Quora Answer : What do you predict music will sound like 50 years from now?

Apr 25, 2020

Artificial Intelligence

AI is going to be increasingly important in music composition from here on out.

Not in the form of "get the AI to write music". I think that's going to be a minor issue. People LIKE writing music. We already do it for fun and give it away free. AIs are not going to take over that; there's no economic incentive.

But what there will be is a lot of AI assistants coming into DAWs.

You'll have plugins that, given a chord sequence, will improvise melodic lines on top of it. You'll have plugins that, given a MIDI melody, will give an increasingly plausible "human" performance on realistic-sounding instruments. I'm convinced we'll get neural style transfer which can transform simplistically recorded parts into close approximations of famous players. Want Miles Davis playing trumpet on your track? You'll soon be able to have a neural network trained on every Miles Davis recording which can make your trumpet part sound like him. Etc.
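
A toy illustration of the "improvise over a chord sequence" idea (a hand-rolled rule, purely to show the shape of the thing; real plugins would presumably use trained models rather than this):

```python
import random

# Toy melody improviser: for each chord, favour chord tones on strong
# beats and scale passing notes on weak beats. Entirely illustrative.

C_MAJOR_SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers

def improvise(chords, beats_per_chord=4):
    """chords: list of lists of MIDI chord tones, e.g. [[60, 64, 67], ...]."""
    melody = []
    for chord in chords:
        for beat in range(beats_per_chord):
            if beat % 2 == 0:                      # strong beat: chord tone
                melody.append(random.choice(chord))
            else:                                  # weak beat: passing note
                melody.append(random.choice(C_MAJOR_SCALE))
    return melody

print(improvise([[60, 64, 67], [67, 71, 74]]))  # C major, then G major
```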

In the next 10 - 20 years, the home musician will not only be able to have an entire sampled orchestra or rock band available on their laptop, but a huge amount of the wisdom and intuition of orchestral players and rock guitarists etc. captured in machine learning systems, so that your compositions will sound extraordinarily like humans playing them.

We'll also, of course, play with this technology to create various genre-chimeras. Covers of famous songs in different musical styles. Or new music which mixes widely diverse styles together. Today, if I use sampled strings to add an orchestral break to my prog-rock epic, most people will be able to tell the difference from the real thing. Tomorrow, I suspect only a handful of experts would be able to tell, by listening, that I didn't hire Mahler to write and orchestrate an extra middle-section of my tune. AI is going to make the music "fakes" that good.