ComputerAidedComposing

ThoughtStorms Wiki

Context : ComposingMusic

MusicLM : https://google-research.github.io/seanet/musiclm/examples/

https://colab.research.google.com/github/Harmonai-org/sample-generator/blob/main/Dance_Diffusion.ipynb

LinkBin

SemillaAI

https://www.musicradar.com/news/aimi-interview

https://www.musicradar.com/news/10-tracks-artificial-intelligence

Humtap

Chord sequences in CodePen

https://codepen.io/jakealbaugh/full/qNrZyw

Bookmarked 2020-10-18T14:42:06.237783: https://aeon.co/amp/essays/how-social-and-physical-technologies-collaborate-to-create?__twitter_impression=true

Bookmarked 2020-10-18T14:46:31.533417: http://m.nautil.us/issue/21/information/how-i-taught-my-computer-to-write-its-own-music?utm_source=ticker

Bookmarked 2020-10-18T21:11:10.960585: http://benjamin.kuperberg.fr/chataigne/en

Quora Answer : What does the growing trend of AI musicians mean for the music industry and especially session players?

Oct 25, 2019

Seriously?

What do you think?

Those live musical performers who weren't already decimated by the use of recorded music in public spaces, and the session musicians who hadn't already been replaced by computer sequencing and sample packs, are going to find themselves squeezed even further.

Of course, people still like watching and listening to live musicians. And there will always be a market for a few of them whom people are willing to pay a premium to see.

But most music is just going to be made by increasingly "intelligent" computers. (And by that, I mean neural network "style transfers" between famous recorded musicians and sequenced musical lines ... so that 99.99% of listeners, including musicians, won't be able to tell that it's a computer)

I'd hope / assume that no-one becomes a session player today thinking they'll make any money from it. If you aren't making music primarily for love, then you are already making a big mistake.

That's going to continue.

On the bright side, those same very poor musicians barely scratching a living from session playing are going to be able to go home and use their skills to compose fantastic music, with the ability for their personal laptop to sound like the greatest orchestras and performers.

Quora Answer : Survey: Why do you use generative music software as a composer or someone who can already write music?

Dec 27, 2016

Well, in my case I can only "write music" like a 7-year-old writes "essays". I know musical notation. I know the simplest of harmonic theory and a few heuristics. But I can't look at a score and hear it in my head. Or compose top-down from some high-level structure I've concocted in my head.

Choosing the next note in a sequence is usually the result of widdling-around on a keyboard or with a mouse until I hear something I like. I have no fluency in inventing music the way someone who actually knows what they are doing would write it.

Right now, I'm playing a lot with Sonic Pi.

So a couple of things I like :

Being able to choose notes randomly. Yes, this "noise" soon becomes wearisome ... in that your music has a kind of samey random widdling-around quality. At the same time, what's interesting is to start with this samey widdling around, and use it as a platform to learn more about larger-scale harmonic movement. So I can say "play me 16 seconds of this bassline with a cloud of random notes on top, then shift it up by a fifth, then down by a seventh" or whatever. Then take that whole chord sequence and transpose it into a different key after 1 minute. Etc.

Being able to construct these higher-level harmonic developments with very concise code. With a programming language like Sonic Pi you're literally just zipping together multiple rings : a ring of chords, a ring of keys, a ring of different rhythmic patterns, a ring of dynamic progressions etc. With a few lines of code, you can sketch out a large-scale structure. The next challenge is figuring out ways of infilling the structure with more subtlety and interest than just "clouds of random notes". But as a programmer turned artist, that challenge itself is interesting to me. Programming is all about expressing fiendish complexity and detail as concisely and elegantly as possible by finding the most powerful abstractions. And music is a good place to explore that. (There's a toy sketch of this ring-zipping idea at the end of this answer.)

Another thing I like about Sonic Pi is that it combines things like melody and harmony with studio / sound techniques like synth parameters and chains of effects within a fairly consistent world. You can tweak synth parameters and create and destroy effects within the same programmatic musical score as specifying notes and chords. That uniformity allows for more interplay and crossover between the logics of harmony and logics of timbre.
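To make the "zipping rings" point above more concrete, here's a toy sketch in plain Python (my own illustration, not Sonic Pi code and not from the original answer): it cycles small rings of tonics, chord shapes and phrase lengths against each other, and fills each phrase with a "cloud" of random notes drawn from the current chord.

```python
# Toy sketch of "zipping rings": cycle short rings of tonics, chord shapes
# and phrase lengths against each other to lay out a larger structure,
# then fill each phrase with a cloud of random chord tones.
# Note numbers are MIDI pitches; nothing here actually makes sound.
from itertools import cycle, islice
import random

tonics = cycle([60, 65, 67])                        # ring of keys: C, F, G
shapes = cycle([[0, 4, 7], [0, 3, 7], [0, 5, 7]])   # major, minor, sus4 shapes
lengths = cycle([4, 2, 2, 8])                       # ring of phrase lengths in beats

events = []
beat = 0
for tonic, shape, length in islice(zip(tonics, shapes, lengths), 16):
    for b in range(length):                         # one random chord tone per beat
        pitch = tonic + random.choice(shape) + 12 * random.choice([0, 1])
        events.append((beat + b, pitch))
    beat += length

for t, p in events[:12]:                            # peek at the first few events
    print(f"beat {t:3d}: MIDI note {p}")
```

In Sonic Pi itself the same idea is even terser, because rings, .tick and live_loop are built into the language.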

Quora Answer : Could a musical software compose coherent music just from a random collection of sounds on the Internet?

Jul 21, 2020

Of course.

How much you'd want to listen to it depends on your taste and the particular algorithms that went into assembling it.

But it's technically almost trivial these days. Have some kind of crawler run through YouTube, downloading videos and converting them to audio.

Then have it chop out some sounds according to some criterion (eg. look for the transients that sound like an attack).

Then process them according to an algorithm, and arrange them according to another algorithm.

But like I say, it depends a LOT on the algorithms.

Do you decide to force all the sounds through autotune so they are in a specific key / scale ... and therefore "in tune"? Do you try to arrange them into a regular grid with a pulse to make rhythms? Do you add rhythmic constraints? Extra melodic constraints? Or do you prefer not to ... to leave the original pitch of the samples, keep the music "atonal", and allow whatever rhythms the original sounds imply?

Today it's just a question of fine-tuning the parameters of your algorithms to have something like this sound anywhere from a traditional "cacophony" type of music like Varèse's Ionisation through to the smoothest easy-listening jazz.
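For what it's worth, the middle steps of that pipeline (chop at transients, arrange on a pulse grid) are easy to sketch. This is a minimal, hedged illustration using librosa and soundfile, assuming a hypothetical downloads/ folder of already-crawled audio; it skips the crawler, the autotune step and any cleverer arrangement algorithm.

```python
# Minimal sketch of the "chop at transients, arrange on a grid" steps,
# assuming a hypothetical downloads/ folder of already-crawled audio.
# The crawler, autotune and smarter arrangement logic are left out.
import glob
import random
import numpy as np
import librosa
import soundfile as sf

SR = 22050
BPM = 120
step = int(SR * 60 / BPM / 2)            # one grid slot per eighth note

slices = []
for path in glob.glob("downloads/*.wav"):
    y, _ = librosa.load(path, sr=SR, mono=True)
    # "Look for transients that sound like an attack": onset detection.
    for start in librosa.onset.onset_detect(y=y, sr=SR, units="samples"):
        slices.append(y[start:start + step])

# Arrange the slices on a regular grid to impose a pulse.
out = np.zeros(step * 64, dtype=np.float32)
for i in range(64):
    s = random.choice(slices)
    out[i * step : i * step + len(s)] += s

sf.write("collage.wav", out / (np.max(np.abs(out)) + 1e-9), SR)
```

Swap the random.choice for something key-aware or loudness-aware and you start sliding along the Varèse-to-easy-listening axis described above.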

Quora Answer : Do you think AI's will be able to compose music as well as a professional composer within 15 years?

Mar 1, 2018

Yes and no.

They will certainly be trained to reproduce huge amounts of music that is utterly plausibly "in the style of" existing composers.

Both by composing notes :

The Endless Traditional Music Session

And by resynthesizing actual audio :

DADABOTS

What AIs won't do is invent new ideas. Or rather, when they invent new ideas, you'll still need humans to decide if the new ideas are any good or not.

A human composer will have the advantage here ... a good composer will have a taste or good intuition as to whether a new idea "works" ... or not. And is, effectively, willing to stake their reputation on it by releasing it.

In fact this is the general rule for the coming AI wave.

Machines will do more of the thinking, but people will be paid to oversee, vouch for and take responsibility for what the machines come up with.

That will be as true in music as everywhere else. You will have plugins in your DAW that can generate jazz solos and beat workouts as impressive as any human has ever played. But you will still have to be the one who is willing to take it to a record company, put it under your "brand" and say "this one's worth it".

I hope what is going to happen is that the AIs will largely be built into the instruments (particularly easy in the form of DAW plugins). And will therefore mainly become collaborators for your composition.

I confidently predict that in less than 10 years, in FL Studio 25 or Ableton 15, you'll be able to add a "Trumpet plugin" that you'll be able to instruct "here are the chords, give me an 8-bar solo in the style of 1968 Miles Davis". And it will produce something that 99% of human listeners today wouldn't even suspect wasn't being played by a real disciple of Miles on a real trumpet.

But of 1,000 people who own this plugin, 300 will just use it as a technical demonstration, saying "listen to how this sounds like Miles Davis". Another 500 people will use it inappropriately, in utterly pedestrian and boring settings that no-one ever needs to hear. 199 people will use it well, adding a touch to their original compositions, and sell maybe 20 copies of their album on BandCamp. And one person will find a use for it so strange and original, and yet so "right", that Miles himself would have applauded.

That's the guy who is still out-composing the machine.

Related :

Phil Jones (He / Him)'s answer to In genres like pop and dubstep, instruments have less of a role. Is technology the future of music?

Phil Jones (He / Him)'s answer to What are the various future trends in music?

Phil Jones (He / Him)'s answer to How do composers write intellectual electronic music?

Quora Answer : What is the new technology in the music industry?

Oct 21, 2019

The big thing that's happening today is AI.

In the near future you are going to find yourself spoiled for choice with dozens of new VST plugins which have AI or neural nets of some kind inside them.

There'll be neural nets that synthesize plausible instrument sounds (basically doing a neural "style transfer" from recordings of real violins, to your MIDI / synthesized violin part).

And "style transfer" of actual musicians' styles. Want your sax part played in the style of Charlie Parker or Eric Dolphy? Neural nets can be trained on those artists and apply that style to your MIDI notes.

Then there'll be "intelligent mixers" listening to all your tracks and figuring out how to adjust volumes to fit them all in. And probably intelligent mastering too.

There'll be VSTs to compose music ... you give it the chord sequence, it gives you all the notes back. Perhaps from a network that's been trained on the complete works of half a dozen very famous 18th century composers.

And probably dozens of other applications for AI in music which we can't even imagine.

Keep an eye on :

Flow Machines : https://www.flow-machines.com/

MIMIC : Musically Intelligent Machines Interacting Creatively ( https://research.gold.ac.uk/26616/3/ICMC2018-MG-MYK-LM-MZ-CK-CAMERA-READY.pdf )

which use Magenta

AIVA : The AI composing emotional soundtrack music

See also :

Quora Answer : How are the synthesizer and DAW industries affected by machine learning and AI?

Aug 16, 2020

Have a listen to the OpenAI Jukebox.

This technology will eventually be coming to your DAW.

Maybe only in a couple of years, maybe only with the help of dedicated TPU hardware in your soundcard. But it will come.

Neural networks trained on professional musicians' singing and playing style, available to incorporate into your own pieces.

If you think about this, it's the obvious future of the large expensive sample libraries who already have orchestras on the payroll and lots of data. Why have to program "switches" to switch between several different violin samples when AI can just infer (or even synthesize) the appropriate sound based on its context in the piece?

I'd expect record labels who own a large catalogue of multitrack master tapes of famous older musicians to start mining them to sell neural networks trained on them.

Quora Answer : What trends do current music producers see in music production?

Jan 28, 2020

There are trends at different scales. For example, something might be fashionable for a year or so. Or 10 years. Or 20.

Busy Works Beats, who teaches trap production on YouTube, suggests that simple guitar lines and drill beats are becoming the new fashion in hip-hop. He may well be right. But these are trends that are likely to last a year or two. They can dominate now, but superficially.

A couple of years ago, distorted kicks were big. I don't know if they still are. I can believe that next year you'll be hearing a hell of a lot of acoustic guitar lines and drill rhythms in hip hop, and then they'll die down again.

Then again, in hip-hop you have things like triplet-flow (Migos flow). And "Scotch Snaps". This is a generational trend of the kind that lasts about 10 years. 10 years ago, you heard isolated examples. Now it's everywhere. In 5 years' time I'd expect people to have become bored of it and moved on to new flows. Ideas get saturated.

In the longer term ... I have a rough, technological model of popular music. I think it works as a description of what happened over the last 60 years or so, though I'm not sure how predictive it can be.

It divides recent popular music into three epochs :

the epoch of electrical amplification (ie. the new sounds created by electrical amplification provide the excitement and the logic of the music's evolution). This is basically the era of rock. Electrical amplification drives the evolution of the sound of the electric guitar, through amplification, distortion and other effects. It allows singers to compete with louder instruments, and rewards new styles of singing. Amplification also enables the trend towards large-scale concerts, stadium spectaculars, festivals etc. Most of the evolution of popular music between 1950 and 1980 is driven by electrical amplification.

Which then hits a limit in how loud and big concerts can be, and in how much more you can do with various kinds of distortion and loudness.

the second epoch, 80s - early 2000s is the epoch of electronic control : sequencing, drum-machines, arpeggiators, synths, computers etc. This replaces "loudness" and "distortion" as the dominant virtues in music, with "precision" and "endurance". Everything from disco to house and techno to hip-hop follows this logic. Musics like funk, which are built on precision and endurance, even when played by humans, start to be admired even more in this age.

But electronic control hits its limit because you can only have so much speed, duration and fine-grained control before musics built on them become as boring and clichéd as musics built on loudness and distortion.

the third epoch is the age of digital communication and transformation : which, surprisingly, has made the human voice the prominent instrument. Today, in an era of social media and mediated intimacy, when everyone is expressing themselves through tweets and selfies, the voice has become the main channel of musical expression. People complain a lot about autotune, but autotune isn't what they think it is. Or rather, amplification was "cheating" because it let people who couldn't sing loudly enough, or play interestingly enough, make a big enough noise that they held people's attention and interest. Sequencing is obviously "cheating" because you aren't "playing your instruments" if you are only pressing buttons. And now autotune is "cheating" because it keeps you in tune. But autotune and other vocal effects are not replacing your voice or your singing. They are "augmenting" it. The human identity is still apparent even though the voice is increasingly decorated and elaborated and multiplied. It's striking how much pop music has stripped out other instruments from the frequency band that the voice wants to occupy, stripping away the kinds of instruments like guitars and strings that compete with it. And filling that bandwidth with more backing vocals, or samples of vocal chops. Or reverb on the voice. Other instruments are reduced to short percussive sounds and relegated to bass or very high frequencies, to leave more space for the voice.

But we are probably hitting the maximum tolerance for autotune and vocal science etc. And identity is dominating everything in music. But that won't be the end.

Technologically, the next epoch should be driven by artificial intelligence. Which is now coming to our music-making tools. We'll have neural-network audio processing. Neural network partners for composing. Neural-network based synthesis. Etc. We'll probably be able to have the computer resynthesize plausible orchestrations simply from a musician whistling a basic melody. The next wave is AI augmented musicians. But it's still hard to predict exactly how that will sound.

Quora Answer : What are some ideas for audio signal processing side projects?

Jun 7, 2019

Here's one I've been thinking about recently.

There are now neural networks which can turn low resolution images back into high resolution images : eg. Image Super-Resolution with Deep Convolutional Neural Network

Do we yet have networks that can take old music recordings off tape (with the associated degradation in sound) and clean them up, turn them back into pristine originals?
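I don't know of a standard tool for this, but the basic shape of such a network is easy to sketch. Below is a toy PyTorch illustration (my own assumption about the approach, not a reference to any existing model): a small 1-D convolutional net trained to undo a crude, simulated "tape" degradation. A real restoration system would need realistic degradation models, far more capacity, and real music rather than random noise as training data.

```python
# Toy sketch: train a small 1-D convolutional net to undo a simulated
# "tape" degradation (smearing plus hiss). Random noise stands in for
# real audio; everything here is illustrative, not a working restorer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Restorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=31, padding=15), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=31, padding=15), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=31, padding=15),
        )

    def forward(self, x):
        return self.net(x)

def degrade(clean):
    # Crude stand-in for tape damage: smear with a moving average, add hiss.
    smeared = F.avg_pool1d(clean, kernel_size=9, stride=1, padding=4)
    return smeared + 0.05 * torch.randn_like(smeared)

model = Restorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    clean = torch.randn(8, 1, 4096)      # stand-in for clean music excerpts
    loss = F.l1_loss(model(degrade(clean)), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```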

Another idea ... I have a lot of problems composing (and mixing) on headphones, then playing the results back on other people's speakers and finding that the recordings sound awful.

So ... why not a neural network or similar which can learn the behaviour of a speaker, and then automatically remix music to sound good on it?

It ought to be possible, for any speaker, to have a profile of that speaker's response. Analyse music that has been hand-mixed to "sound good" on that speaker. Then analyse any other piece of music, see where the response is going to sound bad, and fine-grain equalize it accordingly to sound good on this particular speaker.
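As a crude first pass at that, here's a hedged sketch (my own, with hypothetical file names) of the "analyse reference mixes, then equalize towards them" step using librosa: it averages the long-term spectrum of mixes known to sound good on the speaker, compares a new mix against it, and applies per-bin corrective gains, clamped to ±12 dB.

```python
# Sketch of "profile matching" EQ: estimate the average long-term spectrum of
# mixes that are known to sound good on a given speaker, compare a new mix
# against it, and apply per-bin corrective gains (clamped to +/-12 dB).
# File names are hypothetical placeholders.
import numpy as np
import librosa
import soundfile as sf

SR = 44100
N_FFT = 4096

def long_term_spectrum(path):
    y, _ = librosa.load(path, sr=SR, mono=True)
    return np.abs(librosa.stft(y, n_fft=N_FFT)).mean(axis=1)  # mean magnitude per bin

# Reference mixes that already "sound good" on this particular speaker.
reference = np.mean(
    [long_term_spectrum(p) for p in ["ref_mix_1.wav", "ref_mix_2.wav"]], axis=0)

mine = long_term_spectrum("my_mix.wav")
gains = np.clip(reference / (mine + 1e-9), 10 ** (-12 / 20), 10 ** (12 / 20))

# Apply the static correction via the STFT and resynthesize.
y, _ = librosa.load("my_mix.wav", sr=SR, mono=True)
S = librosa.stft(y, n_fft=N_FFT)
y_eq = librosa.istft(S * gains[:, None], length=len(y))
sf.write("my_mix_eq.wav", y_eq / (np.max(np.abs(y_eq)) + 1e-9), SR)
```

Learning the speaker's actual acoustic response (eg. from a measurement microphone) would refine this further, as would making the correction dynamic rather than one static curve.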

Eventually I'd imagine this being built into speakers themselves.