ComputerAidedComposing

ThoughtStorms Wiki

Context : ComposingMusic

MusicLM : https://google-research.github.io/seanet/musiclm/examples/

https://colab.research.google.com/github/Harmonai-org/sample-generator/blob/main/Dance_Diffusion.ipynb

LinkBin

https://www.musicradar.com/news/aimi-interview

https://www.musicradar.com/news/10-tracks-artificial-intelligence

Humtap

Chord sequences in CodePen

https://codepen.io/jakealbaugh/full/qNrZyw

Bookmarked 2020-10-18T14:42:06.237783: https://aeon.co/amp/essays/how-social-and-physical-technologies-collaborate-to-create?__twitter_impression=true

Bookmarked 2020-10-18T14:46:31.533417: http://m.nautil.us/issue/21/information/how-i-taught-my-computer-to-write-its-own-music?utm_source=ticker

Bookmarked 2020-10-18T21:11:10.960585: http://benjamin.kuperberg.fr/chataigne/en

Quora Answer : What does the growing trend of AI musicians mean for the music industry and especially session players?

Oct 25, 2019

Seriously?

What do you think?

Those live musical performers who weren't already decimated by the use of recorded music in public spaces, and the session musicians who hadn't already been replaced by computer sequencing and sample packs, are going to find themselves squeezed even further.

Of course, people still like watching and listening to live musicians. And there will always be a market for a few of them that people are willing to pay a premium to see.

But most music is just going to be made by increasingly "intelligent" computers. (And by that, I mean neural network "style transfers" between famous recorded musicians and sequenced musical lines ... so that 99.99% of listeners, including musicians, won't be able to tell that it's a computer)

I'd hope / assume that no-one becomes a session player today thinking they'll make any money from it. If you aren't making music primarily for love, then you are already making a big mistake.

That's going to continue.

On the bright side, those same very poor musicians barely scratching a living from session playing are going to be able to go home and use their skills to compose fantastic music, with a personal laptop that can sound like the greatest orchestras and performers.

Quora Answer : Survey: Why do you use generative music software as a composer or someone who can already write music?

Dec 27, 2016

Well, in my case I can only "write music" like a 7 year old writes "essays". I know musical notation. I know the simplest of harmonic theory and a few heuristics. But I can't look at a score and hear it in my head. Or compose top-down from some high-level structure I've concocted in my head.

Choosing the next note in a sequence is usually the result of widdling-around on a keyboard or with a mouse until I hear something I like. I have no fluency in inventing music the way someone who actually knows what they are doing would write it.

Right now, I'm playing a lot with Sonic Pi.

So a couple of things I like :

Being able to choose notes randomly. Yes, this "noise" soon becomes wearisome ... in that your music has a kind of samey, random, widdling-around quality. At the same time, what's interesting is to start with this samey widdling around and use it as a platform to learn more about larger-scale harmonic movement. So I can say "play me 16 seconds of this bassline with a cloud of random notes on top, then shift it up by a fifth, then down by a seventh" or whatever. Then take that whole chord sequence and transpose it into a different key after 1 minute. Etc.
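
Something like this minimal Sonic Pi sketch gets at the idea (the roots, scale and note lengths here are arbitrary illustrative choices, not a transcription of anything above):

```
use_bpm 120

# each time round, the whole thing shifts to a new root: up a fifth, back, up a tone ...
roots = (ring :c2, :g2, :c2, :d2)

set :root, :c2

live_loop :bass do
  root = roots.tick
  set :root, root            # share the current root with the cloud loop
  16.times do
    play root, release: 0.3, amp: 0.8
    sleep 1
  end
end

live_loop :cloud do
  # a cloud of random notes from the scale built on the bass's current root
  play (scale get(:root), :minor_pentatonic).choose + 24, release: 0.2, amp: 0.4
  sleep 0.25
end
```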

Being able to construct these higher-level harmonic developments with very concise code. With a programming language like Sonic Pi you're literally just zipping together multiple rings : a ring of chords, a ring of keys, a ring of different rhythmic patterns, a ring of dynamic progressions etc. With a few lines of code, you can sketch out a large scale structure. The next challenge is figuring out ways of infilling the structure with more subtlety and interest than just "clouds of random notes". But as a programmer turned artist, that challenge itself is interesting to me. Programming is all about expressing fiendish complexity and detail as concisely and elegantly as possible by finding the most powerful abstractions. And music is a good place to explore that.
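
A sketch of the ring-zipping idea (again, the particular chords, rhythm and dynamics are placeholder choices; the point is how little code the large-scale structure takes):

```
use_bpm 90

chords  = (ring chord(:c3, :minor7), chord(:f3, :major7),
                chord(:d3, :minor7), chord(:g3, :minor7))
lengths = (ring 1, 0.5, 0.5, 1, 2)                # rhythmic pattern, in beats
amps    = (ring 0.4, 0.6, 0.8, 1.0, 0.8, 0.6)     # a crude dynamic progression

live_loop :structure do
  dur = lengths.tick(:l)
  play chords.tick(:c), amp: amps.tick(:a), release: dur
  sleep dur
end
```

Because each ring has its own length and its own named tick, they cycle at their own rates and drift in and out of phase, which already gives a bit of larger-scale variation for free.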

Another thing I like about Sonic Pi is that it combines things like melody and harmony with studio / sound techniques like synth parameters and chains of effects within a fairly consistent world. You can tweak synth parameters and create and destroy effects within the same programmatic musical score as specifying notes and chords. That uniformity allows for more interplay and crossover between the logics of harmony and logics of timbre.
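
For instance, something like this (the synth, effects and parameter values are all arbitrary):

```
use_synth :prophet          # a synth choice, with its own tweakable parameters

live_loop :pad do
  with_fx :reverb, room: 0.8 do
    with_fx :slicer, phase: 0.25, mix: 0.5 do
      # the filter cutoff sweeps up over successive bars, in the same score
      # that specifies the notes themselves
      play chord(:e3, :minor7), cutoff: (line 60, 110, steps: 8).tick,
           attack: 0.5, release: 2.5
    end
  end
  sleep 4
end
```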

Quora Answer : Could a musical software compose coherent music just from a random collection of sounds on the Internet?

Jul 21, 2020

Of course.

How much you'd want to listen to it depends on your taste and the particular algorithms that went into assembling it.

But it's technically almost trivial these days. Have some kind of crawler run through YouTube, downloading videos and converting them to an audio format.

Then have it chop out some sounds according to some criterion (e.g. look for the transients that sound like an attack).

Then process them according to an algorithm, and arrange them according to another algorithm.

But like I say, it depends a LOT on the algorithms.

Do you decide to force all the sounds through autotune so they are in a specific key / scale ... and therefore "in tune"? Do you try to arrange them into a regular grid with a pulse to make rhythms? Do you add rhythmic constraints? Extra melodic constraints? Or do you prefer not to ... to leave the original pitch of the samples, keep the music "atonal", and allow whatever rhythms the original sounds imply?

Today it's just a question of fine-tuning the parameters of your algorithms to have something like this sound anywhere from traditional "cacophony" music like Varèse's Ionisation through to the smoothest easy-listening jazz.
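
A rough Sonic Pi sketch of that kind of pipeline, assuming the crawling and downloading has already happened and left a folder of wav files at some hypothetical path (the scale, pulse and file count are arbitrary choices):

```
use_bpm 100

samps = "/home/me/found_sounds"     # hypothetical folder of scraped audio

live_loop :collage do
  sample samps, rrand_i(0, 49),                        # pick a random file from the folder
         onset: rrand_i(0, 3),                         # slice it at one of its detected transients
         rpitch: (scale 0, :minor_pentatonic).choose,  # crude "autotune": repitch onto one scale
         release: 0.5
  sleep 0.5                                            # force everything onto a regular pulse
end
```

Drop the rpitch: line and you get the "atonal" version; randomise the sleep and the regular grid goes away.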

Quora Answer : Do you think AI's will be able to compose music as well as a professional composer within 15 years?

Mar 1, 2018

Yes and no.

They will certainly be trained to reproduce huge amounts of music that is utterly plausibly "in the style of" existing composers.

Both by composing notes :

The Endless Traditional Music Session

And by resynthesizing actual audio :

DADABOTS

What AIs won't do is invent new ideas. Or rather, when they invent new ideas, you'll still need humans to decide if the new ideas are any good or not.

A human composer will have the advantage here ... a good composer will have the taste or intuition to judge whether a new idea "works" ... or not, and is, effectively, willing to stake their reputation on it by releasing it.

In fact this is the general rule for the coming AI wave.

Machines will do more of the thinking, but people will be paid to oversee, vouch for and take responsibility for what the machines come up with.

That will be as true in music as everywhere else. You will have plugins in your DAW that can generate jazz solos and beat workouts as impressive as any human has ever played. But you will still have to be the one who is willing to take it to a record company, put it under your "brand" and say "this one's worth it".

I hope what is going to happen is that the AIs will largely be built into the instruments (particularly easy in the form of DAW plugins), and will therefore mainly become collaborators in your composition.

I confidently predict that in less than 10 years, in FL Studio 25 or Ableton 15, you'll be able to add a "Trumpet plugin" that you can instruct: "here are the chords, give me an 8-bar solo in the style of 1968 Miles Davis". And it will produce something that 99% of human listeners today wouldn't even suspect wasn't being played by a real disciple of Miles on a real trumpet.

But of the 1000 people who own this plugin, 300 will just use it as a technical demonstration, saying "listen how this sounds like Miles Davis". Another 500 will use it inappropriately, in utterly pedestrian and boring settings that no-one ever needs to hear. 199 will use it well, adding a touch to their original compositions, and sell maybe 20 copies of their album on Bandcamp. And one person will find a use for it so strange and original, and yet so "right", that Miles himself would have applauded.

That's the guy who is still out-composing the machine.

Related :

Phil Jones (He / Him)'s answer to In genres like pop and dubstep, instruments have less of a role. Is technology the future of music?

Phil Jones (He / Him)'s answer to What are the various future trends in music?

Phil Jones (He / Him)'s answer to How do composers write intellectual electronic music?

See also :