AudioProtoplasm
ThoughtStorms Wiki
Context : MusicalStuff, TechnologyAndMusic
My term for our current wave of music technology, in which we are exploring the creative potential of having huge control over digitized audio. (ReadWith) AutoTune and AutoCroon
Previous waves that affected popular music were:
- audio recording (which, by capturing performances instead of scores, moved us away from the abstractions of "music theory" towards a focus on improvisation, the performer's "quirks" such as microtonal blue notes, swing etc., and so led to jazz)
- electrical amplification (which made guitars loud enough to play to crowds, dispensed with the need for large orchestras or big bands of brass instruments, and so gave us rock music)
- electronic control (sequencing, drum machines etc.)
- now audio protoplasm : autotune, warping, chopping, vocoding etc.
Quora Answer : If you're sampling a song in rap music and add a drum track to it, does the melody/riff have to be the same BPM as the drums?
Ultimately you'll want the sample to play in a "compatible" time to the drums.
It might be the same BPM. It might be half or double. Or a mixture of both. Or some weird ratio that just sounds good. 3:2 might work out OK.
But as others are pointing out, you don't have to worry much about the BPM of the original, because software can time-stretch and pitch-shift the sample to match the BPM you want your track to be.
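The arithmetic behind that matching is just a ratio of tempos. A minimal sketch (the function name is my own, not from any particular DAW or library):

```python
def stretch_factor(sample_bpm, target_bpm):
    """Ratio by which to time-stretch a sample so its tempo matches the target.
    A factor > 1.0 means play the sample faster; < 1.0 means play it slower."""
    return target_bpm / sample_bpm

# A 90 BPM loop dropped into a 120 BPM track needs to play 4/3 faster:
print(stretch_factor(90, 120))
# Or treat the loop as half-time against the drums and stretch to 60 BPM instead:
print(stretch_factor(90, 60))
```

The "half or double" trick in the answer above is just choosing `target_bpm` to be half or double the track tempo; a 3:2 feel corresponds to a factor of 1.5 or 0.667.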
A lot of producers will run a sample through a "half time" effect just to get a grainier, slowed-down sonority.
And beyond even that, producers these days are not just grabbing a sample and looping it. They are chopping it up, playing back the fragments with different timings, in different orders, with different effects. They are warping it to fit a different rhythmic template. Or filtering it to extract only specific frequencies etc.
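The chop-and-resequence idea can be sketched in a few lines: split an audio buffer into equal slices and play them back in a new order. This is a pure-Python illustration over a plain list of samples; in practice a producer would do this on a grid in a sampler or DAW:

```python
def chop(audio, n_slices):
    """Split an audio buffer (here just a list of samples) into n equal slices."""
    size = len(audio) // n_slices
    return [audio[i * size:(i + 1) * size] for i in range(n_slices)]

def resequence(slices, order):
    """Play the slices back in a new order; repeats and omissions are allowed."""
    out = []
    for i in order:
        out.extend(slices[i])
    return out

# An 8-sample "loop" chopped into 4 slices and re-ordered:
loop = [0, 1, 2, 3, 4, 5, 6, 7]
slices = chop(loop, 4)                    # [[0, 1], [2, 3], [4, 5], [6, 7]]
print(resequence(slices, [3, 1, 0, 0]))   # [6, 7, 2, 3, 0, 1, 0, 1]
```

Warping, filtering and effects are then further transformations applied per slice or to the resequenced result.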
I even took a sample the other week and fed it into NewTone pitch correction software and changed just a couple of notes to make a different melody from the original sample.
Today's software makes audio incredibly malleable, almost a kind of protoplasm. In today's hip-hop, samples are less "obvious" because they are far more transformed and blended into the new composition.
Quora Answer : What musical effects and sounds have become possible in the 2000s that weren't practical or possible in the 80s and 90s?
Watch Jamie Lidell recording his voice at different speeds, and then tweaking the formants to make tracks recorded fast and slow nevertheless sound "realistic".
I don't think that kind of fine-grained control over voices through formants was widely available or being used prior to the 2000s.
I keep saying that "voice is king" in contemporary popular music. And partly that's because we have so much flexibility for messing with it.
Yes, autotune is part of that. But it's not just about pitch-correction for bad singers. It's about augmenting the human voice with a range of musical superpowers.
Things like the SOMA The Pipe are signalling the way for how the voice can become a more multidimensional instrument.
And the combination of loopers + beatboxing is turning individuals and their voice into complete bands.
Finally, the ability to collect, collaborate and co-ordinate on the internet, combined with computers able to do massive multitracking, allows things like the Eric Whitacre virtual choir
and some of Jacob Collier's productions.
Transcluded from AutoTune
Quora Answer : Do you think the trend in noticeably auto-tuned vocals in popular music will soon become passe? Will they become a kind of time-stamp for this moment in studio production?
No.
I think it's here to stay.
Think of autotune as the equivalent of distortion for electric guitars.
Distortion started as a gimmick. Distortion was hated by musical purists in the same way vocal processing is hated by purists today. But autotune, just like guitar distortion, makes possible entirely new sonic worlds and genres of music.
In particular autotune is highly culturally relevant to us today, when we are starting to see, and to fear, the synthesis of humans and machines. Angst about autotune reflects our wider angst about ArtificialIntelligence, our worries that humans are "too dependent" on our machines. Fears that we are becoming incapable of acting independently of them. That we're merging into them.
What can illustrate that drama better than the human voice, the most fundamentally "human" part of music, allowing itself to become roboticised thanks to autotune and vocoding? And the computer sound becoming more human and animal-like thanks to formant filters etc?
Autotune is as fundamentally the sound of now as electric guitars were the sound of the 60s and 70s. And everything you can think of to say AGAINST autotune, people were saying against electric guitars back then.