AICreativity

ThoughtStorms Wiki

I want to push back on a particular kind of argument that you see quite often these days.

Let's put that argument at its strongest:

Machine Learning techniques by definition "interpolate" between the various examples they were trained on. Therefore all they can do is produce new examples within the "convex hull" of the data they've seen before.

They therefore can't create anything "novel" because, unlike humans, they can't "extrapolate" beyond this hull of existing training examples.

And the banality we see from generative AI reflects this limitation.

Now I think this argument is actually quite seductive and compelling. It's informed by some knowledge of how the algorithms work, and it superficially fits the evidence we see.

Nevertheless, I think it's wrong, or at least highly misleading, and not useful for reasoning about AI in many situations.

I'd make three points.

1) Intuitions about interpolation based on an analogy with 2D are misleading. When we see a shape drawn on a piece of graph paper and think about what it means to interpolate within it, we immediately have a kind of bird's-eye view of the whole space of possibilities. We grasp it almost at a glance, and the limitation of being confined within it feels strong.

But if we consider a much higher-dimensional space (and the space large networks learn is very high-dimensional), we should realize that the region within a shape is far larger and far less explored than it feels in 2D. In fact humans have explored only a vanishingly small proportion of the space within the hull.

We therefore underestimate, by orders of magnitude, the scope for true "novelty" within the space we can reach by "mere interpolation".
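A rough numerical sketch can make this concrete. It's only a toy illustration, not part of the original argument: random Gaussian vectors stand in for training examples, and a random convex combination stands in for "interpolation". Even so, it shows that a point built purely by interpolating, which by construction sits inside the convex hull, is still typically almost as far from its nearest training example as the training examples are from each other. In that sense the interior of the hull is full of points unlike anything already seen.

```python
# A minimal sketch under toy assumptions: random Gaussian vectors as
# "training examples", a random convex combination as "interpolation".
import numpy as np

rng = np.random.default_rng(0)
n_train, dim = 1_000, 10_000            # many examples, very high-dimensional space
train = rng.standard_normal((n_train, dim))

# Typical distance between two distinct training examples.
typical_gap = np.linalg.norm(train[0] - train[1:], axis=1).mean()

# A new point built purely by interpolation: a random convex combination,
# which by construction lies inside the convex hull of the data.
weights = rng.dirichlet(np.ones(n_train))
interpolated = weights @ train

# Distance from the interpolated point to its nearest training example.
nearest_gap = np.linalg.norm(train - interpolated, axis=1).min()

print(f"typical gap between training examples: {typical_gap:.0f}")
print(f"interpolated point to nearest example: {nearest_gap:.0f}")
# With these toy numbers the interpolated point sits roughly 70% as far
# from its nearest neighbour as the training examples sit from one another:
# "inside the hull" does not mean "close to something already seen".
```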

Our intuition that most novelty in art is reached by "extrapolation" is probably wrong.

2) Neural networks are necessarily an imperfect model of the human brain. But they are intended as a model of the brain.

This "mere interpolation" vs "human extrapolation" argument presumes that the human brain has some "extrapolating" capabilities which are beyond our machine learning algorithms.

This is a nice, vague, hand-wavy concept that appeals to artistic critics of generative AI. But it's not clear how much basis there is for it. Frankly, if neuroscientists and neuroscience-inspired AI researchers had already identified an essential "extrapolating" component of human cerebral functioning, they would already be trying to incorporate it into their models.

If they aren't, it's because either such a capacity hasn't been identified in the brain, or it hasn't been made concrete enough to work with.

Perhaps there'll be a revolution in neuroscience tomorrow that identifies one, but for now we shouldn't assume that some magical "extrapolation capacity" is an obvious, self-evident part of human creativity. (Though I guess assuming one exists is a kind of Penrose argument.)

Given my point 1, we can't just presume that the good examples of human creativity we see are the result of such an extrapolative mechanism. Genuine novelty (i.e. human creativity) can be found within the convex hull available to interpolation.

3) Finally, there's the claim that the banality of generative AI output is due to this interpolative limitation of machine learning algorithms.

This presumes that humans aren't selecting for banality themselves, through their taste, feedback, fine-tuning, etc.

Maybe this is "pure" human taste. Maybe it's human taste corrupted by capitalism. In a sense it doesn't matter. What seems likely to me is that this human selection is a far stronger constraint on the outputs we are seeing from generative AI than any actual limitation of "interpolative" algorithms.

In other words, the banality of generative AI output is not necessarily evidence of limitations in the algorithms.
