AICultureWar

ThoughtStorms Wiki

I keep getting drawn to the AI Culture Wars.

I know they are going to achieve nothing but greater polarisation, bad feeling and general frustration and exhaustion all round. I know I should stay away.

And yet ... the pull is horribly strong. I'm argumentative by nature.

Anyway. I'm just going to state my three meta-positions on all the arguments around the question of "AI: good or bad?"

1) Personally, I find gen-AI very useful. It's helping me do a lot of things I want to do, and likely couldn't otherwise. If it isn't useful to you, that's fine. But it's kind of weird to read assertions that AI can't possibly work in theory, when I'm seeing it work in practice for me every day.

2) All the BAD arguments against AI are just philosophy of mind done by people who don't know anything about it. They are all variants on "obviously the difference between human intelligence and machines is XXX", without any justification or argument for why we should assume those differences, or why those differences should matter to the other thing they want to prove. There's a lot of "oh, see this architectural difference between brains and neural networks? Therefore there's this massive metaphysical difference between the mind of one and the other". Oh yeah? Why does one imply the other?

At least just read Maggie Boden's Philosophy of AI to see that people in the 1970s were having these debates more rigorously, with more discipline, and coming to deeper conclusions than the average (social) media take in 2025.

Actually, the only thing worse than bad philosophy-of-AI takes is the ones that start like philosophy of AI but suddenly sidetrack into screaming THEFT! PIRACY! at me, on the way to the conclusion "and therefore AI isn't creative". As though the author genuinely believes that "creativity" is defined by copyright lawyers.

3) All the GOOD arguments against AI are equally valid against information technology and computers in general. We've heard similar arguments against pocket calculators, the web, Google, video games, social media. Many of those arguments were quite valid in those cases. And are similarly valid today. I don't want to dismiss these arguments, because they ARE the good ones.

But I do want to highlight the similarity between these issues and many we've faced before and have at least accommodated ourselves to in the past.

Perhaps we shouldn't accommodate ourselves. Perhaps we should take a principled stand against all computing. Or against all computing that meets some other specific set of criteria. What I don't see is much reason that the line between acceptable and unacceptable computing should be "does it use a neural network?" or a "transformer" or "diffusion model" or "machine learning algorithm", or whether it falls under the umbrella term of "AI". There are many better ways to demarcate good uses of computing from bad uses of computing. Or good vs bad ownership structures. Or good vs bad degrees of freedom, privacy, accessibility, accountability, environmental destruction, death etc. But a demarcation based on whether it's AI or not, or, worse, based on a bad philosophy-of-mind take on AI, seems to me particularly unsatisfactory.
