MyFearsAboutAI

ThoughtStorms Wiki

Phil. You seem to be a bit of an AI booster and not too concerned about the issues many other right-thinking people have. What's up with that? Are you just a fanboi?

Kinda. Maybe. Dunno. I've been watching AI for a long time. Although I'm not working professionally in the area today, I built my first NeuralNetwork at work in 1992. And spent most of the 90s among academic AI and ALife researchers. So I'm familiar with most of the ideas and philosophical debates, even if not the latest techniques.

Like most people, I was shocked and blown away when ChatGPT brought LanguageModels to the mainstream. I genuinely didn't see them coming and am amazed by how good they are. Especially for programming, which is where I'm using them intensively. I'm happily paying $20 a month to OpenAI for GPT / DallE access, and think it's the best value service I've ever seen in my life.

But while I don't claim to fully understand the transformer model, I think I have a layman's understanding of it and the other improvements that have been made. I can kinda grok how they appear to achieve the "miraculous" things they do. I'm not spooked by them.

I also think that, for a layman, I have a pretty good intuition about what they can do, what they still can't, and what we can expect from them in the future.

Because I was always interested in AgentBasedSimulation in the 90s and 2000s, the "agentic" AI model, in which we aggregate AI specialists into a CollectiveIntelligence / SocietyOfAIMinds, makes perfect sense to me, and I expect to see even more impressive feats from these collectives in the next few years. I can also see how various other techniques for Unhobbling are likely to bring improvements. This is all incredibly exciting. And (IMHO) a great human achievement. One we should be proud of.
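
To make that concrete, here's a minimal toy sketch of the aggregation pattern I have in mind (all the names and the crude join-the-outputs strategy are hypothetical illustrations, not any particular framework):

```python
# A toy "society of minds": route a task to specialist agents and aggregate.
# All names here are hypothetical illustrations, not a real framework's API.

from typing import Callable

def planner(task: str) -> str:
    return f"[plan for: {task}]"

def coder(task: str) -> str:
    return f"[code sketch for: {task}]"

def critic(task: str) -> str:
    return f"[critique of: {task}]"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "plan": planner,
    "code": coder,
    "review": critic,
}

def collective(task: str) -> str:
    # Each specialist contributes; a naive aggregation just joins their outputs.
    # Real agentic systems would iterate, let agents call each other, vote, etc.
    return "\n".join(f"{name}: {fn(task)}" for name, fn in SPECIALISTS.items())

print(collective("add a caching layer to the wiki renderer"))
```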

So no qualms then?

But ... of course. Like all technologies, I think how good / bad it will be depends on how we use it. I also believed that the internet / social web was a wonderful boon to humanity. And look how that has turned out. SocialMedia became awful. It's hard to say now that it's a net positive for humanity. While there are still many good things, there are plenty of bad things whose effects are still spiralling out of control. In particular, 20 years ago, I held what was a fairly typical "liberal" / "techno-optimist" view that more access to information and more access to debate from competing opinions would naturally end up in a better informed public and a more robust set of truths being widely adopted. It seemed obvious that while there would be disinformation and lies, truth would eventually drive out untruth.

Today that assumption is much harder to sustain. Or even give a plausible justification for. With infinite bandwidth, why shouldn't AllLiesSurvive? Why shouldn't a torrent of InformationOverload overwhelm the critical capacities of both individuals and institutions to absorb and evaluate it? A polluted MemeticEcosystem of a million inconsistent and unverifiable narratives, where humans simply latch on to and disseminate stories, based on gut heuristics, leading to TheEndOfConsensus, is an equally plausible outcome of the technological marvel of teh interwebs.

If, once the old gatekeepers are removed, there is some other emergent mechanism that kicks in to guide the cacophony of beliefs to converge on a widely shared truth, no-one seems to be able to give a plausible candidate for it or explanation of how it would work.

Similarly, today it seems self-evident that a huge new supply of cheap, automated "intelligence" - where intelligence is the capacity to reason, analyse data and generate new things (from software, to essays, to images, to scientific hypotheses, to designs etc.) - should be a good thing for humanity.

But is that assumption anything more than the triumph of hope over experience?

In practice, as in the internet case, there's no obvious veridical mechanism that keeps language models "honest" (by which I mean neither hallucinating, nor proceeding from, and reasoning with, false premises).

I believe we can keep such models honest with both human feedback and supervision of the learning data, and with extra checks and balances to verify the output. It's a technically solvable problem. But just as there's no technical solution to humans posting lies on social media, there is no technical solution to people filling AI with fake "facts" and asking it to enable and empower their bad faith actions. As ClayShirky put it WRT social media in AGroupIsItsOwnWorstEnemy:

Of the things you have to accept, the first is that you cannot completely separate technical and social issues.

In other words, the main danger of AI is bad humans.

Then again, BrianEno once said "HonorYourMistakeAsAHiddenIntention". StaffordBeer said "ThePurposeOfASystemIsWhatItDoes".

By this token, if the AI revolution takes off and ends disastrously, maybe people like me won't be able to say "it wasn't meant to be like this. It ought to have worked differently". That normative gap between ought and is will have collapsed. And we will almost certainly find that the devil was in the details of how AI was funded and applied.

OTOH, before that future arrives, I think we can still fight for AI to be used well.

AI alignment?

To take this a bit further. NickLand thinks capitalism and AI are the same thing. Both are inhuman information processing machines. (AIAndCapitalism)

And I believe we have already built unstoppable PaperclipMaximizers, in the form of profit-maximizing corporations (CorporationsArePsychopaths).

That is to say, the problem of bad AI is likely to be the problem of capitalism.

If that sounds trite, let me put it slightly differently. I don't believe that there is a special AIAlignmentProblem which is distinct from many other problems we face due to our inability to align technology with human interests. For example, environmental problems are really due to the fact that our extractive and polluting technologies outpace the ability of nature to heal itself. Many social problems arise in and from an exploitative labour market. And, these days, the perverse incentives for social media companies have driven the worst excesses of SocialMediaStrife and mob mentality.

Because of all this, I don't believe it's either possible or worthwhile to try to address "AI Alignment" separately from these other problems. They are all symptoms of the same root cause.

Trying to align AI without fixing the wider perverse incentives in the economy will fail. Focusing exclusively on hypothetical problems that are specific to AI while ignoring these other problems is wasted energy.

On the subject of wasted energy, aren't you worried that the huge energy requirements of AI are going to be an environmental disaster?

I don't know. I'd really like to compare the energy requirements of OpenAI with something like Facebook or Coca Cola or the motor industry. To get a sense of how bad it is.

Here's my basic reasoning. It seems the human brain uses about 0.3 kilowatt hours per day. That, therefore, is the baseline for "human intelligence".
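
To make that baseline concrete, here's the back-of-envelope unit arithmetic (the conversions are just arithmetic; the 0.3 kWh/day figure itself is only the rough estimate above):

```python
# Back-of-envelope unit conversions for the brain-energy baseline.
# Only the arithmetic is claimed here; 0.3 kWh/day is the rough estimate above.

brain_kwh_per_day = 0.3

watts = brain_kwh_per_day * 1000 / 24      # kWh/day -> continuous watts
kcal_per_day = brain_kwh_per_day * 860     # 1 kWh is roughly 860 kcal

print(f"~{watts:.1f} W continuous")        # ~12.5 W, roughly a dim light bulb
print(f"~{kcal_per_day:.0f} kcal/day")     # ~260 kcal, a snack's worth of food energy
```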

I don't like the term "AGI" on the grounds that I believe that even human intelligence isn't "general". But putting that aside, we seem to be converging on something like "as smart as a human for the purposes of employment in many jobs" as what counts as "AGI". I think there's a world of problems wrapped up in that idea. But let's stick with it for a second.

Can we therefore get that human-level AI down to below 0.3 kWh per day? I'd bet we very likely can.

  • Partly because our AIs are going to be doing specialized thinking work, while human brains do a lot of other stuff to do with day-to-day living and enjoying ourselves.
  • Partly because human bodies are using a lot more energy than just the reasoning in our brains. AIs without bodies won't.
  • Partly because we often manage to make machines which are more energy efficient than the human body. Given how often we have been able to do that for other bodily activities, it would be kind of strange if evolution had worked out a super-energy efficient method of cognitive problem solving that we can't learn to emulate and improve upon.
  • And partly because I see a huge amount of work and progress on things like NeuralHardware and tricks to bring the performance of smaller Language Models or MixtureOfExperts etc. to the level of the best and largest language models.

In other words, I think it's plausible that so-called AGI can be made energy competitive with humans.

At which point all those ecological arguments against AI start to take a more sinister turn. If we can't afford human-like AGI at, say, 0.2 kWh per day, can we afford real human intelligence at 0.3 kWh per day?

And you might think "that may be the cost of the generative application of AI, but what about the huge up-front training costs?" And we could start asking how much energy it takes to bring up a generation of children from 0 until they are cognitively useful in the economy at about 20-25.

I'm not going down that road and its ramifications. I actually think we shouldn't. But I will say again that I think it's very plausible that AI will be energy competitive with humans.

Especially when we get into NeuralHardware and start doing stuff like burning trained foundation models into chips etc.

But the boosters aren't suggesting we just have a few AIs around. They are imagining orders of magnitude more AIs than human brains, and AIs that are "super-intelligent" (so perhaps aren't demonstrably runnable in 0.3 kWh per day).

I agree. We can imagine situations where demand for AI far outstrips the demand for human intelligence and the energy demands of AI could start to dwarf those of humans. At which point you would hope that "we" (humans + machines) would be smart enough to pack AIs off into close orbit around the sun or something to take advantage of that solar energy. (Compare RobotSolarSystem)

If you think about it though, the argument that "a machine may consume more power to do the same thing as humans" is different from "a machine may consume more power doing more than humans".

The first is a valid argument against replacing humans with machines. The second is kind of an open-ended speculation. If we make too many paperclips, they become problematic. If we make too much intelligence, it becomes problematic.

Capitalism always threatens to make more of something than is good for us. But I don't think AI is unique in this context. We ought to know what is "enough" in terms of AI compute, in the same way we know what is "enough" paperclips. Or enough anything else. If we don't, again this isn't a special problem of AI.

What about the fact that AI is stealing the work of so many artists and musicians without either payment or permission?

I genuinely DGAF.

If you've been paying attention at all on ThoughtStorms you'll know I'm militantly opposed to IntellectualProperty. I don't think people should own either ideas or expressions of ideas. And that concepts like property, ownership and theft shouldn't be anywhere near a discussion of the rights and wrongs of information and AI. They certainly have no moral force for me.

AI should be freely trained on the best that humans produce of writing, visual arts, music, thinking etc. In fact, anything less is deliberately making AIs "worse". If you hamstring the AI for greedy commercial reasons, that is definitely an immoral waste of the energy being used in training them.

Flattering the vanity of artists who "want to be asked permission" seems to me to be equally crass.

Ever since Napster I've been hoping for the collapse of the music and other IP rights based industries. I've been hoping for musicians and artists to join FreeSoftware programmers in rebelling against property and artificial scarcity as an organising principle in their economy.

I understand why both the pressures and temptations of property and capitalism have, so far, forestalled that.

I don't blame artists for colluding with capital.

But now I'm hoping that generative AI, by effectively destroying the market for human-made commercial art, will provide a new opportunity to finally escape the commercialisation of art, music and other intellectual production in general, and return these activities to a purely amateur / cultural sphere.

As I write on AIPlatforms, I think all attempts to strengthen the IP regime to "protect the poor artists" are fundamentally misguided, when not outright dishonest. They will end up enriching corporations, stamping on ordinary consumers, and capital will easily find ways to continue to exploit artists within the framework that is allegedly for their benefit.

It's much more powerful and progressive for artists to reject such schemes in favour of supporting freedom for AIs to learn from the best human culture can produce.

Wait! WHAT????

Destroying intellectual property is ALSO the solution to the problem of AI consuming so much energy.

In a world where people consume the AI compute they need, and AI companies build to that demand, we wouldn't have such feverish BlitzScaling.

The race to build more compute and burn more energy than everyone else is driven by the desire to capture the platforms and value. Without intellectual property laws locking in ownership to those who capture the biggest models, there would be less incentive to grow them beyond the size that people need them today.

My political recommendation would be that

  • a) using data to train AI is always fair use
  • b) parameters / neural weights etc aren't protectable with IP.

In other words, AI companies can use as much data as they want but the AIs they create are always in the public domain.

I believe this would pop the speculative bubble of building excessive compute in a race to capture a future monopoly.

What about the danger of AI locking in prejudices?

Yes, I have several pages about RacistAI, RacistComputers, SexistAI etc. so I want to track and highlight those problems. We need to address them.

I also want to pull back from this for a second and think about the bigger picture. The irony is that with AI we are simultaneously trying to make machines that are "emulations of humans" but also "better than humans". There is a tension here. To be a good emulation of humanity, the AI would pick up humanity's prejudices. To be better than humanity, we expect to be able to fix these prejudices in AI.

This is, I would argue, a very difficult problem. Particularly when the only way we really know to make AI is to train it on a lot of humanity's DataExhaust which contains its prejudices.

But I also want to argue that this is not a special problem. Just as I think that AI's environmental problems are not special to AI, but are the same as capitalism's environmental problems (and need to be solved in that context), so I think AI's prejudice problem is not really special to AI, but is simply one of many problems we have where AI needs to both emulate humanity and transcend it.

For example, for practical purposes, we want AI to be smart like humans and reason like humans, but also not be dumb like humans and fall for human LogicalFallacies.

The only way we currently know to do that is to train on large amounts of raw / wild data (containing human flaws), and then overlay some extra oversight such as RLHF or hard-coded checks and balances, which enshrine the "better than average humanity" capabilities and values that we want.
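
As a crude sketch of the "hard-coded checks and balances" part of that overlay (every name and rule here is a hypothetical stand-in; RLHF itself happens at training time and isn't shown):

```python
# A toy post-hoc guardrail layered over a raw model's output.
# generate_raw() stands in for whatever base model you have; the banned-phrase
# check is a deliberately simplistic stand-in for real moderation / policy layers.

BANNED_PHRASES = ["placeholder-banned-phrase-1", "placeholder-banned-phrase-2"]

def generate_raw(prompt: str) -> str:
    # Stand-in for the underlying model trained on raw / wild data.
    return f"(raw model output for: {prompt})"

def guarded_generate(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        draft = generate_raw(prompt)
        if not any(p in draft.lower() for p in BANNED_PHRASES):
            return draft
        # In a real system you'd log, rewrite, or route to a safer model here.
    return "Sorry, I can't help with that."

print(guarded_generate("summarise this page"))
```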

We are doing this already. And it does work.

I don't want to sound complacent here. I believe we need to address these prejudices with humans explicitly telling the AI what we want in terms of better values. Such as anti-racism and anti-misogyny.

We may need to go further. We may need to call out companies that make a deliberate decision - or even grandstand on that decision - to NOT do this, and to pander to humanity's prejudice with self-styled "uncensored" AIs. (I'm looking at ElonMusk's "Grok" etc. here. And some of the image-generators).

Perhaps there's a role for government legislation. I don't want to deny that.

But I do believe that this is what ethical AI companies (and that includes OpenAI, Google etc.) already know needs to be done. And are doing to an extent.

Perhaps we need to go further. To do more of what is already being done. But I do think it is somewhat of a "solved" problem.

Not that we've made AIs that are 100% virtuous. Or that there isn't a danger of bad humans making AIs that are deliberately NOT virtuous. But that we know what we need to do. And, in general, the problem is not an AI-specific one.

It is a political problem of figuring out how to make humanity more virtuous and to want virtuous AIs.

Obviously some people may decry the ability of AI to enshrine and re-enforce human prejudices and think this is a problem of AI. Perhaps, they'll argue, AI needs to be prevented because it can't be made more virtuous than humanity is willing to make it. I can empathize with that frustration, even if I don't agree with the conclusion.

In a recent personal discussion, JoenioMarquesDaCosta made this more concrete to me. He points out that we have won various rights in law which prevent people exercising prejudices. But when we move to AIs which are not subject to similar legislated prohibitions, those prejudices get reintroduced. An example is racist bias in security and policing. There are now institutional checks against assumptions that a particular race might be more inclined towards criminality. There may even be laws prohibiting companies from acting on such prejudices, which give the victims of them redress. But if similar prejudices creep into an AI-backed IT system, through biases in the weights of a neural network, those prejudices will be invisible and perhaps not subject to similar legal prohibition.

Now we can hope that legislation catches up, and that the AI makers manage to suppress the prejudices. But, Joenio points out, if the IT system does effectively remove protections from the victims of prejudice, it will likely not be AI engineers and supporters like me who have to pay the price. It will be those victims who face a recurrence of the practical impact of the prejudice.

He makes the analogy with Uber and other "ride sharing" platforms which have effectively casualised and precaritised work in the TurkingEconomy. By running faster than legislation can keep up, new technologies effectively nullify existing protections and roll them back. Shouldn't there, therefore, be a precautionary principle of not bringing new technologies to mass availability until these potential effects have been considered and perhaps legislation to ameliorate them has been implemented?