AIProblems
ThoughtStorms Wiki
Context : ArtificialIntelligence, PoliticsOfArtificialIntelligence
(ReadWith) MyFearsAboutAI, AIFailures, EthicalAI, BadBots
- FakeFriends : Plausible bots will try to "befriend" you on social media in order to rip you off.
- AIThreatsToPrivacy : An AI on your computer, watching everything you do, can always rat you out.
- AIAndTheDownwardSpiralOfIntelligence : AI depends on a huge pool of free human expertise online, but over time it creates the incentives to destroy that pool of expertise, because a) we don't invest in educating people, and b) AIs get used to flood the zone with Disinformation.
- AISlop
- AI will build in all the prejudices : RacistAI / SexistAI
- Drones will invade your physical privacy and rob your home : DroneSwarms, FlyingRobots
- AIMindReading
- TheEthicsOfForecasting discusses the implications of AI being successful at predicting our behaviours.
Over on DanielEstrada's G+ there's a discussion about ElonMusk's recent warnings about the dangers of AI. (AIAlignmentProblem)
Daniel thinks Musk may be talking things up for ulterior motives, but I think it's more a question of messaging.
I don't suppose Musk's thinking is fuzzy on this. By most accounts he's smart, and he's undoubtedly got access to plenty of people in his circle of friends / employees / the general Silicon Valley hive-mind who know something about what "real AI" is and is capable of.
I suspect you're closer in saying he doesn't know how to tailor the message for the public. He's got to figure out how to explain it to people in a way that catches their attention, that they can understand, and that doesn't get him dismissed as crazy. Terminator IS a reference everyone knows and understands. But, of course, it invites the "crazy" response.
Everyone who understands these things knows that 99.99% of the danger is not the one rogue super-AI that goes out of control, but the layers and layers of "somewhat smart" infrastructure we're slotting into place, which has real power over our lives, and is effectively out of OUR control, because no-one really understands it or can be bothered to monitor it properly. Those layers of semi-smart systems (from Siri to autonomous cars, to the computers that analyze and manage your credit rating and health records, to the next generation of security-guard drones and police crowd-control hardware) offer four kinds of danger :
- the people who make them and slot them into our lives may be working against our own interests.
- they are vulnerable to malicious crackers who can seize control of them to work against us.
- they can have flaws or develop faults that mean they misbehave.
- they may develop their own goals which are against our interests. (Not necessarily full hostility, but localized hostility.)
When AIs get robots (and corporations) as actuators, then all four problems are exacerbated.
Police robots will fire tear-gas and rubber bullets at you. Crackers will pwn your home-security robot to break down the door it's meant to be guarding. Autonomous delivery trucks will plough through skateboarders they weren't programmed to see. An AI bank may deduce we're not valuable enough customers to deal with, and so we lose our access to banking services.
But I can see why that's a hard and complex message to try to get out to the public. And Musk may decide to use "terminators" as short-hand just to try to push people into thinking about the subject.
See Also :