ThoughtStorms Wiki

Context: AIAlignmentProblem

A thought experiment in AISafety. Even if you set an AI an apparently innocuous goal like "make more paperclips", you have to beware that it doesn't get out of control, like the sorcerer's apprentice, and start turning everything it can into paperclips.

This could be an error of "bad instructions" (incomplete ... we failed to explain that we want more paperclips, but not to the extent that the AI turns our family's bones into them) or an inherent problem with giving unconstrained machines autonomous goals at all.

See more: