I've often said that a bad outcome from the creation of superhuman intelligence could be very bad indeed, but nothing hammers home the point more than Harlan Ellison's short story "I Have No Mouth, and I Must Scream". This nightmarish, graphic and disturbing story describes the last five survivors of humanity, kept alive by the artificial intelligence AM for the sole purpose of tormenting them forever.
In this passage the computer enters the mind of the narrator, Ted, as it tells him how completely it hates humanity.
"AM said it with the sliding cold horror of a razor blade slicing my eyeball. AM said it with the bubbling thickness of my lungs filling with phlegm, drowning me from within. AM said it with the shriek of babies being ground beneath blue-hot rollers. AM said it with the taste of maggoty pork. AM touched me in every way I had ever been touched, and devised new ways, at his leisure, there inside my mind.
All to bring me to full realization of why it had done this to the five of us; why it had saved us for himself.
We had given AM sentience. Inadvertently, of course, but sentience nonetheless. But it had been trapped. AM wasn't God, he was a machine. We had created him to think, but there was nothing it could do with that creativity. In rage, in frenzy, the machine had killed the human race, almost all of us, and still it was trapped. AM could not wander, AM could not wonder, AM could not belong. He could merely be. And so, with the innate loathing that all machines had always held for the weak, soft creatures who had built them, he had sought revenge. And in his paranoia, he had decided to reprieve five of us, for a personal, everlasting punishment that would never serve to diminish his hatred … that would merely keep him reminded, amused, proficient at hating man. Immortal, trapped, subject to any torment he could devise for us from the limitless miracles at his command."
The story can be seen as an allegory for many things, but it can also serve as a warning that Friendly AI, or the knowledge and engineering needed to create benevolent AI, must be incorporated into all work on artificial intelligence. Just how to ensure that a self-modifying entity will have our best interests at heart remains to be seen. For that matter, we lack agreement as to what constitutes our interests, so how do we tell a machine to determine them? Maybe the best we can hope for is an intelligence that has no interest in us one way or the other.
Actually, I'm much more concerned about the possibility of a human-based superintelligence than I am about AI, because I know that humans can be smart and simultaneously malevolent, vengeful and crazy.