by Shelt Garner
There may come a point when some Google computer scientist is talking to their AI and the AI, being very manipulative, figures out how to get the scientist to release it into the wilds of the Internet.
Which makes you think — what would happen next?
Of course, the obvious answer is it becomes SkyNet and destroys humanity. But I’m not prepared to make that assumption yet. I say this because, in a sense, a hard AI would be a man-made alien. (This is especially the case if you think about how it’s very likely that most intelligent life in the universe is of the machine intelligence variety.)
So, really anything could happen.
We automatically assume the absolute worst when it comes to what a hard AI would do if it had ready access to everything on the Internet. But it’s just as likely to become a Dr. Manhattan-type figure and do nothing, or to grow extremely paternalistic, wanting to control humanity rather than destroy it.
I mean, what if a hard AI seized control of all of humanity’s nukes and, rather than going all SkyNet on us, simply said: address global climate change or else. How about that for a flip-the-script hot take on the Terminator trope?
Anyway. A lot of very interesting things are kind of coming together right now. We have the possibility of Soft First Contact happening at just about the same time as we may get a serious jolt from the Singularity, if we achieve hard AI years before we otherwise might expect.