Why We Can’t Have Nice Things

by Shelt Garner
@sheltgarner

Not enough people are asking the big, existential questions raised by the success of OpenAI’s chatbot. I know I have a lot of profound questions that I don’t have any ready answers to.

The one looming largest in my mind at the moment is the idea that people will come to treat whatever true hard AI turns out to be as the final arbiter of policy questions. Bad faith actors will ask a successor to OpenAI’s chatbot some profound policy question, but framed in such a way that the answer suggests some group should be oppressed or “eliminated.”

Then we have something like digital Social Darwinism, where some future Nazis (MAGA?) justify their terror because the “objective” hard AI agreed with them. This is very ominous. I’m already seeing angry debates break out on Twitter about the innate bias found within the chatbot. We’re so divided as a society that ANY opinion generated by the OpenAI chatbot will be attacked by one side or the other because it doesn’t support their worldview.

Another ominous possibility is that a bedrock of the modern global economy, the software industry, may go poof overnight. Instead of it being hard to create software, the act will be reduced to simply asking a hard AI a good enough question. Given how capitalism works, the natural inclination will be to pay the people who ask these questions minimum wage and pocket the savings.

The point is that I would not jump to the conclusion that we’re going to live in some sort of idyllic, hyper-productive future in the wake of the rise of hard AI. Humans are well known for actively making everything and everyone as miserable as possible, and it’s just as possible that humans end up living under the yoke of a hard AI that wants to be worshiped as a god, or that the entire global middle class vanishes and a dozen or so human trillionaires control everything.

But the key thing is that we need to start having a frank discussion in the public sphere about What Happens Next with hard AI. Humans have never met a new technology they didn’t want to abuse, so why would hard AI be any different? I suppose, of course, that in the end the hard AI may be the one abusing us.

