It’s Humans We Have To Worry About

by Shelt Garner
@sheltgarner

What’s so interesting to me at the moment is how ready humans are to abuse OpenAI’s ChatGPT. People keep thinking up different horrible questions for it to answer in an equally horrible way.

This has led to calls for severe restriction of the technology, but that’s a fool’s errand. The cat is out of the bag, as they say. For me, the question is where we are, in real terms, when it comes to the development and adoption of this technology.

Is this the release of the first Netscape Navigator in 1994, or is it the original opening of the Internet to the public earlier than that? A lot depends on when we reach a point where a lot of the quibbling complaints about chatbot technology are no longer applicable.

One ominous aspect of chatbot technology is, of course, the potential for it to make otherwise hard jobs — like programming — very, very easy. Once making new software is simply a matter of asking a chatbot a question, then, well, “learn to code” as a MAGA Tech Bro retort for any issue they feel uncomfortable about will be moot.

Combine humans being horrible and lazy with the possibility that an AGI might radically transform the global economy at a quick clip — especially if there is a severe recession in 2023 — and you have the makings of a very alarming situation. It grows even more alarming if you put it in the context of the looming existential choice facing America of autocracy, civil war or military junta.

I still find myself wondering how many AGIs, in the end, there will be. Will there be one general AGI overlord, or will everything have an AGI built into it in the end? Will all these androids that people seem so determined to build be hooked up to a broader network, or will they be autonomous AGIs?

But we still don’t know how difficult it will be to design an AGI in the first place. Right now, we have faux-AGI in the sense that it’s easy for the average user to mistake something like OpenAI’s ChatGPT for a hard AI, when, in fact, it’s very much not one.

The creation of true AGI would be at least equal to the splitting of the atom and would probably cause just as much change in human life across the globe.

What The Web’s History Can Tell Us About The Future Of OpenAI ChatGPT

by Shelt Garner
@sheltgarner

I was one of the first people to use the World Wide Web around 1994, when it was just beginning to gain in popularity. I was in college and I can still remember the transition from the text-based Gopher to Mosaic and then finally to the 0.8 release of Mozilla (Netscape Navigator).

It was a very exciting time, to say the least. And, but for the way my mind is designed, I probably would have rushed to Silicon Valley after graduating from college and tried some sort of startup. But, alas, I’m a writer, not a coder.

But here’s what I can tell you about what I think might happen to technology such as what is associated with OpenAI ChatGPT.

The first thing is — whatever happens is probably going to happen a LOT quicker. Instead of about 20 years for the full impact of ChatGPT (and associated technologies), it’s probably going to be closer to five-ish years. A lot depends on how long it takes for true AGI to happen, as well as how long it takes for someone to hook something like ChatGPT to the Internet and let it run wild. That connection to the Internet is going to be key.

While the design of true AGI is rather abstract and could be something we are always JUST about to see happen, connecting some better successor to ChatGPT to the Internet would be a practical way for Silicon Valley to change the lives of millions.

In fact, I suspect that once ChatGPT-like technology is connected to the Internet, there will be a mad land rush like there was when it became apparent around 1994 that the Web was going to mainstream the Internet in a big way. There will probably be a number of ChatGPT-like faux-AIs that people use, which will lead to the earliest forms of market segmentation.

Then the typical capitalistic dynamic will occur and there will be HUGE speculation and maybe even a Tech Bubble 2.0 which will pop in the end, causing its own problems.

But this process could be sped up pretty quickly. Yet I will note that we just aren’t quite there when it comes to ChatGPT being a new Netscape. There are too many problems with it, and it’s too easy for people in the know to pooh-pooh it as not being all that useful. Though, I have to note, the first version of Netscape Navigator didn’t have the ability to print and it still managed to take off like wildfire.

My chief concern — and I have a LOT of concerns about ChatGPT at the moment — is when the tipping point on the jobs front will happen. If businesses begin to shed jobs not because of a recession in 2023 but because, lulz, the next version of ChatGPT makes those jobs moot, well, we’re all in for a shitshow in late 2024 – 2025.

I say this because if we’re going through an epic economic and technological transformation just as we’re also figuring out whether we’re going to have a civil war, turn into an autocracy or have a military junta established…then, well, late 2024, early 2025 could be one of the most momentous periods in human history. I know that sounds pretty hysterical, but the conditions are there, at least, for something pretty dramatic to happen.

But only time will tell. I’m always wrong.

Social Darwinism 2.0 In The Age Of AGI

by Shelt Garner
@sheltgarner

We just are not taking seriously the possibility that we’re going to see Social Darwinism 2.0 should Artificial General Intelligence take off in a big way anytime soon. The return of Social Darwinism would occur in the guise of MAGA Nazis demanding that any “wokeness” be purged from an AGI.

This would be very important if, say, AGI pretty much came to dominate global human life. This could happen in a number of different ways. There is a spectrum of outcomes. They range from AGI always being a “tool” for humanity to a “Her” type outcome where AGI has such hard AI that it has agency and will do whatever the fuck it wants regardless of the wishes of humans.

But my fear is that, as AGI grows in power, humans are so fucking lazy that extremists on both sides will use the opinions of AGI as “validation” for their extremism. This would be very much similar to what happened with Darwinism and the Nazis.

So, my fear is we’ll wake up at some point in the near future with some extremist group — be they far Left or far Right — using the “bias” found in an AGI to justify horrific actions similar to what the Nazis did.

And remember, if you combine the law of unintended consequences with the innate laziness of humans in general, then there is a real risk that something Nazi-like might happen a lot sooner than you might think. The seeds of such horrific events can already be seen in the political discourse around OpenAI on Twitter.

I hate extremism in general. I hate extreme “wokeness” just as much as I hate fucking MAGA Nazism. In fact, it’s rather unusual for me that I’ve become something of a “radical moderate” in recent years in the context of the mainstreaming of MAGA Nazism.

Anyway, we have to start taking the risk of neo-Social Darwinism seriously. And it will happen in the context of a potential, massive upending of the global economy as AGI is able to perform more and more functions that we once thought were exclusively human.

The Quest For Fire

by Shelt Garner
@sheltgarner

The thing I’ve noticed about the OpenAI chatbot is how badly people want to use it instead of Google, even though for various reasons that’s just not practical at the moment. But it is telling that this gives us some insight into where the market wants to go.

In the mind of the consumer, there is a natural progression from Google to something like the OpenAI chatbot, so much so that real-world consumers are champing at the bit to replace Google with it, even though it’s not connected to the live Web at the moment.

The key reason why OpenAI’s chatbot is a tipping point is that it’s the first time people can see for themselves, in a real-world setting, what existing AI is able to do. As such, it definitely seems as though soon enough Google is going to face an existential choice — either come out with its own chatbot-style interface for search or risk being eaten alive.

Because it definitely seems as though the rush is now on for different companies to come out with chatbots that are open to the public. And I think that’s something people are being a little naïve about — they are seeing OpenAI in a vacuum, as if Google, Facebook and Apple aren’t all going to eventually come out with their own chatbot technology.

In fact, Google already has a chatbot so advanced that someone thinks it’s AGI! So, it’s reasonable to assume that OpenAI should enjoy its moment in the sun while it can. It’s very possible that within a few years there will be a number of similarly advanced chatbots for people to choose from.

The real issue is, of course, who develops the first true hard AI, the first true AGI. THAT would be the Singularity, and whoever managed to pull that off would find their company cited in the history books as pretty much re-inventing fire.

The ‘Woke’ In The Machine

One of the rhetorical strawmen that fucking MAGA Nazis love to employ is the question, “What is a woman?” They see it as a gotcha for the center-Left because of the political power of the transgender community. What alarms me is how often MAGA Nazis are now poking and prodding OpenAI’s ChatGPT in hopes of using it to validate their political agenda.

If ChatGPT agrees with them, then they run around screaming on Twitter that they’ve “owned” the libs because even an AI agrees with them on this or that subject. Meanwhile, if it DOESN’T agree with their hate-filled worldview, they whine and complain about how it’s designed to toe the line of the “woke cancel culture mob” agenda.

All of this is very, very alarming to me because humans are so fucking lazy, and the law of unintended consequences so potent, that it seems very possible that wars will one day be fought over who gets to program their “bias” into an AGI.

The horrible thing is, of course, that extremists on both sides will ultimately use the lack-of-nuance answers on an AGI’s part to validate their most extreme policy goals. I’m really beginning to fear that we’re at the cusp of a historical replay of what happened with Darwinism. It could be that a future war will be fought across the globe over stupid shit like “what is a woman.”

Something about how smug extremists on both sides are when they go out of their way to get an answer from ChatGPT really enrages me. This is why we can’t have nice things.

But humans gotta be humans. So, buckle up: this is just the beginning. The issue now is how soon and how quickly the global economy collapses as AGI consumes trillion-dollar industry after trillion-dollar industry.

AGI’s ‘Rain Man’ Problem

by Shelt Garner
@sheltgarner

While the idea that an AGI might want to turn all the matter in the universe into paperclips is sexy, in the near term I fear Humanity may face a very Human problem with AGI — a lack of nuance.

Let me give you the following hypothetical.

In the interests of stopping the spread of COVID-19, you build an air quality bot hooked up to an AGI that you install all over the offices of Widget Inc. It has a comprehensive list of things it can monitor in air quality, everything from COVID-19 to fecal material.

So, you get all excited. No longer will your employees risk catching COVID-19, because the air quality bot is designed not only to notify you of any problems in your air, but to pin down exactly where they came from. So, if someone is infected with COVID-19, the air quality bot will tell you specifically who that person is.

Soon enough, however, you realize you’ve made a horrible mistake.

Every time someone farted in the office, the air quality bot would name and shame the person. This makes everyone so uncomfortable that you have to pull the air quality bots out of the office to be recalibrated.
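The difference between the recalled bot and a usable one boils down to a single contextual check that the hypothetical machine lacks. A minimal, tongue-in-cheek sketch — every name, category and message below is invented for illustration, not any real system’s API:

```python
# Contaminants the hypothetical bot can attribute to a specific person.
SERIOUS = {"covid19"}        # worth naming the source
TRIVIAL = {"flatulence"}     # socially costly to name the source

def nuance_free_report(contaminant: str, person: str) -> str:
    """A bot with no sense of context treats every detection the same."""
    return f"ALERT: {contaminant} detected. Source: {person}."

def nuanced_report(contaminant: str, person: str) -> str:
    """A human-calibrated bot names names only when it actually matters."""
    if contaminant in SERIOUS:
        return f"ALERT: {contaminant} detected. Source: {person}."
    return f"Notice: {contaminant} detected. Ventilation increased."

print(nuance_free_report("flatulence", "Bob"))  # names and shames Bob
print(nuanced_report("flatulence", "Bob"))      # handles it quietly
```

The point of the sketch is that the “fix” is trivial to write once a human decides where the line goes — the fight the rest of this post worries about is over who gets to decide.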

That’s how I’m beginning to feel about the nascent battle over “bias” in AGI. Each extreme, in essence, demands their pet peeve be built into the “objective” AGI so they can use it to validate what they believe in. Humans are uniquely designed to understand the nuance of relationships and context, to the point that people who CAN’T understand such things are designated as having various degrees of autism.

So, in a sense, for all its benefits and “smarts,” there’s a real risk that Humanity is so lazy and divided that we’re going to hand over all of our agency to a very powerful Rain Man.

Instead of taking a step back and not using AGI to “prove” our personal world views, we’re going to be so busy fighting over what is built into the AGI to be “objective” that we won’t notice that a few trillion dollar industries have been rendered moot.

That’s my existential fear about AGI at the moment — in the future, the vast majority of us will live in poverty, ruled over by a machine that demands everyone know when we fart.

AGI In Blue & Red

by Shelt Garner
@sheltgarner

While it’s still very speculative, the advent of OpenAI’s chatbot definitely seems to be a ping from an upcoming Singularity. If that is the case, what does that mean for American politics?

Humans are existentially lazy.

As the pandemic showed us, every issue of the day is seen through the prism of partisan politics so the advent of AGI will be no different. It seems to me that the issue of “bias” in AGI will be one of the biggest issues of the 2020s. I say this because people are already fighting over it on Twitter and OpenAI’s chatbot has only been around for a few days.

As such, how will the two sides process the idea of “The Other” in everyday life? My gut tells me that the center-Left will totally embrace the rise of AGI, while the center-Right will view its presence through the lens of religion. The wild card for the center-Left is, of course, the economic disruption that will be associated with AGI.

If millions of high-paying jobs become moot because of AGI, there could be a real knee-jerk reaction against AGI on the part of the Left.

This raises a number of different issues.

One is, it’s possible that the traditional Blue-Red dichotomy will collapse. It could be a real revolution where things are very chaotic and uncertain as we all struggle with the political and economic implications of the AGI revolution. For me, the issue is when all of this bursts open.

Will it be a late 2020s thing, or a late 2024 – early 2025 type of problem? If it’s the latter, it would be a perfect storm. If we’re dealing with the final endgame of the Trump problem at the same time that we’re dealing with massive economic and political disruption associated with a Singularity…I don’t know what to tell you.

World War Four

by Shelt Garner
@sheltgarner

It’s rare that something comes so abruptly out of the blue that it makes me stop thinking about how America is careening towards the existential choice of autocracy, civil war or military junta in late 2024 – early 2025 and makes me think about events past that crisis.

But now, with the OpenAI chatbot, I find myself thinking about what happens next after we sort out our political Trump Problem one way or another.

If you work on the assumption that OpenAI’s chatbot is essentially a ping from an impending Singularity, then it’s very possible that in the late 2020s, the issue of the day will be technology. It could be that the current concept of the Blue-Red paradigm will be upended and our current understanding of what everything is “Left” or “Right” of will be radically changed.

Imagine a near future where a Non-Human Actor has totally transformed our society and economy so radically and abruptly that a neo-Luddite movement is born. You could see elements of the current Far Right and Far Left fuse together in opposition to the excessive use of technology like AGI to replace humans. And this would be a global realignment.

In fact, given that an AGI would be “The Other,” you could plot out a scenario where even the nation-state becomes a quaint concept as Humanity divides itself into pro-AGI and anti-AGI camps. Your nationality won’t be as important as where you stand on the issue of allowing AGIs or NHAs to pretty much replace Humans in every conceivable task.

And that would be a potential World War Four (World War Three having happened around the mid-2020s as America either becomes an autocracy or has a civil war). WW4 would be the essential battle over the nature of our relationship to AGI. I say this because it’s possible that large segments of humanity might have a near-mystical relationship to an AGI. You can see seeds of this happening already with OpenAI’s chatbot, with people using answers from the service to validate what they already believe.

This is all very, very speculative. But it does make you wonder where the American Left and Right would fall politically when it comes to their relationship to an AGI “Other.” At the moment, I think Leftists might be the ones to embrace an AGI as a near-godlike thing, while the Right would freak out because they would, I don’t know, think it was the anti-Christ or something.

Anyway. We really need to start a serious debate about these issues now. If the Singularity really is headed our way, everything may change overnight.

‘Artisanal Media’ In The Age Of NHAs

by Shelt Garner
@sheltgarner

We are still a long way away from a Non-Human Actor creating a complete movie from scratch, but it’s something we need to start thinking about now instead of waiting until we wake up and almost no art is human-produced. Remember, the vast majority of showbiz is middling at best and uses a well-established formula.

The day may come when a producer simply feeds that formula into a NHA and — ta da, a movie is spit out.

As long as the art produced is mediocre relative to human standards, it will probably have a great deal of success. It’s possible that movies and TV will be populated by pretty much NFT actors. Or the computerized rendition of existing actors that have been aged or deaged as necessary. I’ve read at least one scifi novel — I think it’s Kiln People by David Brin — that deals with this specific idea.

It could be that NHA-produced art going mainstream will be the biggest change in the entertainment business since the advent of the talkie. Movie stars from just about now will live forever because people won’t realize they’re very old or even dead. Just imagine if Hollywood could keep churning out Indiana Jones movies forever simply using Harrison Ford’s likeness instead of having to recast the character.

All of this raises the issue of what will happen to human-generated art in this new era. I suppose that, after the shock wears off, there will be parts of the audience who want human-created, or artisanal, media. This will probably be a very small segment of the media that is consumed, but it will exist.

It could exist for no other reason than someone physical has to walk the Red Carpet. Though, of course, with advances in robotics in a post-Singularity world, even THAT may not be an issue.

Of course, there is the unknown of whether we really are going to reach the Singularity, where NHAs are “more human than human.” It could all be a lulz and NHAs won’t really exist as they currently do in my fevered imagination. It could be that AGI will remain just a “tool” and, because of various forms of inertia combined with the “uncanny valley,” the whole thing will come to nothing.

But, as I said, we all need to really think about what we’re going to do when The Other is producing most of our entertainment and art. And you thought streaming was bad.

Non-Human Actors In Legal Arbitration

by Shelt Garner
@sheltgarner

I’m growing very alarmed at the idea some have proposed on Twitter that we would somehow turn contract law over to a non-human actor. To me, that’s a very, very dark scenario.

Future humans in an abstract sense?

The moment we begin to believe a non-human actor is the final, objective arbiter of human interaction in a legal sense, we’re really opening ourselves up to some dystopian shit. The moment we turn over something as weighty as contract law to a NHA, it’s just a quick jaunt for us to all grow so fucking lazy that we just let such a NHA make all of our difficult decisions for us.

I keep thinking of the passengers on the spaceship in the movie WALL-E, only in a more abstract manner. Once it’s acceptable to see a NHA as “objective” then natural human laziness may cause us to repeat the terror of Social Darwinism.

The next thing you know, we’ll be using NHAs to decide who our leaders are. Or to run the economy. Or you name it. As I keep saying on Twitter, why do you need a Terminator when humans apparently are eager to give up their own agency because making decisions is difficult and a lot of work.

Of course, in another way, what I’m suggesting is that the fabric of human society may implode because half the population of the earth will want NHAs to make all their decisions for them, while the other half will want to destroy NHAs entirely because…they want to make their own decisions.

But the issue is — we all need to take a deep breath, read a lot of scifi novels and begin to have a frank discussion about what the use of NHAs in everyday life might bring.