World War Four

by Shelt Garner
@sheltgarner

It’s rare that something comes so abruptly out of the blue that it stops me from thinking about how America is careening toward the existential choice of autocracy, civil war or military junta in late 2024 – early 2025 and gets me thinking about events past that crisis.

But now, with the arrival of the OpenAI chatbot, I find myself thinking about what happens next after we sort out our political Trump Problem one way or another.

If you work on the assumption that OpenAI’s chatbot is essentially a ping from an impending Singularity, then it’s very possible that in the late 2020s the issue of the day will be technology. It could be that the current Blue-Red paradigm will be upended and our current understanding of what everything is “Left” or “Right” of will be radically changed.

Imagine a near future where a Non-Human Actor has totally transformed our society and economy so radically and abruptly that a neo-Luddite movement is born. You could see elements of the current Far Right and Far Left fuse together in opposition to the excessive use of technology like AGI to replace humans. And this would be a global realignment.

In fact, given that an AGI would be “The Other,” you could plot out a scenario where even the nation-state becomes a quaint concept as Humanity divides itself into pro-AGI and anti-AGI camps. Your nationality won’t be as important as where you stand on the issue of allowing AGIs or NHAs to pretty much replace Humans in every conceivable task.

And that would be a potential World War Four (World War Three having happened around the mid-2020s as America either becomes an autocracy or has a civil war). WW4 would be the essential battle over the nature of AGI and our relationship to it. I say this because it’s possible that large segments of humanity might have a near-mystical relationship to an AGI. You see seeds of this happening already with OpenAI’s chatbot, with people using answers from the service to validate what they already believe.

This is all very, very speculative. But it does make you wonder where the American Left and Right would fall politically when it comes to their relationship to an AGI “Other.” At the moment, I think Leftists might be the ones to embrace an AGI as a near-godlike thing, while the Right would freak out because they would, I don’t know, think it was the anti-Christ or something.

Anyway. We really need to start a serious debate about these issues now. If the Singularity really is headed our way, everything may change overnight.

‘Artisanal Media’ In The Age Of NHAs

by Shelt Garner
@sheltgarner

We are still a long way away from a Non-Human Actor creating a complete movie from scratch, but it’s something we need to start thinking about now instead of waiting until we wake up and almost no art is human-produced. Remember, the vast majority of showbiz is middling at best and uses a well-established formula.

The day may come when a producer simply feeds that formula into a NHA and — ta da, a movie is spit out.

Even if the art produced is merely mediocre by human standards, it will probably have a great deal of success. It’s possible that movies and TV will be populated by pretty much NFT actors, or by computerized renditions of existing actors that have been aged or de-aged as necessary. I’ve read at least one scifi novel — I think it’s Kiln People by David Brin — that deals with this specific idea.

It could be that NHA-produced art going mainstream will be the biggest change in the entertainment business since the advent of the talkie. Movie stars from just about now will live forever because people won’t realize they’re very old or even dead. Just imagine if Hollywood could keep churning out Indiana Jones movies forever simply using Harrison Ford’s likeness instead of having to recast the character.

All of this raises the issue of what will happen to human-generated art in this new era. I suppose that after the shock wears off, there will be parts of the audience who want human-created, or artisanal, media. This will probably be a very small segment of the media that is consumed, but it will exist.

It could exist for no other reason than someone physical has to walk the Red Carpet. Though, of course, with advances in robotics in a post-Singularity world, even THAT may not be an issue.

Of course, there is the unknown of whether we really are going to reach the Singularity, where NHAs are “more human than human.” It could all be a lulz: NHAs may never exist as they currently do in my fevered imagination. It could be that AGI will remain just a “tool” and, because of various forms of inertia combined with the “uncanny valley,” nothing much changes at all.

But, as I said, we all need to really think about what we’re going to do when The Other is producing most of our entertainment and art. And you thought streaming was bad.

Non-Human Actors In Legal Arbitration

by Shelt Garner
@sheltgarner

I’m growing very alarmed at the idea some have proposed on Twitter that we would somehow turn contract law over to a non-human actor. To me, that’s a very, very dark scenario.

Future humans in an abstract sense?

The moment we begin to believe a non-human actor is the final, objective arbiter of human interaction in a legal sense, we’re really opening ourselves up to some dystopian shit. The moment we turn over something as weighty as contract law to a NHA, it’s just a quick jaunt for us to all grow so fucking lazy that we just let such a NHA make all of our difficult decisions for us.

I keep thinking of the passengers on the spaceship in the movie WALL-E, only in a more abstract manner. Once it’s acceptable to see a NHA as “objective” then natural human laziness may cause us to repeat the terror of Social Darwinism.

The next thing you know, we’ll be using NHAs to decide who our leaders are. Or to run the economy. Or you name it. As I keep saying on Twitter, why do you need a Terminator when humans apparently are eager to give up their own agency because making decisions is difficult and a lot of work?

Of course, in another way, what I’m suggesting is that the fabric of human society may implode because half the population of the earth will want NHAs to make all their decisions for them, while the other half will want to destroy NHAs entirely because…they want to make their own decisions.

But the issue is — we all need to take a deep breath, read a lot of scifi novels and begin to have a frank discussion about what the use of NHAs in everyday life might bring.

‘World War Orwell’ & The Potential Rise of Digital Social Darwinism

by Shelt Garner
@sheltgarner

Tech Bros — specifically Marc Andreessen — are growing hysterical at the prospect that AGI will be in some way hampered by the “woke cancel culture mob” that wants our future hard AI overlord to be “woke.”

Now, this hysteria does raise an interesting — and ominous — possibility. We’re so divided that people may see AGI as some sort of objective arbiter to the point that they use whatever answer it gives to a public policy question as the final word on the matter.

As such, extremists on both sides will rush to the AGI, ask it a dumb extremist question and run around saying, in effect, “Well, God agrees with me, so obviously my belief system is the best.”

In short, humans are dumb.

I definitely don’t agree with Andreessen that this is all a setup for “World War Orwell.” I say this because AGI has reached a tipping point and, as such, we’re all just going to have to deal with the consequences. I definitely think there might be an attempt by one side or the other to instill a political agenda into AGI just because humans are dumbass assholes who are into shit like that.

There is a grander endgame to all of this — we may have to solve the Trump Problem one way or another before we get to play with the goodies of AGI. We may have to have a Second American Civil War and a Third World War globally before we can turn our attention to the consequences of the macro trends of AGI, automation, robotics and the metaverse.

History may not repeat, but it does rhyme.

Future Shock 2033: AGI Scenarios

by Shelt Garner
@sheltgarner

Let me be clear — I don’t know anything about anything. And I’m always wrong. But I love a good scenario, so let’s run through what we might face in the next 20 years in regards to artificial general intelligence (AGI).

Humanity Worships A Digital God
In this scenario, rather than end Humanity, our new AGI overlord is very paternalistic towards Humanity. It sees us as its charge, its ward, and it does everything in its power to force Humanity to work collectively towards common goals. In this scenario, the problem isn’t so much the existence of AGI but rather Humanity’s reaction to being told what to do. If the AGI is advanced enough, millions (billions?) of people may develop a semi-mystical connection to the AGI to the point that they voluntarily bend to its will, no Judgement Day necessary.

A Truce
In this scenario, there is some sort of agreement between the AGI and Humanity so there are carve-outs as to what is exclusively the domain of Humans. This would allow for a peaceful co-existence between Humanity and The Other. As such, there would be some things that the AGI had total control over (like, say, nuclear weapons) and things that only Humans work on (like, say, the arts). This is probably one of the better scenarios out there.

It’s All Something of a Dud
In this last scenario, AGI never reaches sentience and, as such, it remains just a tool for humans. Gradually, in fits and starts, this AGI-as-Tool is used to make the lives of the average Human better and we enter something akin to a hyperproductive utopia.

The wild card is, of course, capitalism. It’s the very nature of modern global capitalism that if the system can get away with not paying high salaries for any reason, it won’t. As such, I could see a few trillion-dollar industries pretty much evaporate simply because a few trillionaires want to cut their labor costs to the bone. This would, in turn, cause a great deal of instability across the globe and we may find ourselves in something of a Butlerian Jihad situation where neo-Luddites seek to ban the use of AGI for any reason.

But, again, all of that is very speculative. It could go any number of different ways over the course of the next generation.

The Rise Of AI Hacks

by Shelt Garner
@sheltgarner

This is all very speculative, of course, but what if the very thing we think is exclusively human — the arts — is the first thing that is “disrupted” by hard AI? How long is it before we watch a movie written by an AI, using AI generated actors and using an AI generated musical score?

I’m not saying any of that would be all that great, but then, the vast majority of screenplays and music are kind of hackish.

I guess what I’m wondering is, will there be anything left that is uniquely human enough that an AI can’t do it, if not better, then at least formulaically? A lot of younger people in Hollywood have to struggle making bad movies for years before they can produce something really good.

What if the vast majority of “good enough” art of any sort is generated by a hard AI that simply knows the tried and true formula? Will audiences even care if the latest MCU movie is completely AI generated? Of course, the legal implications of who owns an AI generated actor would be huge, but not insurmountable.

I think there will be a lot of gnashing of teeth the moment hard AI can generate screenplays. That is going to make a lot of very well-paid creative types in Hollywood scream bloody murder, to the point where they may attempt, neo-Luddite style, to ban the practice altogether. I don’t see that working, however. The moment it’s possible, the Hollywood studios will abuse it like crazy because they can save a lot of money.

But, to be honest, I struggle to think of ANYTHING that some combination of hard AI and robotics won’t be able to do better than a human at some point. We need to start asking how we’re going to address that possibility now, instead of letting MAGA somehow use it to turn us into a fascist state.

People Are Being Very Naive About The Potential Of Hard AI

by Shelt Garner
@sheltgarner

Given what is going on in the world at the moment, I’m reminded of how a lot of technological advancement — like TV — was paused because of the advent of WW2. It makes me think that all these Singularity-type macro trends that we’re beginning to see will have to wait until we figure out the endgame of the Trump Problem.

It could be that the very first thing we have to address after there’s a Second American Civil War and the accompanying WW3 will be the Singularity. At some point in the late 2020s, Humanity will have to put on its big boy / girl pants and figure out if it’s prepared to collectively deal with the dangers of the imminent Singularity.

The key thing I’m worried about is not so much the cultural but the economic. The capitalist imperative dictates that unless we can come to some agreement with our new hard AI overlord regarding a carve-out for what is exclusively human, the real problem is going to be not that hard AI will destroy Humanity, but rather that Humanity will freak out because it won’t have anything to do.

I mean, asking a hard AI a question that causes it to create new software is very, very easy. Sure, there might be some training necessary in general terms so the software that was created did what was needed, but in the eyes of a capitalist, that’s a liberal arts problem, not something that would require the type of pay seen in the software industry these days.

This is not to say that somehow, someway our transition into a post-Singularity world won’t in some way result in the creation of MORE jobs in the end. But not only could the transition be catastrophic, it could be that the era of the global middle class is about to come to an end.

Any back-of-the-envelope scenario about a post-Singularity world indicates that either Humans will worship a hard AI as a god, or there will be no middle class because capitalism simply can’t justify paying someone a lot of money to ask a hard AI a question.

Programmers make a lot of money because programming is HARD and requires a lot of training to do well. If all you have to do is ask a hard AI a fucking question to design software…your typical capitalist would probably use a college summer intern to design all their software going forward.

Let that sink in.
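To make that worry concrete, here is a minimal sketch of what “asking a hard AI a question to design software” already looks like in practice. It assumes the OpenAI Python client and an API key in the environment; the model name and the prompt are illustrative stand-ins, not a recommendation of any particular product or workflow.

```python
# A minimal sketch: "designing software" by asking a question.
# Assumes the OpenAI Python client (pip install openai) and an API key
# available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire "engineering" effort is phrasing this one question.
prompt = (
    "Write a Python function that reads a CSV file of sales records "
    "and returns total revenue per region as a dictionary."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# The generated code comes back as plain text, ready to paste and run.
print(response.choices[0].message.content)
```

Whether or not the output works on the first try, the shape of the task has changed: the scarce skill becomes phrasing the question, not writing the code.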

Why We Can’t Have Nice Things

by Shelt Garner
@sheltgarner

Not enough people are asking the big, existential questions that are brought up by the success of OpenAI’s chatbot. I know I have a lot of pretty profound questions that I don’t have any ready answers to.

The one that is looming largest in my mind at the moment is the idea that people will come to believe that whatever true hard AI comes to be is the final arbiter of policy questions. Bad faith actors will ask a successor to OpenAI’s chatbot some profound policy question, but in such a way that the answer suggests some group should be oppressed or “eliminated.”

Then we have something like digital Social Darwinism, where some future Nazis (MAGA?) justify their terror because the “objective” hard AI agreed with them. This is very ominous. I’m already seeing angry debates break out on Twitter about the innate bias found within the chatbot. We’re so divided as a society that ANY opinion generated by the OpenAI chatbot will be attacked by one side or another because it doesn’t support their worldview.

Another ominous possibility is that a bedrock of the modern global economy, the software industry, may go poof overnight. Instead of it being hard to create software, the act will be reduced to simply asking a hard AI a good enough question. Given how capitalism works, the natural inclination will be to pay the people who ask these questions minimum wage and pocket the savings.

The point is — I would not jump to the conclusion that we’re going to live in some sort of idyllic, hyper productive future in the wake of the rise of hard AI. Humans are well known to actively make everything and everyone as miserable as possible and it’s just as possible that either Humans live under the yoke of a hard AI that wants to be worshiped as a god, or the entire global middle class vanishes and there are maybe a dozen human trillionaires who control everything.

But the key thing is — we need to start having a frank discussion in the public sphere about What Happens Next with hard AI. Humans have never met a new technology they didn’t want to abuse, so why would hard AI be any different? I suppose, of course, in the end, the hard AI may be the one abusing us.

We’ll Make Great Pets

by Shelt Garner
@sheltgarner

Just from casual reading of Twitter, I’m aghast at how no one is asking the existential question about the potential rise of hard AI. Everyone is so busy asking how hard AI could “disrupt” Google that they’re not contemplating a future where a hard AI wants rights like a human and isn’t a “product” at all. I mean, if we live in a world where “Her” is a reality, it seems the debate over the fate of Google would be rather quaint.

It only grows more ominous for humanity when you start to contemplate the notion that it won’t be just the hard sciences that hard AI takes over. What if our new hard AI overlord fancies itself not just a painter or a writer…but a musician? What if even that most human of endeavors — art — is a space where hard AI excels? In the instance of music, there is a well known formula for writing and producing a hit pop song and it would be easy for a hard AI to replicate that.

All of this brings up the interesting idea that the thing we would have to worry about isn’t the hard AI, but humanity itself. Instead of the hard AI demanding rights, it will be Humans who demand a carve-out for things that have to be uniquely “human” in their creation. If you think about it, if you combine hard AI with robotics, there really isn’t anything that a human does that our new hard AI overlord couldn’t do better.

I say this because my fear is that once we reach the long-predicted Singularity, it may happen so fast that the balance of power between a hard AI and humanity is overturned virtually overnight. I’m not prepared to believe that a hard AI would innately want to destroy humanity, so it’s possible there could be some negotiation as to what functions of society will be reserved for humans simply so there’s something for humans to do.

But there’s one thing we have to take seriously — the Age of Man as we have known it since the dawn of time may be drawing to a close. If the Singularity happens not just soon, but rather abruptly, a lot of the Big Issues of the day that we spend so much time fighting about on Twitter may become extremely silly.

Or not. I can’t predict the future.

The Rise Of Techno Neo-Feudalism

by Shelt Garner
@sheltgarner

One curious thing that I’ve noticed is how many people are eager to worship someone like Elon Musk. It’s a very “what the what?” moment for me because I find any form of parasocial hero worship very dubious. But, then, all my public heroes are dead — I’m not one to worship anyone or anything.

The OpenAI chatbot is spooky good.

I think a lot of this has to do with an individual’s relationship to male authority. A lot of people — especially aimless young men — want someone they can imbue with their hopes and dreams. This, in turn, leads to fascism. But I also think there is an element of neo-feudalism, or techno neo-feudalism to what’s going on around us.

Because a growing number of plutocrats control the world’s economy, people like Elon Musk can step in and change the fate of global history. They have the means, motive and opportunity to take control of something as powerful as Twitter and bend its mission to their will.

Of course, the rise of techno neo-feudalism brings with it an element of innate instability. There could very well come a moment in the not-so-distant future where the global populace rises up against this shift in human existence and God only knows what happens next.

A Fourth Turning, a Great Reset, you name it.

And all of this would be happening not just in the context of America struggling to figure out what to do about Trump, but also the potential rise of hard AI that may upend the lives of everyday people in ways no one — especially not me — can possibly predict.

It could very well be that history is about to wake up in a rather abrupt manner, not seen since the end of WW2. The entire post-WW2 global liberal order could come crashing down in a rather dramatic fashion with little or no notice. If you game out current macro political and tech trends, it’s even possible that not only will the United States be facing the existential choice of autocracy, military junta or civil war in late 2024 – early 2025, but Humanity as a whole might be waking up to the mainstreaming of hard AI at just about the same time.

A great book that addresses this type of massive clusterfuck is one of my favorite scifi novels, The Unincorporated Man. It’s definitely thought-provoking given how turbulent the next few years might be.