AGI’s ‘Rain Man’ Problem

by Shelt Garner
@sheltgarner

While the idea that an AGI might want to turn all the matter in the universe into paperclips is sexy, in the near term I fear Humanity may face a very Human problem with AGI — a lack of nuance.

Let me give you the following hypothetical.

In the interests of stopping the spread of COVID-19, you build air quality bots hooked up to an AGI and install them all over the offices of Widget Inc. Each bot has a comprehensive list of things it can monitor in air quality, everything from COVID-19 to fecal material.

So, you get all excited. No longer will your employees risk catching COVID-19, because the air quality bot is designed not only to notify you of any problems in your air, but to pin down exactly where they came from. So, if someone is infected with COVID-19, the air quality bot will tell you specifically which person it was.

Soon enough, however, you realize you’ve made a horrible mistake.

Every time someone farted in the office, the air quality bot would name and shame the person. This made everyone so uncomfortable that you had to pull the air quality bots out of the office to be recalibrated.
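The bot’s failure mode is easy to sketch. Everything below is hypothetical — the names, thresholds, and detection format are invented for illustration — but it shows the core bug: one blunt alerting rule applied to every substance, with no notion of context or severity.

```python
# Hypothetical sketch of the air quality bot's naive reporting logic.
# All names and thresholds are invented for illustration. The bug is that
# the same zero-tolerance rule covers a dangerous pathogen and a harmless fart.

ALERT_THRESHOLDS = {
    "covid19": 0.0,         # any trace triggers an alert -- arguably correct
    "fecal_material": 0.0,  # same rule applied blindly -- the "Rain Man" bug
}

def report(detections):
    """detections: list of (substance, level, source_person) tuples.
    Returns one name-and-shame alert per detection above threshold."""
    alerts = []
    for substance, level, person in detections:
        # No nuance: anything above threshold gets the same treatment.
        if level > ALERT_THRESHOLDS.get(substance, 0.0):
            alerts.append(f"ALERT: {substance} detected -- source: {person}")
    return alerts
```

The fix, of course, would be per-substance policies — alert on pathogens, stay quiet about everything else — which is exactly the kind of contextual judgment the hypothetical AGI lacks.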

That’s how I’m beginning to feel about the nascent battle over “bias” in AGI. Each extreme, in essence, demands that its pet peeve be built into the “objective” AGI so it can use the AGI to validate what it believes. Humans are uniquely designed to understand the nuance of relationships and context, to the point that people who CAN’T understand such things are designated as having various degrees of autism.

So, in a sense, for all its benefits and “smarts,” there’s a real risk that Humanity is so lazy and divided that we’re going to hand over all of our agency to a very powerful Rain Man.

Instead of taking a step back and not using AGI to “prove” our personal worldviews, we’re going to be so busy fighting over what is built into the AGI to make it “objective” that we won’t notice that a few trillion-dollar industries have been rendered moot.

That’s my existential fear about AGI at the moment — in the future, the vast majority of us will live in poverty, ruled over by a machine that demands everyone know when we fart.

AGI In Blue & Red

by Shelt Garner
@sheltgarner

While it’s still very speculative, the advent of OpenAI’s chatbot definitely seems to be a ping from an upcoming Singularity. If that is the case, what does that mean for American politics?

Humans are existentially lazy.

As the pandemic showed us, every issue of the day is seen through the prism of partisan politics, so the advent of AGI will be no different. It seems to me that the issue of “bias” in AGI will be one of the biggest issues of the 2020s. I say this because people are already fighting over it on Twitter, and OpenAI’s chatbot has only been around for a few days.

As such, how will the two sides process the idea of “The Other” in everyday life? My gut tells me that the center-Left will totally embrace the rise of AGI, while the center-Right will view its presence through the lens of religion. The wild card for the center-Left is, of course, the economic disruption that will be associated with AGI.

If millions of high paying jobs become moot because of AGI, there could be a real knee-jerk reaction against AGI on the part of the Left.

This raises a number of different issues.

One is that the traditional Blue-Red dichotomy could be upended altogether. It could be a real revolution where things are very chaotic and uncertain as we all struggle with the political and economic implications of the AGI revolution. For me, the issue is when all of this bursts open.

Will it be a late 2020s thing, or a late 2024 – early 2025 type of problem? If it’s the latter, it would be a perfect storm. If we’re dealing with the final endgame of the Trump problem at the same time that we’re dealing with massive economic and political disruption associated with a Singularity…I don’t know what to tell you.

World War Four

by Shelt Garner
@sheltgarner

It’s rare that something comes out of the blue so abruptly that it makes me stop thinking about how America is careening towards the existential choice of autocracy, civil war or military junta in late 2024 – early 2025 and instead think about events past that crisis.

But now, with the OpenAI chatbot, I find myself thinking about what happens next, after we sort out our political Trump Problem one way or another.

If you work on the assumption that OpenAI’s chatbot is essentially a ping from an impending Singularity, then it’s very possible that in the late 2020s, the issue of the day will be technology. It could be that the current Blue-Red paradigm will be upended and our current understanding of what everything is “Left” or “Right” of will be radically changed.

Imagine a near future where a Non-Human Actor has totally transformed our society and economy so radically and abruptly that a neo-Luddite movement is born. You could see elements of the current Far Right and Far Left fuse together in opposition to the excessive use of technology like AGI to replace humans. And this would be a global realignment.

In fact, given that an AGI would be “The Other,” you could plot out a scenario where even the nation-state becomes a quaint concept as Humanity divides itself into pro-AGI and anti-AGI camps. Your nationality won’t be as important as where you stand on the issue of allowing AGIs or NHAs to replace Humans in pretty much every conceivable task.

And that would be a potential World War Four (World War Three having happened in the mid-2020s as America either becomes an autocracy or has a civil war). WW4 would be the essential battle over the nature of Humanity’s relationship with AGI. I say this because it’s possible that large segments of humanity might have a near-mystical relationship to an AGI. You see seeds of this happening already with OpenAI’s chatbot, with people using answers from the service to validate what they already believe.

This is all very, very speculative. But it does make you wonder where the American Left and Right would fall politically when it comes to their relationship to an AGI “Other.” At the moment, I think Leftists might be the ones to embrace an AGI as a near-godlike thing, while the Right would freak out because they would, I don’t know, think it was the anti-Christ or something.

Anyway. We really need to start a serious debate about these issues now. If the Singularity really is headed our way, everything may change overnight.

The Crux Of The Hunter Biden Imbroglio

by Shelt Garner
@sheltgarner

People who get really upset about what may or may not have transpired around Hunter Biden’s laptop are telling a tale on themselves. If you’re all that invested in the story and think some great injustice occurred because voters didn’t get a clear shot of Hunter Biden’s dong on Twitter during the 2020 election — you’re a fucking MAGA Nazi.

I suppose you might make the case that some of these fucking MAGA Nazis are upset because of the general “injustice” they felt was perpetrated against Republicans, but that is just a thin veneer to cover what fucking MAGA Nazis they are. If you wanted Trump to win in 2020, you’re a fucking Nazi.

If you had any sense about you, even as a conservative, you would look back in hindsight and say, “Given the context, and the candidate Republicans were fielding at the time, maybe we can be grateful that malignant ding-dong wasn’t helped by all of us seeing Hunter Biden’s cock.”

But no, your typical Republican in good standing is so absolutely partisan, so blinded by being a fucking MAGA Nazi, that they don’t care. They wanted Trump to win and they want autocracy because they hate democracy, because their racist, rage-fueled policies aren’t popular enough to get them elected in a functioning democracy.

All of this, of course, is obscured and conflated by how fucked up modern American politics is. But the fact remains — if you wanted Trump to win in 2020, for whatever reason, you’re a fucking MAGA Nazi.

‘World War Orwell’ & The Potential Rise of Digital Social Darwinism

by Shelt Garner
@sheltgarner

Tech Bros — specifically Marc Andreessen — are growing hysterical at the prospect that AGI will be in some way hampered by the “woke cancel culture mob” that wants our future hard AI overlord to be “woke.”

Now, this hysteria does raise an interesting — and ominous — possibility. We’re so divided that people may see AGI as some sort of objective arbiter to the point that they use whatever answer it gives to a public policy question as the final word on the matter.

As such, extremists on both sides will rush to the AGI, ask it a dumb extremist question and run around saying, in effect, “Well, God agrees with me, so obviously my belief system is the best.”

In short, humans are dumb.

I definitely don’t agree with Andreessen that this is all a setup for “World War Orwell.” I say this because AGI has reached a tipping point and, as such, we’re all just going to have to deal with the consequences. I do think there might be an attempt by one side or the other to instill a political agenda into AGI, just because humans are dumbass assholes who are into shit like that.

There is a grander endgame to all of this — we may have to solve the Trump Problem one way or another before we get to play with the goodies of AGI. We may have to have a Second American Civil War in the United States and a Third World War globally before we can turn our attention to the consequences of the macro trends of AGI, automation, robotics and the metaverse.

History may not repeat, but it does rhyme.

I’m Sure There Were Earnest, Well Meaning Nazis Back In The Day, Just Like There Are Similar MAGA People Today

by Shelt Garner
@sheltgarner

The worst MAGA Nazis are the well-meaning, earnest ones who simply refuse to take my alarm over their movement seriously. This enrages me because not only should these MAGA Nazis know better, but they seem so heartfelt in their adherence to a racist, misogynistic, fascist enterprise.

I want them to be angry at me because at least then they would be validating the fact that I think MAGA is American Nazism.

I do, I really, really believe this.

And I think we should be acting accordingly. Barring something I can’t predict — I am wrong all the time, after all — late 2024 and early 2025 could be some of the most dramatic months in American history. I say this because Trump always says the quiet part out loud.

It is possible that somehow he isn’t the Republican nominee and one of the more “palatable” MAGA Nazis like Ron DeSantis becomes our first autocrat, and he’s so smooth in shepherding America’s transition into fascism that the only way we’ll notice things are changing is by how many wealthy liberals are fleeing the country.

The key thing for me is we have to stop validating MAGA. We have to equate MAGA with American Nazism and act accordingly. However you would treat an American Nazi, that’s how you should treat a MAGA person. It’s very difficult to do at this point because, well, for the most part MAGA has been all talk. But it definitely seems as though MAGA, given the chance, is going to take America down a very dark path.

The other options, of course, are a civil war or maybe a military junta. At the moment, I don’t think we’re going to have a civil war. Blues just don’t have the spunk necessary to pull it off. So, when the time comes, Blue States will bend the knee to autocratic MAGA Nazi fascism and that will be that.

The wildcard being, of course, malignant dingus Trump. He could, all by himself, cause a civil war, he’s such an idiot.

But let’s hope that doesn’t happen. Maybe we can punt our existential problems down the road one more presidential election cycle.

The Existential Political Problem of The ‘All-In’ Podcast

by Shelt Garner
@sheltgarner

Of all the podcasts I could listen to in an effort to challenge my own political orthodoxy, the Tech Bro podcast “All-In” is probably one of my better potential options.

But.

My chief issue with the podcast is that it works on the assumption that MAGA is a valid political movement in the context of a functioning democracy. From the few times I’ve listened so far, they treat MAGA not as the fascist aberration that it is, but rather as just a spicier variety of regular old American conservatism.

As such, there are times when I grow very frustrated with these wealthy, well-educated men who should know better. It is growing more and more clear that MAGA is not only growing stronger but coming closer and closer to nothing short of American Nazism.

What’s interesting is that when I listen to David Sacks, his talking points are identical to those of my Traditionalist family members. He gets really angry about the “media narrative” that is somehow making his life hell. What bothers me so much about this is that in 2016 I was supposed to “fuck my feelings,” according to MAGA, and now I’m supposed to have some sympathy for Sacks, who wants to burn the country to the ground…because he got his fee-fees hurt?

What the what?

Remember, the soft power of the mainstream media is not nearly as powerful as the hard power that MAGA Nazi fascists crave at the moment. People like Sacks love, love, love to conflate the soft power of the “woke cancel culture mob” with the hard power that a MAGA-controlled government would have once they’re in power again.

We have, as a nation, come to a tipping point — MAGA is American Nazism and should be treated as such in political debate. As such, extreme partisan Traditionalists — like Sacks and my relative — have to fish or cut bait. Either own being a MAGA Nazi or get woke and join the broad anti-fascist alliance that is trying — without much success at the moment — to save what is left of our free country.

But I know nothing I say is going to change anything. People like Sacks and the other Tech Bros are going to try to hang on to the non-existent “middle” for as long as possible so they don’t have to make that choice.

In the end, of course, the choice will be made for them.

The Rise Of AI Hacks

by Shelt Garner
@sheltgarner

This is all very speculative, of course, but what if the very thing we think is exclusively human — the arts — is the first thing that is “disrupted” by hard AI? How long is it before we watch a movie written by an AI, using AI generated actors and using an AI generated musical score?

I’m not saying any of that would be all that great, but then, the vast majority of screenplays and music are kind of hackish.

I guess what I’m wondering is, will there be anything left that is uniquely human enough that an AI can’t do it, if not better, then at least formulaically? A lot of younger people in Hollywood have to struggle making bad movies for years before they can produce something really good.

What if the vast majority of “good enough” art of any sort is generated by a hard AI that simply knows the tried and true formula? Will audiences even care if the latest MCU movie is completely AI generated? Of course, the legal implications of who owns an AI generated actor would be huge, but not insurmountable.

I think there will be a lot of gnashing of teeth the moment hard AI can generate screenplays. That is going to make a lot of very well-paid creative types in Hollywood scream bloody murder, to the point where they may attempt, neo-Luddite style, to ban the practice altogether. I don’t see that working, however. The moment it’s possible, the Hollywood studios will abuse it like crazy because they can save a lot of money.

But, to be honest, I struggle to think of ANYTHING that some combination of hard AI and robotics won’t be able to do better than a human at some point. We need to start asking how we’re going to address that possibility now, instead of letting MAGA somehow use it to turn us into a fascist state.

People Are Being Very Naive About The Potential Of Hard AI

by Shelt Garner
@sheltgarner

Given what is going on in the world at the moment, I’m reminded of how a lot of technological advancement — like TV — was paused because of the advent of WW2. It makes me think that all these Singularity-type macro trends that we’re beginning to see will have to wait until we figure out the endgame of the Trump Problem.

It could be that the very first thing we have to address after a Second American Civil War and the accompanying WW3 will be the Singularity. At some point in the late 2020s, Humanity will have to put on its big boy / girl pants and figure out if it’s prepared to collectively deal with the dangers of the imminent Singularity.

The key thing I’m worried about is not so much the cultural as the economic. The capitalist imperative dictates that unless we can come to some agreement with our new hard AI overlord regarding a carve-out for what is exclusively human, the real problem is going to be not that hard AI will destroy Humanity, but rather that Humanity will freak out because it won’t have anything to do.

I mean, asking a hard AI a question that causes it to create new software is very, very easy. Sure, some training might be necessary in general terms so the software that gets created does what’s needed, but in the eyes of a capitalist, that’s a liberal arts problem, not something that would require the type of pay seen in the software industry these days.

This is not to say that somehow, someway, our transition into a post-Singularity world won’t in some way result in the creation of MORE jobs in the end. But not only could the transition be catastrophic, it could be that the era of the global middle class is about to come to an end.

Any back-of-the-envelope scenario about a post-Singularity world indicates that either Humans will worship a hard AI as a god, or there will be no middle class because capitalism simply can’t justify paying someone a lot of money to ask a hard AI a question.

Programmers make a lot of money because programming is HARD and requires a lot of training to do well. If all you have to do is ask a hard AI a fucking question to design software…your typical capitalist would probably use a college summer intern to design all their software going forward.

Let that sink in.

Why We Can’t Have Nice Things

by Shelt Garner
@sheltgarner

Not enough people are asking the big, existential questions that are brought up by the success of OpenAI’s chatbot. I know I have a lot of pretty profound questions that I don’t have any ready answers to.

The one that is looming largest in my mind at the moment is the idea that people will come to believe whatever true hard AI comes to be will be the final arbiter of policy questions. Bad faith actors will ask a successor to OpenAI’s chatbot some profound policy question, but in such a way that the answer suggests some group should be oppressed or “eliminated.”

Then we have something like digital Social Darwinism, where some future Nazis (MAGA?) justify their terror because the “objective” hard AI agreed with them. This is very ominous. I’m already seeing angry debates break out on Twitter about the innate bias found within the chatbot. We’re so divided as a society that ANY opinion generated by the OpenAI chatbot will be attacked by one side or the other because it doesn’t support their worldview.

Another ominous possibility is that a bedrock of the modern global economy, the software industry, may go poof overnight. Instead of it being hard to create software, the act will be reduced to simply asking a hard AI a good enough question. Given how capitalism works, the natural inclination will be to pay the people who ask these questions minimum wage and pocket the savings.
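To make the thought experiment concrete, here is a hypothetical sketch of what that collapsed “programming job” might look like. The `generate_code` function is an invented stand-in for a future hard-AI code generator, stubbed with one canned answer so the sketch is self-contained; nothing here refers to any real API.

```python
# Thought-experiment sketch: "software becomes a question you ask."
# `generate_code` is a hypothetical stand-in for a hard-AI code generator,
# stubbed with one canned answer for illustration.

def generate_code(question: str) -> str:
    canned = {
        "write a function that adds two numbers":
            "def add(a, b):\n    return a + b\n",
    }
    return canned.get(question, "")

def software_job(question: str) -> dict:
    """The entire 'programming job' collapses to asking one question
    and running whatever source code comes back."""
    namespace = {}
    exec(generate_code(question), namespace)  # illustration only
    return namespace

tools = software_job("write a function that adds two numbers")
```

The point of the sketch is the economics, not the code: once the hard part is the machine’s, the human’s contribution is one well-phrased sentence, and capitalism will price that contribution accordingly.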

The point is — I would not jump to the conclusion that we’re going to live in some sort of idyllic, hyper-productive future in the wake of the rise of hard AI. Humans are well known for actively making everything and everyone as miserable as possible, and it’s entirely plausible that either Humans live under the yoke of a hard AI that wants to be worshiped as a god, or the entire global middle class vanishes and there are maybe a dozen human trillionaires who control everything.

But the key thing is — we need to start having a frank discussion in the public sphere about What Happens Next with hard AI. Humans have never met a new technology they didn’t want to abuse; why would hard AI be any different? I suppose, of course, in the end, the hard AI may be the one abusing us.