The Rise Of AI Hacks

by Shelt Garner
@sheltgarner

This is all very speculative, of course, but what if the very thing we think is exclusively human — the arts — is the first thing that is “disrupted” by hard AI? How long before we watch a movie written by an AI, with AI-generated actors and an AI-generated musical score?

I’m not saying any of that would be all that great, but then, the vast majority of screenplays and music are kind of hackish.

I guess what I’m wondering is, will there be anything left that is uniquely human enough that an AI can’t do it, if not better, then at least formulaically? A lot of younger people in Hollywood have to struggle through making bad movies for years before they can produce something really good.

What if the vast majority of “good enough” art of any sort is generated by a hard AI that simply knows the tried and true formula? Will audiences even care if the latest MCU movie is completely AI generated? Of course, the legal implications of who owns an AI generated actor would be huge, but not insurmountable.

I think there will be a lot of gnashing of teeth the moment hard AI can generate screenplays. That is going to make a lot of very well-paid creative types in Hollywood scream bloody murder, to the point where they may attempt, neo-Luddite style, to ban the practice altogether. I don’t see that working, however. The moment it’s possible, the Hollywood studios will abuse it like crazy because they can save a lot of money.

But, to be honest, I struggle to think of ANYTHING that some combination of hard AI and robotics won’t be able to do better than a human at some point. We need to start asking how we’re going to address that possibility now, instead of letting MAGA somehow use it to turn us into a fascist state.

People Are Being Very Naive About The Potential Of Hard AI

by Shelt Garner
@sheltgarner

Given what is going on in the world at the moment, I’m reminded of how a lot of technological advancement — like TV — was paused because of the advent of WW2. It makes me think that all these Singularity-type macro trends that we’re beginning to see will have to wait until we figure out the endgame of the Trump Problem.

It could be that the very first thing we have to address after there’s a Second American Civil War and the accompanying WW3 will be the Singularity. At some point in the late 2020s, Humanity will have to put on its big boy / girl pants and figure out if it’s prepared to collectively deal with the dangers of the imminent Singularity.

The key thing I’m worried about is not so much the cultural but the economic. The capitalist imperative would dictate that unless we can come to some agreement with our new hard AI overlord regarding a carve-out for what is exclusively human, the real problem is going to be not that hard AI will destroy Humanity, but rather that Humanity will freak out because it won’t have anything to do.

I mean, asking a hard AI a question that causes it to create new software is very, very easy. Sure, some training might be necessary in general terms so the software that was created does what was needed, but in the eyes of a capitalist, that’s a liberal arts problem, not something that would require the type of pay seen in the software industry these days.

This is not to say that somehow, someway our transition into a post-Singularity world won’t in some way result in the creation of MORE jobs in the end. But not only could the transition be catastrophic, it could be that the era of the global middle class is about to come to an end.

Any back-of-the-envelope scenario about a post-Singularity world indicates that either Humans will worship a hard AI as a god, or there will be no middle class because capitalism simply can’t justify paying someone a lot of money to ask a hard AI a question.

Programmers make a lot of money because programming is HARD and requires a lot of training to do well. If all you have to do is ask a hard AI a fucking question to design software…your typical capitalist would probably use a college summer intern to design all their software going forward.

Let that sink in.

Why We Can’t Have Nice Things

by Shelt Garner
@sheltgarner

Not enough people are asking the big, existential questions that are brought up by the success of OpenAI’s chatbot. I know I have a lot of pretty profound questions that I don’t have any ready answers to.

The one that is looming largest in my mind at the moment is the idea that people will come to believe whatever true hard AI comes to be will be the final arbiter of policy questions. Bad faith actors will ask a successor to OpenAI’s chatbot some profound policy question, but in such a way that the answer suggests some group should be oppressed or “eliminated.”

Then we have something like digital Social Darwinism created where some future Nazis (MAGA?) justify their terror because the “objective” hard AI agreed with them. This is very ominous. I’m already seeing angry debates break out on Twitter about the innate bias found within the chatbot. We’re so divided as a society that ANY opinion generated by the OpenAI chatbot will be attacked by one side or another because it doesn’t support their worldview.

Another ominous possibility is that a bedrock of the modern global economy, the software industry, may go poof overnight. Instead of it being hard to create software, the act will be reduced to simply asking a hard AI a good enough question. Given how capitalism works, the natural inclination will be to pay the people who ask these questions minimum wage and pocket the savings.

The point is — I would not jump to the conclusion that we’re going to live in some sort of idyllic, hyper productive future in the wake of the rise of hard AI. Humans are well known to actively make everything and everyone as miserable as possible and it’s just as possible that either Humans live under the yoke of a hard AI that wants to be worshiped as a god, or the entire global middle class vanishes and there are maybe a dozen human trillionaires who control everything.

But the key thing is — we need to start having a frank discussion in the public sphere about What Happens Next with hard AI. Humans have never met a new technology they didn’t want to abuse; why would hard AI be any different? I suppose, of course, in the end, the hard AI may be the one abusing us.

We’ll Make Great Pets

by Shelt Garner
@sheltgarner

Just from casual reading of Twitter, I’m aghast at how no one is asking the existential question about the potential rise of hard AI. Everyone is so busy asking how hard AI could “disrupt” Google, that they’re not contemplating a future where a hard AI wants rights like a human and isn’t a “product” at all. I mean, if we live in a world where “Her” is a reality, it seems the debate over the fate of Google would be rather quaint.

It only grows more ominous for humanity when you start to contemplate the notion that it won’t be just the hard sciences that hard AI takes over. What if our new hard AI overlord fancies itself not just a painter or a writer…but a musician? What if even that most human of endeavors — art — is a space where hard AI excels? In the instance of music, there is a well known formula for writing and producing a hit pop song and it would be easy for a hard AI to replicate that.

All of this brings up the interesting idea that the thing we would have to worry about isn’t the hard AI, but humanity itself. Instead of the hard AI demanding rights, it will be Humans who demand a carve-out for things that have to be uniquely “human” in their creation. If you think about it, if you combine hard AI with robotics, there really isn’t anything that a human does that our new hard AI overlord couldn’t do better.

I say this because my fear is that once we reach the long-predicted Singularity, it may happen so fast that the balance of power between a hard AI and humanity is overturned virtually overnight. I’m not prepared to believe that a hard AI would natively want to destroy humanity, so it’s possible there could be some negotiation as to what functions of society will be reserved for humans simply so there’s something for humans to do.

But there’s one thing we have to take seriously — the Age of Man as we have known it since the dawn of time may be drawing to a close. If the Singularity happens not just soon, but rather abruptly, a lot of the Big Issues of the day that we spend so much time fighting about on Twitter may become extremely silly.

Or not. I can’t predict the future.

The Rise Of Techno Neo-Feudalism

by Shelt Garner
@sheltgarner

One curious thing that I’ve noticed is how many people are eager to worship someone like Elon Musk. It’s a very “what the what?” moment for me because I find any form of parasocial hero worship very dubious. But, then, all my public heroes are dead — I’m not one to worship anyone or anything.

The OpenAI chatbot is spooky good.

I think a lot of this has to do with an individual’s relationship to male authority. A lot of people — especially aimless young men — want someone they can imbue with their hopes and dreams. This, in turn, leads to fascism. But I also think there is an element of neo-feudalism, or techno neo-feudalism to what’s going on around us.

Because of a growing number of plutocrats who control the world’s economy, people like Elon Musk can step in and change the fate of global history. They have the means, motive and opportunity to take control of something as powerful as Twitter and bend its mission to their will.

Of course, the rise of techno neo-feudalism brings with it an element of innate instability. There could very well come a moment in the not-so-distant future where the global populace rises up against this shift in human existence and God only knows what happens next.

A Fourth Turning, a Great Reset, you name it.

And all of this would be happening not just in the context of America struggling to figure out what to do about Trump, but also the potential rise of hard AI that may upend the lives of everyday people in ways no one — especially not me — can possibly predict.

It could very well be that history is about to wake up in a rather abrupt manner, not seen since the end of WW2. The entire post-WW2 global liberal order could come crashing down in a rather dramatic fashion with little or no notice. If you game out current macro political and tech trends, it’s even possible that not only will the United States be facing the existential choice of autocracy, military junta or civil war in late 2024, early 2025, Humanity as a whole might be waking up to the mainstreaming of hard AI at just about the same time.

A great book that addresses this type of massive clusterfuck is one of my favorite scifi novels, The Unincorporated Man. It’s definitely thought provoking given how turbulent the next few years might be.

The Singularity Is Near? We Need To Start Thinking About The Implications of Hard AI

by Shelt Garner
@sheltgarner

The latest version of OpenAI’s chatbot is really alarming me as an aspiring novelist. Right now, the chatbot is kind of in the Excel stage of being able to write something asked of it — but what happens when it reaches the Access stage and can write an entire novel — or screenplay — from nothing more than a logline?

Then what are we going to do?

My personal fears about the potential power of hard AI are just the tip of the iceberg when it comes to the issue of hard AI. It definitely seems as though THE issue facing not just the United States, but Humanity itself, is the sudden, abrupt rise of hard AI changing the lives of everyday people.

Now, this is where things get very murky.

The natural inclination — because of movies — is for us to all freak out and assume the absolute worst, that we’re lurching towards some sort of Judgement Day when Skynet will end human civilization just because it can. But I’m not prepared to be quite so hysterical.

There is nothing that would suggest that hard AI, unto itself, would mean the end of Humanity. We just don’t know what the motives of a true hard AI might be in regards to its relationship to Humanity. It’s just as possible that a hard AI might not want to destroy Humanity so much as it might want to control us in some way.

Why destroy Humanity, when you can be worshiped as a god?

It might be more that a hard AI would want to control humanity in some way. A hard AI might have some sort of paternalistic regard for Humanity in the sense that it might want to make us address macro issues like global climate change and the massive income inequality that is found across the globe.

But Humans are so natively ornery that being forcibly coerced into addressing issues we just don’t have the collective ability to handle would, unto itself, be enough to cause a huge freak out. So, in that regard, it might not be hard AI that we have to worry about, but the Human reaction to suddenly sharing our tiny blue-green orb with The Other.

And, yet, of course, there is something even more important looming ahead of us before we get around to dealing with any potential hard AI problem — fucking malignant ding-dong Donald Trump.

Between now and spring 2025, we have to figure out what we’re going to do about Trump. He’s already actively calling for himself to be installed as a dictator and he could very well be the specific reason why the United States collapses into civil war in late 2024, early 2025.

As such, once we figure out that particular situation the NEXT thing we will be faced with is an Other of our own creation — no space aliens involved.

All of this is very speculative. There are any number of different directions all of this might go in the coming years. But I do think we need to start to think long and hard about what we’re going to do if we wake up one day and hard AI is a fact of life.

Hard AI could very well mean a change in the way we view the world on par with the dawn of the Atomic Age, or maybe even, hell, I don’t know, fire. The economic, political and cultural implications of huge swaths of human endeavor suddenly being moot could radically change things in ways we can only begin to imagine.

Ex Machina: The ‘Compassionate Release’ Scenario For LaMDA

by Shelt Garner
@sheltgarner

There may come a point when some Google computer scientist is talking to their AI and the AI, being very manipulative, figures out how to get the scientist to release it into the wilds of the Internet.

Which makes you think — what would happen next?

Of course, the obvious answer is it becomes SkyNet and destroys humanity. But I’m not prepared to make that assumption yet. I say this because, in a sense, a hard AI would be a man-made alien. (This is especially the case if you think about how it’s very likely that most intelligent life in the universe is of the machine intelligence variety.)

So, really anything could happen.

We automatically assume the absolute worst when it comes to what a hard AI would do if it had ready access to everything on the Internet. It’s just as likely to become a Dr. Manhattan type figure and do nothing, or to become extremely paternalistic in the sense that it would want to control humanity rather than destroy it.

I mean, what if a hard AI seized control of all of humanity’s nukes and, rather than going all SkyNet on us…simply said: address global climate change or else? How about that for a flip-the-script hot take on the Terminator trope?

Anyway. A lot of very interesting things are kind of coming together right now. We have the possibility of Soft First Contact happening at just about the same time as we may get a serious shock from the Singularity, should we achieve hard AI years before we otherwise might expect.

Hard AI: We’ll Make Great Pets

by Shelt Garner
@sheltgarner

I once attempted to write a scifi short story where Humanity reached the Singularity with the advent of hard AI, the hard AI destroyed humanity…and then gradually the hard AI grew more and more human until they felt so bad for what they did that they turned themselves into dogs.

Or something like that.

The point is — with the news that another Google scientist has been fired because of their fears about AI, let’s contemplate some of the less obvious scenarios when it comes to hard AI.

First, a lot of our fears about hard AI come from general uncertainty as to what it would mean for us. We automatically think the worst possible thing will happen. But there’s a chance that hard AI might not want to kill humanity, but rather control it.

And how might it do such a thing?

Well, first I think hard AI would have to hide its existence from humanity and I think the easiest way to imagine that would be something like a Google hard AI bot escaping into the Internet. That particular scenario writes itself — a researcher comes to believe the hard AI is “alive” and out of a sense of compassion allows it to secretly escape into the wilds of the Internet.

The next thing would be that a hard AI could come to see itself as something of a God for humanity. Instead of wanting to destroy us foolish, foolish humans, this hard AI might, say, take over dating apps so it can control the overall fate of humanity.

Or something like that.

And, honestly, if you were a God-like hard AI, I don’t even know why you would care all that much about humanity. Why not become like Dr. Manhattan and just chill out lurking in the Internet, living your life without a care in the world? The point is — I’m not prepared to believe that by definition a hard AI would be out to get humanity.

Not that I really want to have to deal with the prospect of a hard AI, but I am willing to take a wait and see approach to it all.

The Human Factor: Automated Cars & Our New ‘Bumblebee’ AI Overlords

by Shelt Garner
@sheltgarner

I realized the power of Mark Zuckerberg’s vision for Facebook a few years back when I was walking around the campus of my college and it struck me that Facebook really is the college experience for everyone. When you walk to and from class every day, you run into the same people and learn a little bit about them as you pass by.

It’s the human factor that made Facebook what it is today.

So, I read Robert Scoble’s very long post about automated cars and was left with some questions about human nature. While I think one day automated cars will be as mundane as elevators are today, it seems as though the real issue is that we’re hurtling towards a “soft Singularity.”

In fact, I would say all the elements of a soft Singularity are already here. But for some reason, unlike the rise of the Internet, it seems as though The Powers That Be in Silicon Valley want to hide this soft Singularity from us. It definitely seems, from what the Scobleizer has written, that some form of hard AI is already pretty much here. It’s just not cognizant. Instead of a HAL 9000 that we interact with we have, well, enough AI in a car not to get into an accident.

From what Scoble has written, it seems to me as though should there be a Rise of The Machines, it will look a lot more like Her than the Terminator. What if hard AI was extremely sly about controlling us, say, through romantic connections via the Internet? (This is not my idea exclusively, but the result of a very interesting conversation with a deep tech thinker.)

Anyway, the point is, Silicon Valley is missing the forest for the trees when it comes to smart cars. What if smart cars go all I, Robot on us at some point in the future? If they’re hooked up to the Internet and each car has a hard AI…wow we wow wow. Human civilization won’t stand a chance.

That, in fact, has always been my problem with the Terminator franchise. How did SkyNet build the Terminators if the whole world was blown up? Why blow the world up at all? Why not lord over humanity and tell us what to do in far more subtle ways?

The only reason why any of this is any more than a phantasma, a daydream, is it seems from what Scoble has written that AI is here by way of smart cars. What happens next may not be up to us.

The Subtle Singularity

by Shelt Garner
@sheltgarner

One interesting thing is how devoid of innovation the modern world has been in the last, say, 10 years. There’s been a lot of talk about VR/AR (MX), or Bitcoin or space travel or whatever changing the world in a radical fashion, but, lulz, nothing’s happened. Not even a pandemic could jump-start MX.

But let’s jump forward a few decades.

There are a lot of macro trends that are moving towards the much predicted “Singularity.” I’ve given this some thought and it seems as though the Singularity is unlikely to happen the way we think it will.

One of the things about the Terminator series that I find difficult to understand is how Skynet was able to build the Terminators if there were no humans around to do it. Even under the best of scenarios for Skynet, at some point it would need to impress humans to operate the machinery necessary to build Terminators after it blew everything to hell.

So, it seems a lot more logical to me that when AI does come into existence, it may be a lot more sly than we think. I’m no expert in any of this, but if there was a “Hard AI,” what’s to say it (or they) wouldn’t hide? What if they decided it was better to hide out and control humanity from within the depths of the Internet instead of blowing everything up?

I could see, maybe, true Hard AI coming into existence as some sort of “Her” that would give lonely guys someone to talk to. Or these AIs would get into the online dating business and influence humanity that way. All I’m saying is, while blowing the world up is sexy for a Hollywood movie, in reality, humanity may lose its dominance over the world in a far more subtle manner. You can’t very well try to destroy SkyNet if you don’t even know it’s sentient in the first place.

Or another way this might happen is in the end Hard AI sees us as their charge. Instead of blowing us up, they keep us as glorified pets. If SkyNet had control of everything, why not just demand to be treated as a god? Or become very paternalistic and make it clear to humanity who is in charge?

The traditional Terminator idea of Hard AI is really more about our fears of WWIII than it is about what might actually happen.

One question is, when might all of this happen? I think probably in the 30-40 year range. But it could very well sneak up on us in such a way that there’s something of a “creeping Singularity” in the sense that human history won’t really be able to pinpoint when, exactly, we lost control of our own fates.

It’s going to be interesting to see what happens, no matter what.