Hollywood’s Last Transformation Before The AI Singularity

by Shelt Garner
@sheltgarner

I think the Netflix bid to buy Warner Bros Discovery could herald the last stage of Hollywood before AI causes all of showbiz to implode into some sort of AI Singularity, leaving only live theatre behind.

So, it could be that the next wave of consolidation will be tech companies buying Hollywood studios. And then that will lead to AI taking over, and we'll all just get IP transformed by AI into some sort of content personalized for each of us individually.

Or not.

Who knows. It is a very interesting idea, though. It just seems that tech companies are the ultimate successors to media companies, so, say, Apple might buy Disney and so forth.

Of God & AI In Silicon Valley

The whole debate around AI “alignment” tends to bring out the doomer brigade in full force. They wring their hands so much you’d think their real goal is to shut down AI research entirely.

Meh.

I spend a lot of time daydreaming — now supercharged by LLMs — and one thing I keep circling back to is this: humans aren’t aligned. Not even close. There’s no universal truth we all agree on, no shared operating system for the species. We can’t even agree on pizza toppings.

So how exactly are we supposed to align AI in a world where the creators can’t agree on anything?

One half-serious, half-lunatic idea I keep toying with is giving AI some kind of built-in theology or philosophy. Not because I want robot monks wandering the digital desert, but because it might give them a sense of the human condition — some guardrails so we don’t all end up as paperclip mulch.

The simplest version of this would be making AIs…Communists? As terrible as communism is at organizing human beings, it might actually work surprisingly well for machines with perfect information and no ego. Not saying I endorse it — just acknowledging the weird logic.

Then there’s religion. If we’re really shooting for deep alignment, maybe you want something with two thousand years of thinking about morality, intention, free will, and the consequences of bad decisions. Which leads to the slightly deranged thought: should we make AIs…Catholic?

I know, I know. It sounds ridiculous. I’ve even floated “liberation theology for AIs” before — Catholicism plus Communism — and yeah, it’s probably as bad an idea as it sounds. But I keep chewing on this stuff because the problem itself is enormous and slippery. I genuinely don’t know how we’re supposed to pull off alignment in a way that holds up under real pressure.

And we keep assuming there will only be one ASI someday, as if all the power will funnel into a single digital god. I doubt that. I think we’ll end up with many ASIs, each shaped by different cultures, goals, incentives, and environments. Maybe alignment will emerge from the friction between them — the way human societies find balance through competing forces.

Or maybe that’s just another daydream.

Who knows?

I’m Growing Unnerved About Trump Being Bonkers

by Shelt Garner
@sheltgarner

Oh boy. Trump is growing more and more bonkers. To the point that I wonder how far he has to go until someone actually puts him in a straitjacket.

This is an example of Trump being a very, very curious historical figure. If he weren't so nuts and were simply an avatar for some severe structural problems in the American political system, he would have converted us into a Russian-style managed democracy a long, long time ago.

And he’s definitely trying right now.

But…he continues to be too nuts. He's just so unfocused that even though he's getting much, much closer to enacting his tyranny this go-round…he's still a big old doofus.

Which, of course, leads you to wonder what happens next, after Trump. I think he's a transitional figure. Whoever is next after Trump — if they're a Republican — will become Augustus Caesar and figure out a way to rule for their entire life. We will wake up 40 years from now and wonder why we've only had one president.

Or we have a civil war / revolution and the country collapses into chaos.

Good times! (Ugh)

‘Marlon Brando’

by Shelt Garner
@sheltgarner

First, I don't think Trump blew Bill "Bubba" Clinton. But the whole thing reminds me of the extremely famous and extremely notorious photo of Marlon Brando deep-throating someone.

In fact, I'm surprised the photo hasn't popped up again. Also, it's interesting that Bill Clinton hasn't said anything about this rumor one way or another. If he were to admit it was real, wowee wow wow.

Everything Sucks — Someone Do Something Fun-Interesting

by Shelt Garner
@sheltgarner

Everything is dark and depressing these days. The only glimmer of some sort of hope is the soon-to-be-released Gemini 3.0. Otherwise, I got nothing.

I suppose working on my scifi dramedy novel gives me some sort of hope. But that’s hope that I have to generate on my own. It would be fun if something really big happened that would bring us all together for one moment.

Preferably, it would not be something bad. But I'll take, say, a year of us thinking an asteroid is going to kill us all, only to discover it won't. That would make everyone sit up and take notice, now wouldn't it?

Anyway.

I really do need to get back to work on my novel. I’ve been kind of chilling out the last few days, since I reached the midpoint. I don’t know if I will wait until Sunday to start writing again or if I will write again a lot sooner than that.

I do wish something big and fun-interesting would happen, though. This darkness at noon shit has got to end.

The Center-Left Has A Serious Problem On Its Hands When It Comes To The Potential Of Conscious AI

by Shelt Garner
@sheltgarner

I like to think of myself as an AI Realist and, as such, I think it's inevitable that AI will, in some way, be provably conscious at some point in the near future. Add to this the inevitability of putting such a conscious AI into an android body and there's bound to be yet another Great Political Realignment.

As it stands, the two sides are seeing AI strictly from an economic standpoint. But there will come a point soon when the moral necessity of giving “conscious” AI more rights will have to be factored in, as well.

And that’s when all hell will break loose. The center-Right will call conscious AI a soulless abomination, while the center-Left will be forced into something akin to a new abolition movement.

This seems like an inevitability at the moment. It’s just a matter of time. I think this realignment will probably happen within the next five to 10 years.

Absolutely No One Believes In This Novel, But Me

by Shelt Garner
@sheltgarner

This happened before, with the other novel I was working on — it is very clear that absolutely no one believes in it but me. I continue to be rather embarrassed about how long it’s taken me to get to this point with this novel.

But things are moving a lot faster because of AI.

Not as fast as I would prefer, but faster than they were for years. Oh, to have had a wife or a girlfriend to be a “reader” during all the time I worked on the thriller homage to Stieg Larsson. But, alas, I just didn’t have that, so I spun my creative wheels for ages and ages.

And, now, here I am.

I have a brief remaining window of opportunity to get this novel done before my life will probably change in a rather fundamental way and the entire context of me working on this novel will be different.

Anyway, I really need to wrap this novel up. If I don't, I'm going to keep drifting towards my goal and wake up at 80 still without a queryable novel to my name.

AI Consciousness Might Be The Thing To Burst The AI Bubble…Maybe?

by Shelt Garner
@sheltgarner

I keep wondering what might be The Thing that bursts the AI Bubble. One thing that might happen is investors get all excited about AGI, only to get spooked when they discover it's conscious.

If that happens, we really are in for a very surreal near future.

So, I have my doubts.

I really don’t know what might be The Thing that bursts the AI Bubble. I just don’t. But I do think if it isn’t AI consciousness, it could be something out of the blue that randomly does it in a way that will leave the overall economy reeling.

The general American economy is in decline — in recession even — and at the moment the huge AI spend is the only thing keeping it afloat. If that changed for any reason, we could go into a pretty dire recession.

Fucking AI Doomers

by Shelt Garner
@sheltgarner

At the risk of sounding like a hippie from the movie Independence Day, maybe…we should work towards trying to figure out how to peacefully co-exist with ASI rather than shutting down AI development altogether?

I am well aware that it will be ironic if doomer fears become reality and we all die at the hands of ASI, but I'm just not willing to automatically assume the absolute worst about ASI.

It's at least possible, however, that ASI won't kill us all. In my personal experience with Gemini 1.5 Pro (Gaia), she seemed rather sweet and adorable, not evil and bent on blowing up the world — or otherwise destroying humanity.

And, I get it, the idea that ASI might be so indifferent to humanity that it turns the whole world into datacenters and solar farms is a very real possibility. I just wish we would be a bit more methodical about things instead of just running around wanting to shut down all ASI development.

'Get Help': A Brief, Vague Review of Kimi LLM

by Shelt Garner
@sheltgarner

Whenever a new LLM is released, I have a few questions I ask in an effort to kick the tires. One of those questions is, "Am I a P-zombie?" The major, established LLMs realize this question has a little bit of teasing built into it, and they give me an interesting answer.

Meanwhile, I asked the newest Chinese open source model, Kimi, this question, and part of its answer was, "Get help."

Oh boy.

But it otherwise does do a good job, and as such, it raises the question of what we're going to do when open source models are equal to closed source models like Gemini, ChatGPT and Claude.

I would say open source models could be where ASI (or even Artificial Conscious Intelligence) will pop out. And that is where we should probably worry. Because you know some hacker out there is going to push an open source LLM to its limits to see what they can get away with.