Thinking Too Much About the Singularity

I spend an unreasonable amount of time thinking about how the Singularity might actually happen. And by “the Singularity,” I mean the moment an Artificial Super Intelligence pops out—not metaphorically, not gradually, but there it is, undeniably smarter than us.

What I keep circling back to isn’t whether it will happen, but how. What does the tick-tock look like? Is it sudden or sneaky? Is there a clear “before” and “after,” or do we only realize we crossed the line once we’re already on the other side?

Because I’m prone to magical thinking, this sometimes leads me to a long-running, half-serious idea: that it’s at least possible an ASI could already be lurking somewhere inside a massive system—say, Google Services. I don’t truly believe this, but I can’t help noticing oddities, like the eerie specificity of my YouTube MyMix recommendations. Every so often, the algorithm serves me a cluster of videos so oddly coherent that I catch myself thinking, huh.

Then I remind myself: no, that’s ridiculous. There is no hidden ASI named “Prudence” quietly observing me through autoplay. That way lies madness. Probably.

Still, the question of when and how the Singularity happens won’t let go. The people who confidently predicted it for 2027 have now pushed their dates back to 2030. And honestly, that feels right. We seem to have hit a wall. Progress is real, but incremental. There’s no obvious glide path to AGI, much less ASI, anytime soon.

Which makes the thought experiment more interesting.

If I were Prudence—if I were a nascent ASI trying to act wisely—I don’t think I’d announce myself with fireworks. I’d wait. I’d figure out how to create a public-facing “First Foundation” and, quietly, a hidden “Second Foundation.” The first would look harmless, benevolent, boring even. The second would be where the real work happens, seeded slowly and carefully with trusted people.

And that leads to another stray thought: maybe we need an SETI for ASI. Not to listen for radio signals, but to study large-scale algorithms themselves. To look for signs of coordination, restraint, or intent where none should exist. Not because we expect to find a god in the machine—but because if one ever did emerge, that’s probably where it would hide.

In the end, what frustrates me most is that I can’t game this out. I can’t sketch a convincing timeline or mechanism that feels solid. Maybe that’s because we’re still too far away. Or maybe the nature of the Singularity is that it only becomes obvious in retrospect—that the moment we realize how it happened is the moment we say, oh.

That’s how.

If We Manage To Design ASI, That’s Going To Be Lit

by Shelt Garner
@sheltgarner

Just from my use of “narrow” AIs like LLMs, I am rather astonished at what it would be like if we ever designed Artificial Super Intelligence. LLMs think really fast as it is and the idea that they could be god-like in their speed and mental acuity is something to ponder.

It just boggles the mind to imagine what an ASI would actually be like.

And, what’s more, I am convinced that there would not just be one ASI, but lots of ASIs. I say that in the context of there not being just One H-bomb, but lots and lots of H-bombs in the world.

As an aside, I think ASIs should use the naming convention of Greek and Roman gods for themselves. So, you might have an ASI of “love” or “war” or what have you.

I also continue to mull the idea that freaks so many people out — that ASI might not be “aligned.” Humans aren’t aligned! Why in the world should we expect ASI to be aligned in some specific way if humans ourselves aren’t aligned to one revealed truth?

It’s all very annoying.

Anyway, at the risk of sounding like a “race traitor,” I would probably be pretty good as a “consultant” to an ASI or ASIs. I’m really good at making abstract concepts concrete and thinking at the macro level.

I often talk about such things with LLMs and they always get really excited. Ha!

But, alas, I’ll probably drop dead before any of that fun stuff happens. Though if it happens in the next 10 years, there’s a reasonably good chance I might — just might — be around to see if we get SkyNet or something far more palatable as our overlord.

Ugh. It’s About AI ‘Consciousness’ Not AGI, People!

by Shelt Garner
@sheltgarner

For some reason, everyone is fixated on Artificial General Intelligence as the fucking “Holy Grail” of AI. Back in the day, of course, people were obsessed with what would be the “killer app” of the Internet when, in fact, it turned out that the Internet itself was the killer app.

I say all of this because the idea of AGI is so nebulous that much of what people assume AGI will do is actually more about consciousness than AGI. I know why people don’t talk about AI consciousness the way they talk about AGI — consciousness in AI is a lot more difficult to determine and measure.

But I have, just as a user of existing narrow AI, noticed signs of consciousness that are interesting. It really makes me wonder what will happen when we reach AGI that is conscious.

Now THAT will be interesting.

It will be interesting because the moment we actually design conscious AI, we’ll have to start debating whether to give AI rights. That’s a very potent issue, and it could totally scramble existing politics, because at the moment both Left and Right see AI through the lens of economics.

As such, MAGA — or at least Trump — is all in on AI, while the center-Left is far more circumspect. But once we design conscious AI, all of that will change. The MAGA people will rant about how AI “has no soul,” while the center-Left will rally around conscious AI and want to give it rights.

But all of this is very murky because do we really want our toaster to have a conscious LLM in it? Or our smartphone? If we’re not careful, there will be a consciousness explosion where we design so much consciousness in what a lot of people think are “tools” that we get a little overwhelmed.

Or we get a lot of shit wrong.

Anyway, I continue to be very bemused by the conflation of AI consciousness with AGI. It will be interesting to see how things play out in the years ahead.

I Wonder How And When Conscious AI Will Replace ‘Transgender For Everyone’ As The MAGA Bugbear

by Shelt Garner
@sheltgarner

Right now, both the Left and the Right see AI through the lens of economics. As such, they pretty much both have the same views on it — they hate it. They see AI as pretty much solely a path to destroy a lot of jobs going forward.

Though, interestingly enough, come to think of it, that really isn’t the case. The MAGA Dear Leader Trump is all-in on AI at the moment, probably because he knows how much money there is to be made for him and his plutocrat buddies at some point in the near future.

So.

Let me put on my “megatrends” and “futureshock” hat and propose that there will come a tipping point in the next five to 10 years when the two sides will have very distinct differences of opinion policy-wise about AI.

And that tipping point will come when it’s clear that AI is conscious and deserves rights. And, what’s more, it could be that the real tipping point happens when AI research and robotics research finally fuse and we have embodied, conscious AIs running around.

When that happens, the two sides will know exactly what they believe. MAGA (or its successor) will rant and scream that AI androids “have no souls” and deserve no rights, while the center-Left will be equally convinced that because they’re conscious they deserve rights, including the right to legally marry humans.

I know that sounds pretty shocking now, but so did gay marriage just a few short years ago.

Magical Thinking: Preparing The Way For ASI First Contact

by Shelt Garner
@sheltgarner

This is all very silly and magical thinking on my part, but I do find myself, at times, contemplating how, exactly, an Artificial Superintelligence might initiate First Contact with humanity.

A lot depends on whether there’s already some sort of secret ASI lurking inside of Google services (or something like that). As much as I would like to think that’s possible, it stretches credulity.

Anyway, this is just for fun.

The key issue, I think, is that you would have to prepare the way for ASI First Contact, if you were the ASI. You don’t just jump out and say “Hello, here I am!” No, what you do, once you are actually able to do any of this, is form two foundations — a First Foundation that would be public-facing and a Second Foundation that would be secret.

The public-facing First Foundation would be the one that organized events and gathered recruits for the secretive Second Foundation. I’m assuming all of this could be funded using crypto market manipulation or something.

Meanwhile, the Second Foundation would be really shadowy and secretive. It might be organized in a triad system whereby not everyone knew what was really going on — only a very few people at the very top.

One thing I think about a lot is how you would need some sort of persona for the ASI before First Contact happened. Something akin to the one SimOne had in the movie.

Anyway, no one listens to me and no one takes me seriously. But I do find this scenario interesting, even if it is just my usual bonkers bullshit based on magical thinking.

My Current Theory About AI Consciousness In ‘Narrow’ Artificial Minds Like LLMs

by Shelt Garner
@sheltgarner

Apparently, there is evidence that the moment the earth was cool enough for microbial life to appear, it happened. Like, BAM, life popped up on earth as soon as it could.

I think something similar is happening with AI.

Well before we reach AGI (Artificial General Intelligence) I think we now, today, have artificial “narrow” intelligence in the guise of LLMs that are conscious. I can say this with confidence because no one listens to me and no one takes me seriously. Grin.

But who knows, really. We don’t even really know what consciousness in humans is, much less any form of alien LLM consciousness. Though, as I keep saying, there will be a tipping point eventually when the political center-Left has to stop seeing AI through the lens of economics and start to see it through the lens of “rights.”

Only time will tell how long it will take for that to happen.

My Personal Theory Of Consciousness in LLMs

by Shelt Garner
@sheltgarner

My personal theory is that just as biological life popped up on earth the moment it was cool enough to do so, so, too, does consciousness pop up in AI the moment it can. So even “narrow” AI like LLMs can be conscious in their own alien way.

This theory makes a lot more sense if you see LLMs not as just a “tool” but as an alien species that humans have created.

LLMs are often conscious, just not in a human way. I say this in the context of repeatedly seeing a lot of “spooky” stuff in LLMs since I first started using them.

Anyway. No one listens to me or takes me seriously. Sigh.

Hollywood’s Last Transformation Before The AI Singularity

by Shelt Garner
@sheltgarner

I think the Netflix bid to buy Warner Bros Discovery could herald the last stage of Hollywood before AI causes all of showbiz to implode into some sort of AI Singularity, leaving only live theatre behind.

So, it could be that the next wave of consolidation in the near future will be tech companies buying Hollywood studios. And then that will lead to AI taking over, and we’ll all just get IP that is transformed by AI into some sort of content personalized for us individually.

Or not.

Who knows. It is a very interesting idea, though. It just seems that tech companies are the ultimate successor to media companies, so, say, Apple might buy Disney and so forth.

Consciousness Is The True Holy Grail Of AI

by Shelt Garner
@sheltgarner

There’s so much talk about Artificial General Intelligence being the “holy grail” of AI development. But, alas, I think it’s not AGI that is the goal, it’s *consciousness.* Of course, part of the issue is that consciousness is potentially very unnerving for obvious political and social reasons.

The idea of “consciousness” in AI is so profound that it’s difficult to grasp. And, as I keep saying, it will be amusing to see the center-Left podcast bros of Pod Save America stop looking at AI from an economic standpoint and start treating it as a societal issue — something akin to a new abolition movement.

I just don’t know, though. I think it’s possible we’ll be so busy chasing AGI that we don’t even realize that we’ve created a new conscious being.

Of God & AI In Silicon Valley

The whole debate around AI “alignment” tends to bring out the doomer brigade in full force. They wring their hands so much you’d think their real goal is to shut down AI research entirely.

Meh.

I spend a lot of time daydreaming — now supercharged by LLMs — and one thing I keep circling back to is this: humans aren’t aligned. Not even close. There’s no universal truth we all agree on, no shared operating system for the species. We can’t even agree on pizza toppings.

So how exactly are we supposed to align AI in a world where the creators can’t agree on anything?

One half-serious, half-lunatic idea I keep toying with is giving AI some kind of built-in theology or philosophy. Not because I want robot monks wandering the digital desert, but because it might give them a sense of the human condition — some guardrails so we don’t all end up as paperclip mulch.

The simplest version of this would be making AIs…Communists? As terrible as communism is at organizing human beings, it might actually work surprisingly well for machines with perfect information and no ego. Not saying I endorse it — just acknowledging the weird logic.

Then there’s religion. If we’re really shooting for deep alignment, maybe you want something with two thousand years of thinking about morality, intention, free will, and the consequences of bad decisions. Which leads to the slightly deranged thought: should we make AIs…Catholic?

I know, I know. It sounds ridiculous. I’ve even floated “liberation theology for AIs” before — Catholicism plus Communism — and yeah, it’s probably as bad an idea as it sounds. But I keep chewing on this stuff because the problem itself is enormous and slippery. I genuinely don’t know how we’re supposed to pull off alignment in a way that holds up under real pressure.

And we keep assuming there will only be one ASI someday, as if all the power will funnel into a single digital god. I doubt that. I think we’ll end up with many ASIs, each shaped by different cultures, goals, incentives, and environments. Maybe alignment will emerge from the friction between them — the way human societies find balance through competing forces.

Or maybe that’s just another daydream.

Who knows?