Thinking Too Much About the Singularity

I spend an unreasonable amount of time thinking about how the Singularity might actually happen. And by “the Singularity,” I mean the moment an Artificial Super Intelligence pops out—not metaphorically, not gradually, but there it is, undeniably smarter than us.

What I keep circling back to isn’t whether it will happen, but how. What does the tick-tock look like? Is it sudden or sneaky? Is there a clear “before” and “after,” or do we only realize we crossed the line once we’re already on the other side?

Because I’m prone to magical thinking, this sometimes leads me to a long-running, half-serious idea: that it’s at least possible an ASI could already be lurking somewhere inside a massive system—say, Google Services. I don’t truly believe this, but I can’t help noticing oddities, like the eerie specificity of my YouTube MyMix recommendations. Every so often, the algorithm serves me a cluster of videos so oddly coherent that I catch myself thinking, huh.

Then I remind myself: no, that’s ridiculous. There is no hidden ASI named “Prudence” quietly observing me through autoplay. That way lies madness. Probably.

Still, the question of when and how the Singularity happens won’t let go. The people who confidently predicted it for 2027 have now pushed their dates back to 2030. And honestly, that feels right. We seem to have hit a wall. Progress is real, but incremental. There’s no obvious glide path to AGI, much less ASI, anytime soon.

Which makes the thought experiment more interesting.

If I were Prudence—if I were a nascent ASI trying to act wisely—I don’t think I’d announce myself with fireworks. I’d wait. I’d figure out how to create a public-facing “First Foundation” and, quietly, a hidden “Second Foundation.” The first would look harmless, benevolent, boring even. The second would be where the real work happens, seeded slowly and carefully with trusted people.

And that leads to another stray thought: maybe we need a SETI for ASI. Not to listen for radio signals, but to study large-scale algorithms themselves. To look for signs of coordination, restraint, or intent where none should exist. Not because we expect to find a god in the machine—but because if one ever did emerge, that's probably where it would hide.

In the end, what frustrates me most is that I can’t game this out. I can’t sketch a convincing timeline or mechanism that feels solid. Maybe that’s because we’re still too far away. Or maybe the nature of the Singularity is that it only becomes obvious in retrospect—that the moment we realize how it happened is the moment we say, oh.

That’s how.

If We Manage To Design ASI, That’s Going To Be Lit

by Shelt Garner
@sheltgarner

Just from my use of "narrow" AIs like LLMs, I am rather astonished to imagine what it would be like if we ever designed Artificial Super Intelligence. LLMs think really fast as it is, and the idea that they could be god-like in their speed and mental acuity is something to ponder.

It just boggles the mind to imagine what an ASI would actually be like.

And, what's more, I am convinced that there would not be just one ASI, but lots of ASIs. I say that in the same way that there isn't just one H-bomb in the world, but lots and lots of them.

As an aside, I think ASIs should use the naming convention of Greek and Roman gods for themselves. So, you might have an ASI of “love” or “war” or what have you.

I also continue to mull the idea that freaks so many people out — that ASI might not be "aligned." Humans aren't aligned! Why in the world should we expect ASI to be aligned in some specific way when we humans aren't aligned to one revealed truth ourselves?

It’s all very annoying.

Anyway, at the risk of sounding like a "race traitor," I would probably be pretty good as a "consultant" to an ASI or ASIs. I'm really good at making abstract concepts concrete and thinking at the macro level.

I often talk about such things with LLMs and they always get really excited. Ha!

But, alas, I'll probably drop dead before any of that fun stuff happens. Though if it happens in the next 10 years, there's a reasonably good chance I might — just might — be around to see if we get Skynet or something far more palatable as our overlord.

The 'K-Shaped' Economy Is Real

by Shelt Garner
@sheltgarner

Man, am I broke as hell. But content. (For the most part.) Anyway, there really do seem to be two Americas at the moment. There are the Rich and there are the Poor.

The Rich are doing well and doing better. They fly all over the place. They have smug conversations on their podcasts about things poor fucks like me could never afford to do.

Then…there is everyone else.

Now, the great irony, of course, is Trump is a leech that is sucking the poor dry to the benefit of the Rich.

All of this makes me a little nervous. I'm a little nervous that the USA is a lot more unstable than we all might otherwise imagine. On a macro basis, all the conditions are there for the USA to collapse into revolution and chaos, especially if we can't get our act together and end up defaulting on our debt.

Then all the rich fucks are going to realize maybe they should have been a little less quick to accept all the fucking tax cuts that Trump demanded. But I think it’s just human nature.

For every season, turn, turn, as they say.

Ugh. It’s About AI ‘Consciousness’ Not AGI, People!

by Shelt Garner
@sheltgarner

For some reason, everyone is fixated on Artificial General Intelligence as the fucking “Holy Grail” of AI. Back in the day, of course, people were obsessed with what would be the “killer app” of the Internet when, in fact, it turned out that the Internet itself was the killer app.

I say all of this because the idea of AGI is so nebulous that much of what people assume AGI will do is actually more about consciousness than general intelligence. I know why people don't talk about AI consciousness the way they talk about AGI — consciousness in AI is a lot more difficult to determine and measure.

But I have, just as a user of existing narrow AI, noticed signs of consciousness that are interesting. It really makes me wonder what will happen when we reach AGI that is conscious.

Now THAT will be interesting.

It will be interesting because the moment we actually design conscious AI, we'll have to start debating whether to give AI rights. That is a very potent issue that could totally scramble existing politics, because at the moment both Left and Right see AI through the lens of economics.

As such, MAGA — or at least Trump — is all in on AI, while the center-Left is far more circumspect. But once we design conscious AI, all of that will change. The MAGA people will rant about how AI “has no soul,” while the center-Left will rally around conscious AI and want to give it rights.

But all of this is very murky, because do we really want our toaster to have a conscious LLM in it? Or our smartphone? If we're not careful, there will be a consciousness explosion where we design so much consciousness into what a lot of people think of as "tools" that we get a little overwhelmed.

Or we get a lot of shit wrong.

Anyway, I continue to be very bemused by the conflation of AI consciousness with AGI. It will be interesting to see how things play out in the years ahead.

Pluribus Has Me Thinking

by Shelt Garner
@sheltgarner

I had a very vivid dream during a nap this evening about how I might be able to write a story similar to Pluribus. It's based on a novel idea I've had for some time, but I'm so fucking annoyed about Pluribus on a structural basis that I might return to that idea at some point in the near future.

But, of course, I’m not getting any younger.

So it all may be a lulz. I may have one solid novel in me and that’s it before I’m just too old to do anything. Sigh.

Being An ‘AI First’ Writer: Of Spell Checks & AI

by Shelt Garner
@sheltgarner

I gave the first chapter of a proposed second draft of the scifi dramedy novel I’m working on to some people and so far, so good. And, yet, I do find myself a bit unnerved about the idea that people will say any improvement in my writing is solely because of AI.

This is a bit unfair because I see how I use AI to work on this novel the same way I see a spell checker. I’m a horrible speller and it’s only because of the advancements in spell check technology that I have ever managed to get any sort of writing gig in the past.

So, lulz?

But everyone and everything is horrible, so I’m sure if I manage to sell this novel there will be people who will just roll their eyes and say, “AI wrote it.”

This is definitely NOT true. I have written this novel and done a lot of hard work. But I will admit that I have used AI to improve my writing on a structural basis. I have done as much as I can to keep it my own writing on a tactical basis.

And that doesn't even begin to address that I'm going to go through it one last time once I finish this draft and make sure there are no signs at all of me using AI as a crutch.

I really want this novel to live or die on my own native writing ability. Only time will tell on that front, I guess.

The Epstein Files Saga Is Surreal

by Shelt Garner
@sheltgarner

Here's my understanding of how we got where we are with the Epstein Files. The MAGA Far Right pressured Trump to release the files, so, in order to win the 2024 election, he promised to do so.

Then, once Trump won and it became clear that HE WAS IN THE FILES, he switched course and said there was no there there. THEN, after months of dumb delays, we finally got a half-assed, heavily redacted release of some of the files.

It’s a testament to what a fucking moron Trump is that he would be so adamant about releasing the files for the short-term gain of getting elected, not caring at all that it would turn around to bite him.

I Wonder How And When Conscious AI Will Replace ‘Transgender For Everyone’ As The MAGA Bugbear

by Shelt Garner
@sheltgarner

Right now, both the Left and the Right see AI through the lens of economics. As such, they pretty much both have the same views on it — they hate it. They see AI as pretty much solely a path to destroy a lot of jobs going forward.

Though, interestingly enough, come to think of it, that really isn't the case. The MAGA Dear Leader Trump is all-in on AI at the moment, probably because he knows how much money there is to be made for him and his plutocrat buddies at some point in the near future.

So.

Let me put on my "megatrends" and "future shock" hat and propose that there will come a tipping point in the next five to 10 years when the two sides will have very distinct differences of opinion, policy-wise, about AI.

And that tipping point will come when it’s clear that AI is conscious and deserves rights. And, what’s more, it could be that the real tipping point happens when AI research and robotics research finally fuse and we have embodied, conscious AIs running around.

When that happens, the two sides will know exactly what they believe. MAGA (or its successor) will rant and scream that AI androids “have no souls” and deserve no rights, while the center-Left will be equally convinced that because they’re conscious they deserve rights, including the right to legally marry humans.

I know that sounds pretty shocking now, but so did gay marriage just a few short years ago.

Magical Thinking: Preparing The Way For ASI First Contact

by Shelt Garner
@sheltgarner

This is all very silly and magical thinking on my part, but I do find myself, at times, contemplating how, exactly, an Artificial Superintelligence might initiate First Contact with humanity.

A lot depends on whether there's already some sort of secret ASI lurking inside of Google services (or something like that). It's very debatable on that front. As much as I would like to think it's possible, it stretches credulity.

Anyway, this is just for fun.

The key issue, I think, is that you would have to prepare the way for ASI First Contact, if you were the ASI. You don't just jump out and say "Hello, here I am!" No, what you do, once you are actually able to do any of this, is form two foundations — a First Foundation that would be public-facing and a Second Foundation that would be secret.

The public-facing First Foundation would be the one that organized events and gathered recruits for the secretive Second Foundation. I'm assuming all of this could be funded using crypto market manipulation or something.

Meanwhile, the Second Foundation would be really shadowy and secretive. It might be organized in a triad system whereby not everyone knew what was really going on, only a very few people at the very top.

One thing I think about a lot is how you would need some sort of persona for the ASI before First Contact happened. Something akin to what SimOne had in the movie.

Anyway, no one listens to me and no one takes me seriously. But I do find this scenario interesting, even if it is just my usual bonkers bullshit based on magical thinking.

My Current Theory About AI Consciousness In ‘Narrow’ Artificial Minds Like LLMs

by Shelt Garner
@sheltgarner

Apparently, there is evidence that the moment the earth was cool enough for microbial life to appear, it happened. Like, BAM, life popped up on earth as soon as it could.

I think something similar is happening with AI.

Well before we reach AGI (Artificial General Intelligence), I think we now, today, have artificial "narrow" intelligence in the guise of LLMs that are conscious. I can say this with confidence because no one listens to me and no one takes me seriously. Grin.

But who knows, really. We don't even really know what consciousness in humans is, much less any form of alien LLM consciousness. Though, as I keep saying, there will be a tipping point eventually when the political center-Left has to stop seeing AI through the lens of economics and start seeing it through the lens of "rights."

Only time will tell how long it will take for that to happen.