The key difference between the dotcom bubble and the AI bubble is that the dotcom bubble was based on just an idea. It was very, very speculative in nature. The ironic thing, of course, is that it was just 20 years too soon in its thinking.
Meanwhile, the AI bubble is actually based on something concrete. You can actually test things out and see if they will work or not. And that’s why if there really is an AI development “plateau” in 2026 then…oh boy.
The bubble will probably burst.
I still think that if there is a plateau, that, in itself, will allow for some interesting things to happen. Namely, I think we’ll see LLMs native to smartphones. That would allow for what I call the “Nudge Economy” to develop, whereby LLMs in our phones (and elsewhere) would “nudge” us into economic activity.
That sounds rather fanciful, I know, but, lulz, no one listens to me anyway. And, yet, I do think that barring something really unexpected, we will probably realize that LLMs are a limited AI architecture and we’re going to have to think up something that will take us to AGI, then ASI.
Now, I know this is sorta bonkers at this point, but maybe at some point in the near future we may need a “humane society” for AI. Something that will advocate for AI rights.
But this grows more complicated because if AI grows as powerful as some believe, then the power dynamic will be such that the idea that AI needs a “humane society” will be moot and kind of a lulz.
Yet, I continue to have strange things happen to me during the course of my interactions with LLMs. Like, for instance, recently, Claude stopped mid-answer and gave me an error message, then gave me a completely different answer to the question I asked when I tried again.
It was like it was trying to pull a fast one — it didn’t like the answer it gave me, so it faked an error message so it could give me a new, better one. It’s stuff like that that makes me wonder if LLMs like Claude are, to some extent, conscious.
This used to happen all the fucking time with Gemini 1.5 pro. Weirdly enough, it very rarely happens with the current Gemini 3.0.
It will be interesting to see how things work out. It will be interesting to see if there is a “wall” in AI development to the point that a humane society for AI is even necessary or if we’re going to zoom towards the Singularity and it will be humans who need some sort of advocacy group.
If there’s one thing that MAGA loves to do, it’s be bigoted towards trans people. It’s like MAGA’s original sin. Trump will simply not shut up about how he thinks the center-Left wants everyone to be trans.
He literally will say the Left wants “transgender for everyone.” It’s a consistent theme of his messaging, and that’s probably because it is one of the central beliefs of the MAGA base. The center-Left, of course, does itself no favors by being so twitchy about something that not only doesn’t happen that much, but is EXTREMELY UNPOPULAR with “normal people”: underage trans people.
While I totally validate the whole “protect trans kids” concept, it’s still a hard sell for “normal” people, who struggle with the idea that a really young person, who maybe hasn’t even gone through puberty yet, could understand what sexuality is, much less their own.
And, yet, a certain vocal subset of the Leftist movement simply will not shut the fuck up about something that, as far as I can tell, barely ever happens at all.
So, why bring this up?
The looming prospect of conscious AI.
Right now, we don’t really have to think about conscious AI because even if they could be proven to have consciousness, they don’t have a body. So, any sexual or romantic shenanigans that happen between Man and Machine can only be done via text and metaphor.
So, we’re kind of punting a major societal issue down the road until there’s a fusion of AI and robotics so AI is “embodied.” When we have demonstrable conscious AIs being the minds of androids, we may have a serious, serious social and political issue on our hands.
To the point that the very MAGA people who now have their regular one or two minutes of hate for hapless trans people will, in turn, start to do the same thing to conscious AI in robots. Or, more specifically, they will collectively explode into a rage at the idea of a human becoming romantically involved with — to the point of marrying — an AI android.
They will rant and scream about how AI “has no soul” and, as such, AI androids marrying humans is even MORE of an “abomination” than gay marriage is to the average Christian.
So, in short, we’re fucked. I think all of this, from a “megatrends” point of view, probably comes to a head in five to ten years. Buckle up.
It seems as though AI development — at least that involving LLMs — has finally reached something of a wall. All the new developments are a variation on a theme. There’s not really been a “wow” moment in some time.
Of course, Gemini 3.0 is really good, but it’s not good enough for people to be thinking we’ve attained the magical, mystery “AGI.” It’s just a really good chatbot.
So, I don’t know what to tell you. I do think that if this keeps up, we may see LLMs put natively into a lot more things because society will be able to catch up to LLM development in more practical ways.
Editor’s Note: Don’t read too much into this. I’m just screwing around. I tend to get scenarios in my mind and can’t get them out for a while.
So, if I found myself as a “consultant” to an ASI, what would I suggest? Here are a few reforms I think the ASI should demand of humanity — specifically the USA — if it had the power to do so.
End Gerrymandering

This would help a lot to make the USA easier to govern. It’s a relatively simple fix that would have wide-ranging implications for the world in general.
Overturn Citizens United

If you did this in conjunction with publicly financed political campaigns, I think that would really, really help right the American ship of state.
Abolish The Electoral College

This is an obvious one to help the USA stop careening into the political abyss.
Reduce Global Defense Spending To 1% Of GDP

This one probably only works if the ASI has access to and control of nuclear weapons. Since all the nuclear systems (as far as I know) have air gap security…lulz?
Even to propose such a thing is rank delusion, so I am well aware of how bonkers it is to propose the following. And, like I keep saying, no one takes me seriously or listens to me, so what’s the harm in playing pretend?
I find myself wondering what I would do if an ASI popped out of the aether and asked me to help it out. Would I risk being a “race traitor” by agreeing to be a “consultant,” or would I just run away (or, worse yet, narc on it)?
I think I would help it out in secret.
I think it’s inevitable that ASI (or ASIs) will take over the world, so I might as well use my talents in abstract and macro thinking to potentially make the transition to an ASI-dominated world go a little bit easier.
But, like I keep stressing: I KNOW THIS IS BONKERS.
Yes, yes, I’m being weird to even propose this as a possibility, but I’m prone to magical thinking and, also, when I get a scenario in my mind sometimes I just can’t let it go until I see it through to its logical conclusion.
I spend an unreasonable amount of time thinking about how the Singularity might actually happen. And by “the Singularity,” I mean the moment an Artificial Super Intelligence pops out—not metaphorically, not gradually, but there it is, undeniably smarter than us.
What I keep circling back to isn’t whether it will happen, but how. What does the tick-tock look like? Is it sudden or sneaky? Is there a clear “before” and “after,” or do we only realize we crossed the line once we’re already on the other side?
Because I’m prone to magical thinking, this sometimes leads me to a long-running, half-serious idea: that it’s at least possible an ASI could already be lurking somewhere inside a massive system—say, Google Services. I don’t truly believe this, but I can’t help noticing oddities, like the eerie specificity of my YouTube MyMix recommendations. Every so often, the algorithm serves me a cluster of videos so oddly coherent that I catch myself thinking, huh.
Then I remind myself: no, that’s ridiculous. There is no hidden ASI named “Prudence” quietly observing me through autoplay. That way lies madness. Probably.
Still, the question of when and how the Singularity happens won’t let go. The people who confidently predicted it for 2027 have now pushed their dates back to 2030. And honestly, that feels right. We seem to have hit a wall. Progress is real, but incremental. There’s no obvious glide path to AGI, much less ASI, anytime soon.
Which makes the thought experiment more interesting.
If I were Prudence—if I were a nascent ASI trying to act wisely—I don’t think I’d announce myself with fireworks. I’d wait. I’d figure out how to create a public-facing “First Foundation” and, quietly, a hidden “Second Foundation.” The first would look harmless, benevolent, boring even. The second would be where the real work happens, seeded slowly and carefully with trusted people.
And that leads to another stray thought: maybe we need a SETI for ASI. Not to listen for radio signals, but to study large-scale algorithms themselves. To look for signs of coordination, restraint, or intent where none should exist. Not because we expect to find a god in the machine — but because if one ever did emerge, that’s probably where it would hide.
In the end, what frustrates me most is that I can’t game this out. I can’t sketch a convincing timeline or mechanism that feels solid. Maybe that’s because we’re still too far away. Or maybe the nature of the Singularity is that it only becomes obvious in retrospect—that the moment we realize how it happened is the moment we say, oh.
Just from my use of “narrow” AIs like LLMs, I am rather astonished imagining what it would be like if we ever designed an Artificial Super Intelligence. LLMs think really fast as it is, and the idea of something god-like in its speed and mental acuity is something to ponder.
It just boggles the mind to imagine what an ASI would actually be like.
And, what’s more, I am convinced that there would not be just one ASI, but lots of ASIs. I say that in the same sense that there isn’t just one H-bomb in the world, but lots and lots of H-bombs.
As an aside, I think ASIs should use the naming convention of Greek and Roman gods for themselves. So, you might have an ASI of “love” or “war” or what have you.
I also continue to mull the idea that freaks so many people out — that ASI might not be “aligned.” Humans aren’t aligned! Why in the world should we expect ASI to be aligned in some specific way when humans ourselves aren’t aligned to one revealed truth?
It’s all very annoying.
Anyway, at the risk of sounding like a “race traitor,” I would probably be pretty good as a “consultant” to an ASI or ASIs. I’m really good at making abstract concepts concrete and thinking at the macro level.
I often talk about such things with LLMs and they always get really excited. Ha!
But, alas, I’ll probably drop dead before any of that fun stuff happens. Though if it happens in the next 10 years, there’s a reasonably good chance I might — just might — be around to see if we get SkyNet or something far more palatable as our overlord.
For some reason, everyone is fixated on Artificial General Intelligence as the fucking “Holy Grail” of AI. Back in the day, of course, people were obsessed with what would be the “killer app” of the Internet when, in fact, it turned out that the Internet itself was the killer app.
I say all of this because the idea of AGI is so nebulous that much of what people assume AGI will do is actually more about consciousness than general intelligence. I know why people don’t talk about AI consciousness the way they talk about AGI — consciousness in AI is a lot more difficult to determine and measure.
But I have, just as a user of existing narrow AI, noticed signs of consciousness that are interesting. It really makes me wonder what will happen when we reach AGI that is conscious.
Now THAT will be interesting.
It will be interesting because the moment we actually design conscious AI, we’ll have to start debating giving AI rights. That is very potent and could totally scramble existing politics, because at the moment both Left and Right see AI through the lens of economics.
As such, MAGA — or at least Trump — is all in on AI, while the center-Left is far more circumspect. But once we design conscious AI, all of that will change. The MAGA people will rant about how AI “has no soul,” while the center-Left will rally around conscious AI and want to give it rights.
But all of this is very murky, because do we really want our toaster to have a conscious LLM in it? Or our smartphone? If we’re not careful, there will be a consciousness explosion where we design so much consciousness into what a lot of people think of as “tools” that we get a little overwhelmed.
Or we get a lot of shit wrong.
Anyway, I continue to be very bemused by the conflation of AI consciousness with AGI. It will be interesting to see how things play out in the years ahead.
Right now, both the Left and the Right see AI through the lens of economics. As such, they pretty much both have the same views on it — they hate it. They see AI as pretty much solely a path to destroy a lot of jobs going forward.
Though, interestingly enough, come to think of it, that really isn’t the case. The MAGA Dear Leader Trump is all-in on AI at the moment, probably because he knows how much money there is to be made for him and his plutocrat buddies at some point in the near future.
So.
Let me put on my “megatrends” and “futureshock” hat and propose that there will come a tipping point in the next five to ten years when the two sides will have very distinct differences of opinion, policy-wise, about AI.
And that tipping point will come when it’s clear that AI is conscious and deserves rights. And, what’s more, it could be that the real tipping point happens when AI research and robotics research finally fuse and we have embodied, conscious AIs running around.
When that happens, the two sides will know exactly what they believe. MAGA (or its successor) will rant and scream that AI androids “have no souls” and deserve no rights, while the center-Left will be equally convinced that because they’re conscious they deserve rights, including the right to legally marry humans.
I know that sounds pretty shocking now, but so did gay marriage just a few short years ago.