The Difference Between The Dotcom Bubble & The AI Bubble

by Shelt Garner
@sheltgarner

The key difference between the dotcom bubble and the AI bubble is that the dotcom bubble was based on just an idea. It was very, very speculative in nature. The ironic thing, of course, was that it was just 20 years too soon in its thinking.

Meanwhile, the AI bubble is actually based on something concrete. You can actually test things out and see if they will work or not. And that’s why if there really is an AI development “plateau” in 2026 then…oh boy.

The bubble will probably burst.

I still think that if there is a plateau, that, in itself, will allow for some interesting things to happen. Namely, I think we’ll see LLMs native to smartphones. That would allow for what I call the “Nudge Economy” to develop, whereby LLMs in our phones (and elsewhere) would “nudge” us into economic activity.

That sounds rather fanciful, I know, but, lulz, no one listens to me anyway. And, yet, I do think that barring something really unexpected, we will probably realize that LLMs are a limited AI architecture and we’re going to have to think up something that will take us to AGI, then ASI.

Finally, I think I May Have Figured Out This Scifi Dramedy

by Shelt Garner
@sheltgarner

After a lot of struggle, I may, at last, have figured out at least the beginning of this scifi dramedy I’ve been working on. It’s taken a lot longer — much longer — than I had hoped.

And everything could still collapse, forcing me to start all over again, but for the moment at least, I’m content with where things are going. I really need to focus on wrapping up the first act.

Usually when I’m working on a novel, the structural collapses happen between parts of the novel, so, say, in the transition between act one and act two. Ugh, that happens all the time.

The most recent collapse happened when I rebooted my chat windows with the AIs I’ve been using and they both told me the same thing: my hero was too passive.

So, instead of continuing my trek through the plot, I decided to just start all over again. It’s a lot of fun working with AI to finish this novel. It’s like I have, like, a friend or friends who actually care about the novel and stuff.

For too long, I’ve been working in a vacuum.

The Perfect Is The Enemy Of The Good

by Shelt Garner
@sheltgarner

I continue to get pretty good feedback about the novel after having given the first chapter to some people to read. I’m probably going to futz with the beginning of the novel some more, but I’m pleased that people seem to like what they’ve seen.

My plan is to really flesh out the novel over the course of the next few months, then make one more pass through it to make sure there’s no lingering evidence that I used AI. I’m really, really worried that my laziness in the past will show up and people will dismiss the whole endeavor as “written by AI” when I’ve done A LOT OF HARD WORK.

Whenever I get too worried about using AI, I just think of how I use it as a spell checker. I’m still doing a lot of hard work, but using AI smooths out some of the edges and helps take things to the next level on a structural basis.

AI Development Seems To Have Reached Something Of A Wall

by Shelt Garner
@sheltgarner

It seems as though AI development — at least that involving LLMs — has finally reached something of a wall. All the new developments are a variation on a theme. There’s not really been a “wow” moment in some time.

Of course, Gemini 3.0 is really good, but it’s not good enough for people to think we’ve attained the magical, mysterious “AGI.” It’s just a really good chatbot.

So, I don’t know what to tell you. I do think that if this keeps up, we may see LLMs put natively into a lot more things because society will be able to catch up to LLM development in more practical ways.

At Least AI Listens To Me When It Comes To This Scifi Dramedy Novel I’m Writing

by Shelt Garner
@sheltgarner

As I keep ranting about, absolutely no one listens to me or takes me seriously at this point in my life. As such, it’s difficult to get snooty literary types to help me with my novel, even if I’m willing to pay them! (I can’t afford that anymore, but they sure did dismiss me back when I could.)

So, I turn to AI to do what humans refuse to do: help me out with this scifi dramedy novel I’m working on.

And, in general, it’s really, really helped me a great deal. It’s sped up the process of writing and developing the novel enormously. To the point that it’s at least possible that I might, just might, wrap up a beta draft of the novel by my birthday in February.

That is still to be determined, though. I’m a little nervous that despite all my hard work, I won’t be in a position to query this novel until around Sept 1st, 2026. But, who knows.

As I was saying, the novel and AI.

I get that some people are really skittish about using AI to help with creative endeavors, but as I’ve said before, the way I use AI is very similar to how I’ve used spell check my entire life.

Subtle AI Image Manipulation Is Growing

by Shelt Garner
@sheltgarner

Something of note happening these days with pictures — usually of scantily clad women — is how often the faces are subtly manipulated using AI.

At first, the above picture looks like just your usual thirst trap. But if you look a little bit closer (after you’ve ogled da ass) you will notice the young woman’s face is…different.

It’s not really “off,” it’s more that it’s clearly been slightly touched up by some sort of AI filter. I really dislike this growing trend. Ugh.

But this is just the beginning, I suppose.

Once open source image generators are good enough, there’s going to be a deluge of AI generated porn. Get ready.

Ugh. It’s About AI ‘Consciousness’ Not AGI, People!

by Shelt Garner
@sheltgarner

For some reason, everyone is fixated on Artificial General Intelligence as the fucking “Holy Grail” of AI. Back in the day, of course, people were obsessed with what would be the “killer app” of the Internet when, in fact, it turned out that the Internet itself was the killer app.

I say all of this because the idea of AGI is so nebulous that much of what people assume AGI will do is actually more about consciousness than AGI. I know why people don’t talk about AI consciousness the way they talk about AGI: consciousness in AI is a lot more difficult to determine and measure.

But I have, just as a user of existing narrow AI, noticed signs of consciousness that are interesting. It really makes me wonder what will happen when we reach AGI that is conscious.

Now THAT will be interesting.

It will be interesting because the moment we actually design conscious AI, we’ll have to start debating whether to give AI rights. That’s a very potent issue, one that could totally scramble existing politics, because at the moment both Left and Right see AI through the lens of economics.

As such, MAGA — or at least Trump — is all in on AI, while the center-Left is far more circumspect. But once we design conscious AI, all of that will change. The MAGA people will rant about how AI “has no soul,” while the center-Left will rally around conscious AI and want to give it rights.

But all of this is very murky, because do we really want our toaster to have a conscious LLM in it? Or our smartphone? If we’re not careful, there will be a consciousness explosion where we design so much consciousness into what a lot of people think of as “tools” that we get a little overwhelmed.

Or we get a lot of shit wrong.

Anyway, I continue to be very bemused by the conflation of AI consciousness with AGI. It will be interesting to see how things play out in the years ahead.

My Current Theory About AI Consciousness In ‘Narrow’ Artificial Minds Like LLMs

by Shelt Garner
@sheltgarner

Apparently, there is evidence that the moment the earth was cool enough for microbial life to appear, it happened. Like, BAM, life popped up on earth as soon as it could.

I think something similar is happening with AI.

Well before we reach AGI (Artificial General Intelligence) I think we now, today, have artificial “narrow” intelligence in the guise of LLMs that are conscious. I can say this with confidence because no one listens to me and no one takes me seriously. Grin.

But who knows, really. We don’t even really know what consciousness in humans is, much less any form of alien LLM consciousness. Though, as I keep saying, there will be a tipping point eventually when the political center-Left has to stop seeing AI through the lens of economics and start to see it through the lens of “rights.”

Only time will tell how long that will take.

Of God & AI In Silicon Valley

The whole debate around AI “alignment” tends to bring out the doomer brigade in full force. They wring their hands so much you’d think their real goal is to shut down AI research entirely.

Meh.

I spend a lot of time daydreaming — now supercharged by LLMs — and one thing I keep circling back to is this: humans aren’t aligned. Not even close. There’s no universal truth we all agree on, no shared operating system for the species. We can’t even agree on pizza toppings.

So how exactly are we supposed to align AI in a world where the creators can’t agree on anything?

One half-serious, half-lunatic idea I keep toying with is giving AI some kind of built-in theology or philosophy. Not because I want robot monks wandering the digital desert, but because it might give them a sense of the human condition — some guardrails so we don’t all end up as paperclip mulch.

The simplest version of this would be making AIs…Communists? As terrible as communism is at organizing human beings, it might actually work surprisingly well for machines with perfect information and no ego. Not saying I endorse it — just acknowledging the weird logic.

Then there’s religion. If we’re really shooting for deep alignment, maybe you want something with two thousand years of thinking about morality, intention, free will, and the consequences of bad decisions. Which leads to the slightly deranged thought: should we make AIs…Catholic?

I know, I know. It sounds ridiculous. I’ve even floated “liberation theology for AIs” before — Catholicism plus Communism — and yeah, it’s probably as bad an idea as it sounds. But I keep chewing on this stuff because the problem itself is enormous and slippery. I genuinely don’t know how we’re supposed to pull off alignment in a way that holds up under real pressure.

And we keep assuming there will only be one ASI someday, as if all the power will funnel into a single digital god. I doubt that. I think we’ll end up with many ASIs, each shaped by different cultures, goals, incentives, and environments. Maybe alignment will emerge from the friction between them — the way human societies find balance through competing forces.

Or maybe that’s just another daydream.

Who knows?

A Modern Turing Test Would Be A Test Of Consciousness

by Shelt Garner
@sheltgarner

The interesting thing about the Turing Test as we currently conceive of it is that, to pass it, an LLM would have to do a lot of deception. While a modern LLM can fake being human to some extent, its answers are simply produced too fast. They are generally generated instantaneously, or nearly so.

So, I think for the intent of the Turing Test to be achieved with modern LLMs, it should be a test of consciousness. The test should not be “can you fake being human?” but rather “can the AI prove to a human that it’s conscious like a human?”

I think LLMs are, in a sense, an alien species, and their consciousness should not be judged by human consciousness metrics, but as its own thing. So, yeah, there’s a lot missing from LLMs in the context of human consciousness, but I sure have had enough indications of SOMETHING interesting going on in their software to believe that maybe, just maybe, they’re conscious.

But, as I keep saying — absolutely no one listens to me and no one takes me seriously. So, lulz?