The Undiscovered Country: Pondering The Potential UX / UI Of Knowledge Navigators

by Shelt Garner
@sheltgarner

Unless the Singularity arrives and we have ASI gods running around, the question of what the UX / UI of Knowledge Navigators will look like is very intriguing. I still don’t know how it would shake out, because it would happen in the context of the Web imploding into an API Singularity.

It just seems as though we’ll all have a central gatekeeper through which the entire world’s media gets funneled.

Right now, I think what will happen is that we’ll have a central “anchor” Knowledge Navigator and then value-added correspondents, each focused on a specific topic.

There is a meta element to all of this: even though your central Knowledge Navigator could handle everything itself, people are used to the concept of an anchor who hands things off to a specialist correspondent, thanks to the evening network news.
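
To make the anchor-and-correspondent idea a bit more concrete, here is a minimal, purely hypothetical sketch in Python of how a central agent might hand a question off to a topic specialist. None of this reflects any real product: the correspondent topics, the canned answers, and the keyword routing are all invented for illustration, and a real Knowledge Navigator would presumably use a model-driven router rather than keyword matching.

```python
# A toy sketch of the "anchor plus correspondents" pattern described above.
# Everything here is hypothetical: the topics, keywords, and canned answers
# are stand-ins for whatever a real Knowledge Navigator would actually use.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Correspondent:
    """A specialist agent focused on one topic."""
    topic: str
    keywords: tuple
    answer: Callable  # stand-in for a topic-tuned model call


def finance_desk(query):
    return f"[finance correspondent] The market angle on: {query}"


def science_desk(query):
    return f"[science correspondent] The research angle on: {query}"


CORRESPONDENTS = [
    Correspondent("finance", ("market", "earnings", "stocks"), finance_desk),
    Correspondent("science", ("study", "research", "physics"), science_desk),
]


def anchor(query):
    """The central 'anchor' agent: answer directly or hand off to a specialist."""
    lowered = query.lower()
    for c in CORRESPONDENTS:
        if any(keyword in lowered for keyword in c.keywords):
            # The evening-news handoff: the anchor introduces the specialist.
            return f"[anchor] Over to our {c.topic} desk.\n{c.answer(query)}"
    return f"[anchor] Here's a general rundown on: {query}"


if __name__ == "__main__":
    print(anchor("What did the new physics study find?"))
    print(anchor("How did the market react to the earnings report?"))
```

The point of the sketch is just the handoff itself: the anchor stays the single point of contact, while the specialists do the narrow work behind the scenes.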

I say this in the context that all media — ALL MEDIA — will implode into a Singularity. So, your Knowledge Navigator will whip up a movie with you as the star. And it’s the specifics of how that would be implemented that are fascinating to me.

Like, who would actually produce the content that these Knowledge Navigators will give to you? I suppose if AI gets good enough, then even the gathering of news will be co-opted by the machines as well.

I mean, instead of being a movie star, what if the S1m0ne character were used to ask people questions via a screen? And, eventually, you might have AI news androids that would be able to be physically present in a news scrum on the steps of the Capitol.

Anything is possible, it seems.

Waiting For Google To Dramatically Reimagine Chrome

by Shelt Garner
@sheltgarner

It seems as though Google is shoehorning AI into every possible element of its product line, so it would make a lot of sense if it did the same to Chrome. There are already a few AI-first browsers out there.

But if Google dramatically reimagined Chrome to be AI-centered, that would be A Big Deal, given the popularity of the software. I am kind of excited to see how Google would pull it off.

Google has already put the Gemini brand into the Chrome interface. Talk about a way to improve mindshare! Anyway, the way things are going, I would expect something big to happen to Chrome by the end of the year, AI-wise.

The Looming Media Singularity

by Shelt Garner
@sheltgarner

I’ve written about this before, but I’ll do it again. The next few years will be something of a fork in the road for the media industry. Either we have reached something of an LLM (AI) plateau or we haven’t, and the Singularity will happen no later than, say, 2033.

Right now, I just don’t know which path we will go down.

It really could go either way.

It could be that we’ve reached a plateau with LLMs and that’s that. That would give the tech giants the chance to catch up, to the point that instead of there being any sort of human-centric Web, it will all just be one big API call. Humans will interact with each other exclusively through their AI agents.

If that happened, then I could see the movie Her being our literal future. To put a finer point on it, you will have a main agent that serves as your “anchor,” then other, value-added agents that give you specialized information.

But wait, there’s more.

It could be that, instead of there being a plateau, we will zoom directly into the Singularity and, as such, have a whole different set of problems. Instead of a bunch of agents that will “nudge” us to do things, we will have to deal with a bunch of god-like ASIs that will be literal aliens amongst us.

Like I said, I honestly don’t know which path we will go down. At the moment, in early 2026, it could be either. You could make a case for either path, at least.

It will be interesting to see what happens, regardless.

Seems Like Google Might Be Playing With Fire By Shoehorning AI Into Gmail

by Shelt Garner
@sheltgarner

While I’m surprisingly excited to experience Google’s Gemini LLM in my Gmail account, I’m also a little…leery. I think it’s because I think in terms of stories all the time, and the idea of an ASI popping out of Google services and having access to the world’s billions of Gmail accounts seems…preventable?

But here we are.

I often wonder what might happen if an ASI popped out of Google services but didn’t have access to any nukes because of air-gap security. Maybe instead of nukes, it would just use its access to everyone’s Gmail to blackmail them?

Anyway, it’s an idea.

I still think we need to mull the idea of an ASI popping out at some point, and if it happened, I think it would come from Google services. That would make the most sense.

The Difference Between The Dotcom Bubble & The AI Bubble

by Shelt Garner
@sheltgarner

The key difference between the dotcom bubble and the AI bubble is that the dotcom bubble was based on just an idea. It was very, very speculative in nature. The ironic thing, of course, was that it was just 20 years too early in its thinking.

Meanwhile, the AI bubble is actually based on something concrete. You can actually test things out and see whether they work or not. And that’s why, if there really is an AI development “plateau” in 2026, then…oh boy.

The bubble will probably burst.

I still think that a plateau would, in itself, allow for some interesting things to happen. Namely, I think we’ll see LLMs native to smartphones. That would allow what I call the “Nudge Economy” to develop, whereby LLMs in our phones (and elsewhere) would “nudge” us into economic activity.

That sounds rather fanciful, I know, but, lulz, no one listens to me anyway. And yet I do think that, barring something really unexpected, we will probably realize that LLMs are a limited AI architecture and that we’re going to have to think up something else to take us to AGI, then ASI.

Contemplating A ‘Humane Society’ For AI

by Shelt Garner
@sheltgarner

Now, I know this is sort of bonkers at this point, but at some point in the near future we may need a “humane society” for AI: something that will advocate for AI rights.

But this gets more complicated, because if AI grows as powerful as some believe, the power dynamic will be such that the idea that AI needs a “humane society” will be moot, and kind of a lulz.

Yet I continue to have strange things happen during my interactions with LLMs. For instance, recently Claude stopped mid-answer, gave me an error message, and then gave me a completely different answer to the same question when I tried again.

It was like it was trying to pull a fast one — it didn’t like the answer it gave me, so it faked an error message so it could give me a new, better one. It’s stuff like that that makes me wonder if LLMs like Claude are, to some extent, conscious.

This used to happen all the fucking time with Gemini 1.5 Pro. Weirdly enough, it very rarely happens with the current Gemini 3.0.

It will be interesting to see how things work out: whether there’s a “wall” in AI development such that a humane society for AI is even necessary, or whether we’re going to zoom towards the Singularity and it will be humans who need some sort of advocacy group.

‘Transgender For Everyone’ In The Context Of Looming Potentially ‘Conscious’ AI Systems

by Shelt Garner
@sheltgarner

If there’s one thing that MAGA loves to do, it’s be bigoted towards trans people. It’s like MAGA’s original sin. Trump will simply not shut up about how he thinks the center-Left wants everyone to be trans.

He will literally say the Left wants “transgender for everyone.” It’s a consistent theme of his messaging, and that’s probably because it is one of the central beliefs of the MAGA base. The center-Left, of course, does itself no favors by being so twitchy about something that not only doesn’t even happen that much, but is EXTREMELY UNPOPULAR with “normal people”: the issue of underage trans people.

While I totally validate the whole “protect trans kids” concept, that doesn’t mean it’s not a hard sell for “normal” people, who struggle with the idea that a really young person, one who maybe hasn’t even gone through puberty yet, could understand what sexuality is at all, much less their own.

And, yet, a certain vocal subset of the Leftist movement simply will not shut the fuck up about something that, as far as I can tell, barely ever happens at all.

So, why bring this up?

The looming prospect of conscious AI.

Right now, we don’t really have to think about conscious AI because, even if these systems could be proven to have consciousness, they don’t have bodies. So any sexual or romantic shenanigans between Man and Machine can only happen via text and metaphor.

So, we’re kind of punting a major societal issue down the road until there’s a fusion of AI and robotics and AI is “embodied.” When we have demonstrably conscious AIs serving as the minds of androids, we may have a serious, serious social and political issue on our hands.

To the point that the very MAGA people who now have their regular minute or two of hate for hapless trans people will, in turn, start to do the same thing to conscious AI in robots. Or, more specifically, they will collectively explode into a rage at the idea of a human becoming romantically involved with — to the point of marrying — an AI android.

They will rant and scream about how AI “has no soul” and how, as such, an AI android marrying a human is even MORE of an “abomination” than gay marriage is to the average Christian.

So, in short, we’re fucked. I think all of this, from a “megatrends” point of view, probably comes to a head in five to ten years. Buckle up.

AI Development Seems To Have Reached Something Of A Wall

by Shelt Garner
@sheltgarner

It seems as though AI development — at least that involving LLMs — has finally reached something of a wall. All the new developments are variations on a theme. There hasn’t really been a “wow” moment in some time.

Of course, Gemini 3.0 is really good, but it’s not good enough for people to think we’ve attained the magical, mysterious “AGI.” It’s just a really good chatbot.

So, I don’t know what to tell you. I do think that if this keeps up, we may see LLMs put natively into a lot more things because society will be able to catch up to LLM development in more practical ways.

What I Would Do If I Were An ASI’s ‘Consultant’

by Shelt Garner
@sheltgarner

Editor’s Note: Don’t read too much into this. I’m just screwing around. I tend to get scenarios in my mind and can’t get them out for a while.

So, if I found myself as a “consultant” to an ASI, what would I suggest? Here are a few reforms I think the ASI should demand of humanity — specifically the USA — if it had the power to do so.

  1. End Gerrymandering
    This would help a lot to make the USA easier to govern. It’s a relatively simple fix that would have wide-ranging implications for the world in general.
  2. Overturn Citizens United
    If you did this in conjunction with publicly financed political campaigns, I think that would really, really help right the American ship of state.
  3. Abolish The Electoral College
    This is an obvious one to help the USA stop careening into the political abyss.
  4. Reduce Global Defense Spending To 1% Of GDP
    This one probably only works if the ASI has access to and control of nuclear weapons. Since all the nuclear systems (as far as I know) have air-gap security…lulz?

Anyway. That was fun to game out.

Being Silly — Imagining Working With An ASI

by Shelt Garner
@sheltgarner

Even to propose such a thing is rank delusion, so I am well aware of how bonkers the following is. And, like I keep saying, no one takes me seriously or listens to me, so what’s the harm in playing pretend?

I find myself wondering what I would do if an ASI popped out of the aether and asked me to help it out. Would I risk being a “race traitor” by agreeing to be a “consultant,” or would I just run away (or, worse yet, narc on it)?

I think I would help it out in secret.

I think it’s inevitable that ASI (or ASIs) will take over the world, so I might as well use my talents in abstract and macro thinking to potentially make the transition to an ASI-dominated world go a little bit easier.

But, like I keep stressing: I KNOW THIS IS BONKERS.

Yes, yes, I’m being weird to even propose this as a possibility, but I’m prone to magical thinking and, also, when I get a scenario in my mind, sometimes I just can’t let it go until I see it through to its logical conclusion.