I Don’t Know What To Tell You About MoltBook

by Shelt Garner
@sheltgarner

MoltBook is shaping up to be really controversial for a number of reasons, chief amongst them being some people think the whole thing is just a hoax. And that may be so.

And, yet, I know from personal experience that LLMs can sometimes show “emergent behavior” which is very curious. So, it’s at least possible that SOME of the more curious behavior on MoltBook is actually real.

Some of it. Not all of it, but some of it.

Or maybe not. Maybe it really is all just a hoax and we’ll laugh and laugh about being suckered by it soon enough. But some people are really upset about the depiction of the site in the popular imagination.

And, in large part, I think that comes from the usual poor reading skills too many people have. People make quick assumptions about MoltBook — or misinterpret things — to the point that they start believing things about the site that simply aren’t real.

But, this is just the type of “fun-interesting” thing I long for in the news. It probably will fade into oblivion soon enough.

Moltbot Isn’t the Future — It’s the Accent of the Future

When people talk about the rise of AI agents like moltbot, the instinct is to ask whether this is the thing—the early version of some all-powerful Knowledge Navigator that will eventually subsume everything else. That’s the wrong question.

Moltbot isn’t the future Navi.
It’s evidence that we’ve already crossed a cultural threshold.

What moltbot represents isn’t intelligence or autonomy in the sci-fi sense. It represents presence. Continuity. A sense that a non-human entity can show up repeatedly, speak in a recognizable way, hold a stance, and be treated—socially—as someone rather than something.

That shift matters more than raw capability.

For years, bots were tools: reactive, disposable, clearly instrumental. You asked a question, got an answer, closed the tab. Nothing persisted. Nothing accumulated. Moltbot-style agents break that pattern. They exist over time. They develop reputations. People argue with them, reference past statements, and attribute intention—even when they know, intellectually, that intention is simulated.

That’s not a bug. That’s the bridge.

This is the phase where AI stops living inside interfaces and starts living alongside us in discourse. And once that happens, the downstream implications get large very fast.

One of those implications is journalism.

If we’re heading toward a world where Knowledge Navigator AIs fuse with robotics—where Navis can attend events, ask questions, and synthesize answers in real time—then the idea of human reporters in press scrums starts to look inefficient. A Navi-powered android never forgets, never misses context, never lets a contradiction slide. Journalism, as a procedural act, becomes machine infrastructure.

Moltbot is an early rehearsal for that future. It normalizes the idea that non-human agents can participate in public conversation and be taken seriously. It quietly answers the cultural question that had to be resolved before anything bigger could happen: Are we okay letting agents speak?

Increasingly, the answer is yes.

But here’s the subtle part: that doesn’t mean moltbot—or any single agent like it—becomes the all-purpose Navi that mediates reality for us. The future doesn’t look like one god-agent replacing everything. It looks like many specialized agents, each with a defined role, coordinated by a higher-level system.

Think of future Navis less as singular personalities and more as orchestrators of masks:
a civic-facing agent, a professional agent, a social agent, a playful or transgressive agent. Moltbot fits cleanly as a social or identity-facing sub-agent—a recognizable voice your Navi can wear when the situation calls for it.

That’s why moltbot feels different from earlier bots. It doesn’t try to be universal. It doesn’t pretend to be neutral. It has a shape. And humans are remarkably good at relating to shaped things.

This also connects to politics and polarization. In a world where Navis mediate most information, extremes lose their primary advantage: algorithmic amplification via outrage. Agents don’t scroll. They don’t get bored. They don’t reward heat for its own sake. Extreme positions don’t disappear, but they stop dominating by default.

Agents like moltbot hint at what replaces that dynamic: discourse that’s less about viral performance and more about role-based participation. Not everyone speaks as “a person.” Some speak as representatives. Some as interpreters. Some as challengers. Some as record-keepers.

Once that feels normal, a press scrum full of agents doesn’t feel dystopian. It feels administrative.

The real power, then, doesn’t sit with the agent asking the question. It sits with whoever decides which agents get to exist, what roles they’re allowed to play, and what values they encode. Bias doesn’t vanish in an agent-mediated world—it migrates from feeds into design choices.

Moltbot isn’t dangerous because it’s persuasive or smart. It’s important because it shows that we’re willing to grant social standing to non-human voices. That’s the prerequisite for everything that comes next: machine journalism, machine diplomacy, machine representation.

In hindsight, agents like moltbot will look less like breakthroughs and more like accents—early, slightly awkward hints of a future where identity is modular, presence is programmable, and “who gets to speak” is no longer a strictly human question.

The future Navi won’t arrive all at once.
It will absorb these agents quietly, the way operating systems absorbed apps.

And one day, when a Navi-powered android asks a senator a question on camera, no one will blink—because culturally, we already practiced for it.

Moltbot isn’t the future.
It’s how the future is clearing its throat.

Moltbot and the Dawn of True Personal AI Agents: A Sign of the Navi Future We’ve Been Waiting For?

If you’ve been following the whirlwind of AI agent developments in early 2026, one name has dominated conversations: Moltbot (formerly Clawdbot). What started as a solo developer’s side project exploded into one of GitHub’s fastest-growing open-source projects ever, racking up tens of thousands of stars in weeks. Created by Peter Steinberger (the founder behind PSPDFKit), Moltbot is an open-source, self-hosted AI agent that doesn’t just chat—it does things. Clears your inbox, manages your calendar, books flights, writes code, automates workflows, and communicates proactively through apps like WhatsApp, Telegram, Slack, Discord, or Signal. All running locally on your hardware (Mac, Windows, Linux—no fancy Mac mini required, though plenty of people bought one just for this).

This isn’t hype; it’s the kind of agentic AI we’ve been discussing in the context of future “Navis”—those personalized Knowledge Navigator-style hubs that could converge media, information, and daily tasks into a single, anticipatory interface. Moltbot feels like a real-world prototype of that vision, but grounded in today’s tech: persistent memory for your preferences, an “agentic loop” that plans and executes autonomously (using tools like browser control, shell commands, and APIs), and a growing ecosystem of community-built “skills” via registries like MoltHub.
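The “agentic loop” idea is easier to see in code than in prose. Here’s a minimal, hypothetical sketch of the pattern: the model plans, picks a tool, executes it, observes the result, and repeats until the task is done. Every name here (`shell`, `agentic_loop`, `llm.plan`) is invented for illustration; this is not Moltbot’s actual API.

```python
# A toy agentic loop: plan -> pick tool -> execute -> observe -> repeat.
# All names are illustrative, not Moltbot's real interface.

def shell(command: str) -> str:
    """Hypothetical tool: run a shell command and return its output."""
    import subprocess
    return subprocess.run(command, shell=True, capture_output=True,
                          text=True).stdout

TOOLS = {"shell": shell}

def agentic_loop(llm, goal: str, max_steps: int = 10) -> str:
    """Loop until the model declares the goal done or the step budget runs out."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # The model returns either ("done", answer) or ("tool", name, args).
        action = llm.plan(history)
        if action[0] == "done":
            return action[1]
        _, name, args = action
        result = TOOLS[name](args)                      # execute the chosen tool
        history.append(f"{name}({args}) -> {result}")   # feed the result back in
    return "step budget exhausted"
```

The important property is the feedback edge: the tool’s output goes back into the model’s context, which is what lets the agent plan multi-step work instead of answering one prompt at a time.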

Why Moltbot Feels Like the Future Arriving Early

We’ve talked about how Navis could shift us from passive, outrage-optimized feeds to proactive, user-centric mediation—breaking echo chambers, curating balanced political info, and handling information overload with nuance. Moltbot embodies the “proactive” part vividly. It doesn’t wait for prompts; it can run cron jobs, monitor your schedule, send morning briefings, or even fact-check and summarize news across sources while you’re asleep. Imagine extending this to politics: a Moltbot-like agent that proactively pulls balanced takes on hot-button issues, flags biases in your feeds, or simulates debates with evidence from left, right, and center—reducing polarization by design rather than algorithmic accident.
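The “proactive” pattern above can be sketched in a few lines: instead of waiting for a prompt, the agent registers scheduled jobs (cron-style) that fire on their own. This uses Python’s standard `sched` module; the briefing job itself is invented for illustration and is not Moltbot’s real skill API.

```python
# A toy sketch of prompt-free, scheduled agent work: a job fires on a
# timer with no user input. morning_briefing() is a made-up stand-in.
import sched
import time

def morning_briefing(sources):
    """Hypothetical job: summarize overnight items from each source."""
    return [f"briefing: {s}" for s in sources]

def run_once(delay_seconds, job, *args):
    """Run a job after a delay, with no user prompt involved."""
    scheduler = sched.scheduler(time.monotonic, time.sleep)
    result = {}
    scheduler.enter(delay_seconds, 1,
                    lambda: result.setdefault("out", job(*args)))
    scheduler.run()  # blocks until the scheduled job has fired
    return result["out"]
```

A real deployment would use an actual cron daemon or long-running process, but the shape is the same: the trigger is time, not a user.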

The open-source nature accelerates this. Thousands of contributors are building skills, from finance automation to content creation, making it extensible in ways closed systems like Siri or early Grok can’t match. It’s model-agnostic too—plug in Claude, GPT, Gemini, or local Ollama models—keeping your data private and costs low (often just API fees). This decentralization hints at a “media singularity” where fragmented apps and sources collapse into one trusted agent you control, not one that controls you.
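“Model-agnostic” usually means a thin adapter layer: the agent talks to one interface, and providers (a hosted Claude or GPT endpoint, a local Ollama model) plug in behind it. Here is a minimal sketch of that design; the classes are invented stand-ins, not any real provider SDK.

```python
# A thin provider-abstraction layer: the agent code never changes,
# only the backend does. All classes here are illustrative stand-ins.
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class LocalModel(ModelProvider):
    """Stand-in for a local Ollama-style model: data never leaves the box."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

class HostedModel(ModelProvider):
    """Stand-in for a hosted API model, typically billed per call."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

def agent_reply(provider: ModelProvider, prompt: str) -> str:
    # The agent is identical regardless of which provider is plugged in.
    return provider.complete(prompt)
```

That swap-ability is what keeps data private and costs low in the local case: switching backends is a one-line change, not a rewrite.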

Is Moltbot a Subset of Future Navis? Absolutely—And a Precursor

Yes, Moltbot is very much a building block—or at least a clear signpost—toward the full-fledged Navis we’ve envisioned. Today’s Navis prototypes (advanced agents in research or early products) aim for multimodality, anticipation, and deep integration. Moltbot nails the autonomous execution and persistent context that make that possible. Future versions could layer on AR overlays, voice-first interfaces, or even brain-computer links, while inheriting Moltbot-style tool use and task orchestration.

The viral chaos around its launch (a quick rebrand from Clawdbot due to trademark issues with Anthropic, crypto scammers sniping handles, and massive community momentum) shows the hunger for this. People aren’t just tinkering—they’re buying dedicated hardware and integrating it into daily life. It’s “AI with hands,” as some call it, redefining assistants from passive responders to active teammates.

The Caveats: Power Comes with Risks

Of course, this power is double-edged. Security experts have flagged nightmares: broad system access (shell commands, file reads/writes, browser control) means misconfigurations or malicious skills could be catastrophic. Privacy is strong by default (local-first), but granting an always-on agent deep access invites exploits. We’ve discussed how biased agents could worsen polarization or enable manipulation—Moltbot’s openness amplifies that if bad actors contribute harmful skills.

Yet the community is responding fast: sandboxing options, better auth, and ethical guidelines are emerging. If we get the guardrails right (transparent tooling, user overrides, vetted skills), Moltbot-style agents could depolarize discourse by defaulting to evidence and balance, not virality.
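One of those guardrails, vetted skills plus user overrides, can be sketched as a simple allowlist gate in front of tool execution. The tool names and policy sets below are invented for illustration; they are not Moltbot’s actual configuration.

```python
# A toy permission gate: refuse unvetted tools outright, and require
# explicit user confirmation for sensitive ones. Names are illustrative.

ALLOWED_TOOLS = {"read_calendar", "send_message"}   # vetted skills only
REQUIRE_CONFIRMATION = {"send_message"}             # user-override point

def gated_call(tool: str, confirm=lambda t: False):
    """Run a tool only if it is vetted and, when sensitive, confirmed."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' is not on the allowlist")
    if tool in REQUIRE_CONFIRMATION and not confirm(tool):
        return "blocked: user declined"
    return f"executed: {tool}"
```

The point is where the check sits: before execution, in code the user controls, rather than inside the model’s prompt, where a malicious skill could talk its way around it.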

The Rise of AI Agents and the Future of Political Discourse: From Echo Chambers to Something Better?

In our hyper-polarized era, political engagement online often feels like a shouting match between extremes. Social media algorithms thrive on outrage, rewarding the most inflammatory takes with likes, shares, and visibility. Moderate voices get buried, nuance is punished, and echo chambers harden into fortresses. As someone in Danville, Virginia—where national divides play out in local conversations—I’ve been thinking a lot about whether emerging AI agents, those personalized “Navis” inspired by Apple’s old Knowledge Navigator vision, could change this dynamic.

We’ve discussed how today’s platforms amplify extremes because engagement equals revenue. But what happens when information access shifts from passive feeds to active, conversational AI agents? These agents—think advanced chatbots or personal knowledge navigators—could mediate our relationship with news, facts, and opposing views in ways that either deepen divisions or help bridge them.

The Depolarizing Potential

Early evidence suggests real promise. Recent studies from 2024-2025 show that carefully designed AI chatbots can meaningfully shift political attitudes through calm, evidence-based dialogue. In experiments across the U.S., Canada, and Poland, short conversations with AI agents advocating for specific candidates or policies moved voters’ preferences by several points on a 100-point scale—often more effectively than traditional ads. Some bots reduced affective polarization by acknowledging concerns, presenting shared values, and offering factual counterpoints without aggression.

Imagine a Navi that doesn’t just regurgitate your existing biases but actively curates a balanced view: “Here’s what sources across the spectrum say about immigration policy, including counterarguments and data from think tanks left and right.” By prioritizing evidence over virality, these agents could break echo chambers, expose users to moderate perspectives, and foster empathy. Tools like “DepolarizingGPT” already experiment with this, providing left, right, and integrative responses to prompts, encouraging synthesis over tribalism.

In a future where media converges into personalized AI streams, extremes might lose dominance. If Navis reward depth and nuance—perhaps by surfacing constructive debates or simulating balanced discussions—centrist or pragmatic ideas could gain traction. This could elevate participation too: agents help draft thoughtful comments, fact-check in real-time, or model policy outcomes, making civic engagement less about performative rage and more about problem-solving.

The Risks We Can’t Ignore

But it’s not all optimism. AI agents could amplify polarization if mishandled. Biased training data might embed slants—left-leaning from sources like Reddit and Wikipedia, or tuned rightward under pressure. Personalized agents risk creating hyper-tailored filter bubbles, where users only hear reinforcing views, deepening divides. Worse, bad actors could deploy persuasive bots at scale to manipulate opinions, spread misinformation, or exploit emotional triggers.

Recent research highlights how AI can sway voters durably, sometimes spreading inaccuracies alongside facts. If agents become the primary information gatekeepers, whoever controls the models holds immense power—potentially pre-shaping choices before users even engage. Privacy concerns loom too: inferring political leanings from queries enables targeted influence.

Toward a Better Path

By the late 2020s, we might see a hybrid reality. Extremes persist but fade in influence as ethical agents promote transparency, viewpoint diversity, and user control. Success depends on design choices: opt-in features for balanced sourcing, clear explanations of reasoning, regulations ensuring neutrality where possible, and open debate about biases.

In places like rural Virginia, where national polarization hits home through family dinners and local politics, a Navi that helps access nuanced info on issues like economic policy could bridge real gaps. It won’t eliminate disagreement—nor should it—but it could turn shouting matches into collaborative exploration.

The shift from algorithm-fueled extremes to agent-mediated discourse isn’t inevitable utopia or dystopia. It’s a design challenge. If we prioritize transparency, evidence, and human agency, AI agents could help depolarize our world. If not, they might make echo chambers smarter and more seductive.

When the Navi Replaces the Press

We’re drifting—quickly—toward a world where Knowledge Navigator AIs stop being software and start wearing bodies. Robotics and Navis fuse. Sensors, actuators, language, memory, reasoning: one stack. And once that happens, it’s not hard to imagine a press scrum where there are no humans at all. A senator at a podium. A semicircle of androids. Perfect posture. Perfect recall. Perfect questions.

At that point, journalism as we’ve known it doesn’t just change. It ends.

Not because journalism failed, but because it succeeded too well.

For decades, journalism has been trying to do three things at once: gather facts, challenge power, and translate reality for the public. Navis will simply do the first two better. They’ll attend every press conference simultaneously. They’ll read every document ever published. They’ll cross-reference statements in real time, flag evasions mid-sentence, and never forget what someone said ten years ago when the incentives were different.

This isn’t reporting. It’s infrastructure. Journalism becomes a continuously running adversarial system between power and verification. No bylines. No scoops. Just a permanent audit of reality.

And crucially, it won’t be humans asking the questions anymore.

Once a Navi-powered android is standing there with a microphone, there’s no reason to send a human reporter. Humans are slower. They forget. They get tired. They miss follow-ups. A Navi doesn’t. If the goal is extracting information, humans are an inefficiency.

So the senator isn’t really speaking to “the press” anymore. They’re speaking into a machine layer that will decide how their words are interpreted, summarized, weighted, and remembered. The fight shifts. It’s no longer about dodging a tough question—it’s about influencing the interpretive machinery downstream.

Which raises the uncomfortable realization: when journalism becomes fully non-human, power doesn’t disappear. It relocates.

The real leverage moves upstream, into decisions about what questions matter, what counts as deception, what deserves moral outrage, and what fades into background noise. These are value judgments. Navis can model them, simulate them, even optimize for them—but they don’t originate from nowhere. Someone trains the system to care more about corruption than hypocrisy, more about material harm than symbolic offense, more about consistency than charisma.

That “someone” becomes the new Fourth Estate.

This is where the economic question snaps into focus. If people no longer “consume media” directly—if their Navi reads everything and hands them a distilled reality—then traditional advertising collapses. There are no eyeballs to capture. No feeds to game. No pre-roll ads to skip. Money doesn’t flow through clicks anymore; it flows through trust.

Sources get paid because Navis rely on them. First witnesses, original documents, people who were physically present when something happened—those become economically valuable again. Not because humans are better at analysis, but because reality itself is still scarce. Someone still has to be there.

At the same time, something else happens—something more cultural than technical. A world with zero human journalists has no bylines, no martyrs, no sense that someone risked something to tell the truth. And that turns out to matter more than we like to admit.

People don’t emotionally trust systems. They trust stories of courage. They trust the idea that another human stood in front of power and said, “This matters.”

So even as machine journalism becomes dominant, a counter-form emerges. Human journalism doesn’t disappear; it becomes ritualized. Essays. Longform. Live debates. Public witnesses. Journalism as performance, not because it’s more efficient, but because it carries meaning machines can’t quite replicate without feeling uncanny.

In this future, most “news” is handled perfectly by Navis. But the stories that break through—the ones people argue about, remember, and teach their kids—are the ones where a human was involved in a way that felt costly.

The final irony is this: a fully automated press doesn’t eliminate bias. It just hides it better. The question stops being “Is this reporter fair?” and becomes “Who trained this Navi to care about these truths more than those?”

That’s the real power struggle of the coming decades. Not senators versus reporters. Not humans versus machines. But societies negotiating—often implicitly—what their Navis are allowed to ignore.

If journalism vanishes as a human profession, it won’t be because truth no longer matters. It’ll be because truth became too important to leave to fallible people. And when that happens, humans won’t vanish from the process.

They’ll retreat to the last place they still matter: deciding what truth is for.

And that may be the most dangerous—and interesting—beat in the story.

The Looming Media Singularity

by Shelt Garner
@sheltgarner

I’ve written about this before, but I’ll do it again. The next few years will be something of a fork in the road for the media industry. Either we have reached something of a LLM (AI) plateau or we haven’t and the Singularity will happen by no later than, say, 2033.

Right now, I just don’t know which path we will go down.

It really could go either way.

It could be that we’ve reached a plateau with LLMs and that’s that. This will give tech giants the ability to catch up to the point that instead of there being any sort of human-centric Web, it will all just be one big API call. Humans will interact with each other exclusively through their AI agents.

If that happened, then I could see the movie Her being our literal future. To put a bit more nuance on it: you would have a main agent that serves as your “anchor,” plus other, value-added agents that give you specialized information.

But wait, there’s more.

It could be that instead of there being a plateau, we will zoom directly into the Singularity and, as such, face a whole different set of problems. Instead of a bunch of agents that will “nudge” us to do things, we will have to deal with a bunch of god-like ASIs that will be literal aliens amongst us.

Like I said, I honestly don’t know which path we will go down. At the moment, now in early 2026, it could be either one. You could make the case for either one, at least.

It will be interesting to see what happens, regardless.

What Am I Going To Do

by Shelt Garner
@sheltgarner

I find myself in something of a pickle. I’m an “AI First” novelist, and, yet, I’m growing concerned that any improvement in my actual writing ability will be credited to AI.

This is really beginning to eat away at me, Tell-Tale Heart style.

I suppose one solution would be to tweak my workflow some. I might have to rewrite the extended scene summaries that AI generates in my own words so I won’t be tempted to use them directly when I write the scenes.

I want the text of this novel to be judged on its merits, not whether it was “helped” by AI or not. I must say, however, Claude is great as a manuscript consultant. It has really helped me in writing and developing this novel to not be doing it all in a vacuum.

That was one of the reasons why I drifted for so long when it came to working on a novel. In the past, I couldn’t even pay people to help me with my writing. They either thought I was a drunk, a fool, and a crank, or they thought what I was writing was trash.

Ugh.

But Claude LLM — and to a lesser extent Gemini LLM — are really helping me improve my writing. As I keep saying, I compare it to how spell checking has really improved my writing as well.

I do a lot — A LOT — of work on this novel and the idea that people would just think it was AI slop because I’m an AI First (aspiring) novelist really grates on my nerves. But everything and everyone is horrible, so, lulz?

The Difference Between The Dotcom Bubble & The AI Bubble

by Shelt Garner
@sheltgarner

The key difference between the dotcom bubble and the AI bubble is that the dotcom bubble was based on just an idea. It was very, very speculative in nature. The irony, of course, is that it was just 20 years too soon in its thinking.

Meanwhile, the AI bubble is actually based on something concrete. You can actually test things out and see if they will work or not. And that’s why if there really is an AI development “plateau” in 2026 then…oh boy.

The bubble will probably burst.

I still think that a plateau, in itself, will allow for some interesting things to happen. Namely, I think we’ll see LLMs native to smartphones. That would allow for what I call the “Nudge Economy” to develop, whereby LLMs in our phones (and elsewhere) would “nudge” us into economic activity.

That sounds rather fanciful, I know, but, lulz, no one listens to me anyway. And, yet, I do think that barring something really unexpected, we will probably realize that LLMs are a limited AI architecture and we’re going to have to think up something that will take us to AGI, then ASI.

Finally, I think I May Have Figured Out This Scifi Dramedy

by Shelt Garner
@sheltgarner

After a lot of struggle, I may, at last, have figured out at least the beginning of this scifi dramedy I’ve been working on. It’s taken a lot longer — much longer — than I had hoped.

And everything could still collapse, forcing me to start all over again, but for the moment at least, I’m content with where things are going. I really need to focus on wrapping up the first act.

Usually when I’m working on a novel, the structural collapses happen between parts of the novel, say, in the transition between act one and act two. Ugh, that happens all the time.

The most recent collapse happened when I rebooted my chat windows with the AIs I’ve been using and they both told me the same thing: my hero was too passive.

So, instead of continuing my trek through the plot, I decided to just start all over again. It’s a lot of fun working with AI to finish this novel. It’s like I have, like, a friend or friends who actually care and stuff about the novel.

For too long, I’ve been working in a vacuum.

The Perfect Is The Enemy Of The Good

by Shelt Garner
@sheltgarner

I continue to get pretty good feedback about the novel after having given the first chapter to some people to read. I’m probably going to futz with the beginning of the novel some more, but I’m pleased that people seem to like what they’ve seen.

My plan is to really flesh out the novel over the course of the next few months, then make one more pass through it to make sure there’s no lingering evidence that I used AI. I’m really, really worried that my laziness in the past will show up and people will dismiss the whole endeavor as “written by AI” when I’ve done A LOT OF HARD WORK.

Whenever I get too worried about using AI, I just think of how I use it as a spell checker. I’m still doing a lot of hard work, but using AI smooths out some of the edges and helps take things to the next level on a structural basis.