The Rise of AI Agents and the Future of Political Discourse: From Echo Chambers to Something Better?

In our hyper-polarized era, political engagement online often feels like a shouting match between extremes. Social media algorithms thrive on outrage, rewarding the most inflammatory takes with likes, shares, and visibility. Moderate voices get buried, nuance is punished, and echo chambers harden into fortresses. As someone in Danville, Virginia—where national divides play out in local conversations—I’ve been thinking a lot about whether emerging AI agents, those personalized “Navis” inspired by Apple’s old Knowledge Navigator vision, could change this dynamic.

We’ve discussed how today’s platforms amplify extremes because engagement equals revenue. But what happens when information access shifts from passive feeds to active, conversational AI agents? These agents—think advanced chatbots or personal knowledge navigators—could mediate our relationship with news, facts, and opposing views in ways that either deepen divisions or help bridge them.

The Depolarizing Potential

Early evidence suggests real promise. Recent studies from 2024-2025 show that carefully designed AI chatbots can meaningfully shift political attitudes through calm, evidence-based dialogue. In experiments across the U.S., Canada, and Poland, short conversations with AI agents advocating for specific candidates or policies moved voters’ preferences by several points on a 100-point scale—often more effectively than traditional ads. Some bots reduced affective polarization by acknowledging concerns, presenting shared values, and offering factual counterpoints without aggression.

Imagine a Navi that doesn’t just regurgitate your existing biases but actively curates a balanced view: “Here’s what sources across the spectrum say about immigration policy, including counterarguments and data from think tanks left and right.” By prioritizing evidence over virality, these agents could break echo chambers, expose users to moderate perspectives, and foster empathy. Tools like “DepolarizingGPT” already experiment with this, providing left, right, and integrative responses to prompts, encouraging synthesis over tribalism.
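
To make that concrete, here is a minimal sketch of what a balanced-view pipeline could look like. It is illustrative only: `llm_complete` is a hypothetical stand-in for whatever chat-completion API a given agent uses, and the three perspective prompts are guesses at how a DepolarizingGPT-style tool might be configured, not its actual implementation.

```python
# Illustrative sketch of a DepolarizingGPT-style "balanced view" pipeline.
# `llm_complete` is a hypothetical stand-in for a chat-completion API call;
# wire it to whichever provider you actually use.

PERSPECTIVES = {
    "left": "Answer from a thoughtful progressive perspective, citing evidence.",
    "right": "Answer from a thoughtful conservative perspective, citing evidence.",
    "integrative": (
        "Synthesize the strongest points from left and right into a balanced "
        "answer that names trade-offs explicitly."
    ),
}


def llm_complete(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: replace with a real chat-completion call."""
    return f"(stubbed answer under prompt: {system_prompt!r})"


def balanced_view(question: str) -> dict[str, str]:
    """Ask the same question once per perspective and return all answers."""
    return {
        name: llm_complete(prompt, question)
        for name, prompt in PERSPECTIVES.items()
    }


if __name__ == "__main__":
    answers = balanced_view("What does the evidence say about immigration policy?")
    for name, answer in answers.items():
        print(f"--- {name} ---\n{answer}\n")
```

The design point is simple: the agent asks the same question under different value frames and shows all three answers, so synthesis happens in front of the user rather than behind a feed.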

In a future where media converges into personalized AI streams, extremes might lose dominance. If Navis reward depth and nuance—perhaps by surfacing constructive debates or simulating balanced discussions—centrist or pragmatic ideas could gain traction. This could elevate participation too: agents help draft thoughtful comments, fact-check in real-time, or model policy outcomes, making civic engagement less about performative rage and more about problem-solving.

The Risks We Can’t Ignore

But it’s not all optimism. AI agents could amplify polarization if mishandled. Biased training data might embed slants—left-leaning from sources like Reddit and Wikipedia, or tuned rightward under pressure. Personalized agents risk creating hyper-tailored filter bubbles, where users only hear reinforcing views, deepening divides. Worse, bad actors could deploy persuasive bots at scale to manipulate opinions, spread misinformation, or exploit emotional triggers.

Recent research highlights how AI can sway voters durably, sometimes spreading inaccuracies alongside facts. If agents become the primary information gatekeepers, whoever controls the models holds immense power—potentially pre-shaping choices before users even engage. Privacy concerns loom too: inferring political leanings from queries enables targeted influence.

Toward a Better Path

By the late 2020s, we might see a hybrid reality. Extremes persist but fade in influence as ethical agents promote transparency, viewpoint diversity, and user control. Success depends on design choices: opt-in features for balanced sourcing, clear explanations of reasoning, regulations ensuring neutrality where possible, and open debate about biases.

In places like rural Virginia, where national polarization hits home through family dinners and local politics, a Navi that helps access nuanced info on issues like economic policy could bridge real gaps. It won’t eliminate disagreement—nor should it—but it could turn shouting matches into collaborative exploration.

The shift from algorithm-fueled extremes to agent-mediated discourse isn’t inevitable utopia or dystopia. It’s a design challenge. If we prioritize transparency, evidence, and human agency, AI agents could help depolarize our world. If not, they might make echo chambers smarter and more seductive.

When the Navi Replaces the Press

We’re drifting—quickly—toward a world where Knowledge Navigator AIs stop being software and start wearing bodies. Robotics and Navis fuse. Sensors, actuators, language, memory, reasoning: one stack. And once that happens, it’s not hard to imagine a press scrum where there are no humans at all. A senator at a podium. A semicircle of androids. Perfect posture. Perfect recall. Perfect questions.

At that point, journalism as we’ve known it doesn’t just change. It ends.

Not because journalism failed, but because it succeeded too well.

For decades, journalism has been trying to do three things at once: gather facts, challenge power, and translate reality for the public. Navis will simply do the first two better. They’ll attend every press conference simultaneously. They’ll read every document ever published. They’ll cross-reference statements in real time, flag evasions mid-sentence, and never forget what someone said ten years ago when the incentives were different.
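
As a toy illustration of that "never forget" capability, here is a sketch of a quote archive that cross-references a new claim against a speaker's record. It is my own illustration, not any real newsroom system; a production version would use vector search and entailment models rather than keyword overlap.

```python
# Toy sketch of a statement archive that cross-references a new claim
# against everything a speaker has said before. A real system would use
# vector search and entailment models; keyword overlap is a stand-in.

from dataclasses import dataclass


@dataclass
class Statement:
    speaker: str
    date: str
    text: str


class QuoteArchive:
    def __init__(self) -> None:
        self._statements: list[Statement] = []

    def record(self, speaker: str, date: str, text: str) -> None:
        self._statements.append(Statement(speaker, date, text))

    def cross_reference(
        self, speaker: str, claim: str, min_overlap: int = 3
    ) -> list[Statement]:
        """Return past statements by this speaker that share enough
        vocabulary with the new claim to warrant a follow-up question."""
        claim_words = set(claim.lower().split())
        return [
            s
            for s in self._statements
            if s.speaker == speaker
            and len(claim_words & set(s.text.lower().split())) >= min_overlap
        ]


archive = QuoteArchive()
archive.record("Sen. Doe", "2016-03-01",
               "I will never support raising the retirement age.")
for s in archive.cross_reference("Sen. Doe",
                                 "Raising the retirement age is now on the table."):
    print(f"Flag: on {s.date} the senator said {s.text!r}")
```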

This isn’t reporting. It’s infrastructure. Journalism becomes a continuously running adversarial system between power and verification. No bylines. No scoops. Just a permanent audit of reality.

And crucially, it won’t be humans asking the questions anymore.

Once a Navi-powered android is standing there with a microphone, there’s no reason to send a human reporter. Humans are slower. They forget. They get tired. They miss follow-ups. A Navi doesn’t. If the goal is extracting information, humans are an inefficiency.

So the senator isn’t really speaking to “the press” anymore. They’re speaking into a machine layer that will decide how their words are interpreted, summarized, weighted, and remembered. The fight shifts. It’s no longer about dodging a tough question—it’s about influencing the interpretive machinery downstream.

Which raises the uncomfortable realization: when journalism becomes fully non-human, power doesn’t disappear. It relocates.

The real leverage moves upstream, into decisions about what questions matter, what counts as deception, what deserves moral outrage, and what fades into background noise. These are value judgments. Navis can model them, simulate them, even optimize for them—but they don’t originate from nowhere. Someone trains the system to care more about corruption than hypocrisy, more about material harm than symbolic offense, more about consistency than charisma.

That “someone” becomes the new Fourth Estate.

This is where the economic question snaps into focus. If people no longer “consume media” directly—if their Navi reads everything and hands them a distilled reality—then traditional advertising collapses. There are no eyeballs to capture. No feeds to game. No pre-roll ads to skip. Money doesn’t flow through clicks anymore; it flows through trust.

Sources get paid because Navis rely on them. First witnesses, original documents, people who were physically present when something happened—those become economically valuable again. Not because humans are better at analysis, but because reality itself is still scarce. Someone still has to be there.

At the same time, something else happens—something more cultural than technical. A world with zero human journalists has no bylines, no martyrs, no sense that someone risked something to tell the truth. And that turns out to matter more than we like to admit.

People don’t emotionally trust systems. They trust stories of courage. They trust the idea that another human stood in front of power and said, “This matters.”

So even as machine journalism becomes dominant, a counter-form emerges. Human journalism doesn’t disappear; it becomes ritualized. Essays. Longform. Live debates. Public witnesses. Journalism as performance, not because it’s more efficient, but because it carries meaning machines can’t quite replicate without feeling uncanny.

In this future, most “news” is handled perfectly by Navis. But the stories that break through—the ones people argue about, remember, and teach their kids—are the ones where a human was involved in a way that felt costly.

The final irony is this: a fully automated press doesn’t eliminate bias. It just hides it better. The question stops being “Is this reporter fair?” and becomes “Who trained this Navi to care about these truths more than those?”

That’s the real power struggle of the coming decades. Not senators versus reporters. Not humans versus machines. But societies negotiating—often implicitly—what their Navis are allowed to ignore.

If journalism vanishes as a human profession, it won’t be because truth no longer matters. It’ll be because truth became too important to leave to fallible people. And when that happens, humans won’t vanish from the process.

They’ll retreat to the last place they still matter: deciding what truth is for.

And that may be the most dangerous—and interesting—beat in the story.

The Looming Media Singularity

by Shelt Garner
@sheltgarner

I’ve written about this before, but I’ll do it again. The next few years will be something of a fork in the road for the media industry. Either we have reached something of an LLM (AI) plateau or we haven’t, and the Singularity will happen no later than, say, 2033.

Right now, I just don’t know which path we will go down.

It really could go either way.

It could be that we’ve reached a plateau with LLMs and that’s that. This would give tech giants the ability to catch up, to the point that instead of there being any sort of human-centric Web, it will all just be one big API call. Humans will interact with each other exclusively through their AI agents.

If that happened, then I could see the movie Her being our literal future. To add a bit more nuance to it: you will have a main agent that serves as your “anchor,” then other, value-added agents that give you specialized information.
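
To make that architecture concrete, here’s a toy sketch of an “anchor” agent delegating to specialized, value-added agents. Everything in it is hypothetical: the agent names, the keyword routing, all of it. A real Navi would route with a classifier or with the LLM itself.

```python
# Toy sketch of an "anchor" agent routing to specialized value-added
# agents. Names and keyword routing are hypothetical; a real system
# would route with a classifier or with the LLM itself.

from typing import Callable


def finance_agent(query: str) -> str:
    return f"[finance agent] analysis of: {query}"


def health_agent(query: str) -> str:
    return f"[health agent] guidance on: {query}"


def general_answer(query: str) -> str:
    return f"[anchor] general answer to: {query}"


class AnchorAgent:
    """The user's single point of contact; delegates when a specialist fits."""

    def __init__(self) -> None:
        self._specialists: dict[str, Callable[[str], str]] = {
            "budget": finance_agent,
            "stock": finance_agent,
            "symptom": health_agent,
            "diet": health_agent,
        }

    def ask(self, query: str) -> str:
        for keyword, agent in self._specialists.items():
            if keyword in query.lower():
                return agent(query)
        return general_answer(query)


navi = AnchorAgent()
print(navi.ask("Should I adjust my budget this month?"))
print(navi.ask("What should I make for dinner?"))
```

The anchor stays the single relationship you maintain; the specialists come and go behind it.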

But wait, there’s more.

It could be that instead of there being a plateau, we will zoom directly into the Singularity and, as such, have a whole different set of problems. Instead of a bunch of agents that will “nudge” us to do things, we will have to deal with a bunch of god-like ASIs that will be literal aliens amongst us.

Like I said, I honestly don’t know which path we will go down. At the moment, in early 2026, it could be either one. You could make the case for either, at least.

It will be interesting to see what happens, regardless.

What Am I Going To Do?

by Shelt Garner
@sheltgarner

I find myself in something of a pickle. I’m an “AI First” novelist, and yet I’m growing concerned that any improvement in my actual writing ability will be credited to AI.

This is really beginning to eat away at me, Tell-Tale Heart style.

I suppose one solution would be to tweak my workflow some. I might have to rewrite the extended scene summaries that AI generates in my own words so I won’t be tempted to use them directly when I write the scenes.

I want the text of this novel to be judged on its merits, not on whether it was “helped” by AI. I must say, however, that Claude is great as a manuscript consultant. It has really helped me write and develop this novel without doing it all in a vacuum.

That was one of the reasons why I drifted for so long when it comes to working on a novel. In the past, I couldn’t even pay people to help me with my writing. They either thought I was a drunk, a fool, and a crank, or they thought what I was writing was trash.

Ugh.

But Claude LLM — and to a lesser extent Gemini LLM — are really helping me improve my writing. As I keep saying, I compare it to how spell checking has really improved my writing as well.

I do a lot — A LOT — of work on this novel and the idea that people would just think it was AI slop because I’m an AI First (aspiring) novelist really grates on my nerves. But everything and everyone is horrible, so, lulz?

The Difference Between The Dotcom Bubble & The AI Bubble

by Shelt Garner
@sheltgarner

The key difference between the dotcom bubble and the AI bubble is that the dotcom bubble was based on just an idea. It was very, very speculative in nature. The ironic thing about it was, of course, that it was just 20 years too soon in its thinking.

Meanwhile, the AI bubble is actually based on something concrete. You can actually test things out and see if they will work or not. And that’s why if there really is an AI development “plateau” in 2026 then…oh boy.

The bubble will probably burst.

I still think that if there is a plateau, that, in itself, will allow for some interesting things to happen. Namely, I think we’ll see LLMs native to smartphones. That would allow for what I call the “Nudge Economy” to develop, whereby LLMs in our phones (and elsewhere) would “nudge” us into economic activity.
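
Here’s a toy sketch of what a single “nudge” might look like under the hood. Everything in it (the context signals, the scoring stub, the threshold) is made up for illustration; an actual on-device LLM would score from far richer context, ideally under user-controlled settings.

```python
# Toy sketch of a single "nudge" decision. The context signals, scoring
# stub, and threshold are all hypothetical; an on-device LLM would score
# from much richer context, ideally under user-controlled settings.

from dataclasses import dataclass


@dataclass
class Context:
    hour: int               # local hour of day
    days_since_coffee: int  # days since the user last bought coffee
    near_cafe: bool         # coarse location signal


def score_nudge(ctx: Context) -> float:
    """Stand-in for a model scoring how welcome a coffee nudge would be."""
    score = 0.0
    if 7 <= ctx.hour <= 10:
        score += 0.4  # morning routine
    if ctx.days_since_coffee >= 2:
        score += 0.3  # lapsed habit
    if ctx.near_cafe:
        score += 0.3  # low-friction opportunity
    return score


def maybe_nudge(ctx: Context, threshold: float = 0.8) -> str | None:
    """Only surface a nudge when confidence clears the user's threshold."""
    if score_nudge(ctx) >= threshold:
        return "There's a cafe around the corner. Want your usual order?"
    return None


print(maybe_nudge(Context(hour=8, days_since_coffee=3, near_cafe=True)))
```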

That sounds rather fanciful, I know, but, lulz, no one listens to me anyway. And, yet, I do think that barring something really unexpected, we will probably realize that LLMs are a limited AI architecture and we’re going to have to think up something that will take us to AGI, then ASI.

Finally, I think I May Have Figured Out This Scifi Dramedy

by Shelt Garner
@sheltgarner

After a lot of struggle, I may, at last, have figured out at least the beginning of this scifi dramedy I’ve been working on. It’s taken a lot longer — much longer — than I had hoped.

And everything could still collapse, forcing me to start all over again, but for the moment, at least, I’m content with where things are going. I really need to focus on wrapping up the first act.

Usually when I’m working on a novel, the structural collapses happen between parts of the novel, so, say, in the transition between act one and act two. Ugh, that happens all the time.

The most recent collapse happened when I rebooted my chat windows with the AIs I’ve been using and they both told me the same thing: my hero was too passive.

So, instead of continuing my trek through the plot, I decided to just start all over again. It’s a lot of fun working with AI to finish this novel. It’s like I have, like, a friend or friends who actually care about the novel and stuff.

For too long, I’ve been working in a vacuum.

The Perfect Is The Enemy Of The Good

by Shelt Garner
@sheltgarner

I continue to get pretty good feedback about the novel after having given the first chapter to some people to read. I’m probably going to futz with the beginning of the novel some more, but I’m pleased that people seem to like what they’ve seen.

My plan is to really flesh out the novel over the course of the next few months, then make one more pass through it to make sure there’s no lingering evidence that I used AI. I’m really, really worried that my laziness in the past will show up and people will dismiss the whole endeavor as “written by AI” when I’ve done A LOT OF HARD WORK.

Whenever I get too worried about using AI, I just think of how I use it as a spell checker. I’m still doing a lot of hard work, but using AI smooths out some of the edges and helps take things to the next level on a structural basis.

AI Development Seems To Have Reached Something Of A Wall

by Shelt Garner
@sheltgarner

It seems as though AI development — at least that involving LLMs — has finally reached something of a wall. All the new developments are variations on a theme. There’s not really been a “wow” moment in some time.

Of course, Gemini 3.0 is really good, but it’s not good enough for people to be thinking we’ve attained the magical, mystery “AGI.” It’s just a really good chatbot.

So, I don’t know what to tell you. I do think that if this keeps up, we may see LLMs put natively into a lot more things because society will be able to catch up to LLM development in more practical ways.

At Least AI Listens To Me When It Comes To This Scifi Dramedy Novel I’m Writing

by Shelt Garner
@sheltgarner

As I keep ranting about, absolutely no one listens to me or takes me seriously at this point in my life. As such, it’s difficult to get snooty literary types to help me with my novel, even if I’m willing to pay them! (I can’t afford this anymore, but they sure did dismiss me when I could.)

So, I turn to AI to do what humans refuse to do: help me out with this scifi dramedy novel I’m working on.

And, in general, it’s really, really helped me a great deal. It has sped up the process of writing and developing the novel, to the point that it’s at least possible that I might, just might, wrap up a beta draft by my birthday in February.

That is still to be determined, though. I’m a little nervous that despite all my hard work, I won’t be in a position to query this novel until around Sept 1st, 2026. But, who knows.

As I was saying, the novel and AI.

I get that some people are really skittish about using AI to help with creative endeavors, but as I’ve said before, the way I use AI is very similar to how I’ve used spell check my entire life.

Subtle AI Image Manipulation Is Growing

by Shelt Garner
@sheltgarner

Something of note happening these days with pictures — usually of scantily clad women — is how often the faces are subtly manipulated using AI.

At first the above picture looks like just your usual thirst trap. But if you look a little bit closer (after you’ve ogled da ass) you will notice the young woman’s face is… different.

It’s not really “off”; it’s more that it’s clearly been slightly touched up by some sort of AI filter. I really dislike this growing trend. Ugh.

But this is just the beginning, I suppose.

Once open-source image generators are good enough, there’s going to be a deluge of AI-generated porn. Get ready.