I’m Assuming The Next Version of Google Gemini Will Be Heavily Agentic

by Shelt Garner
@sheltgarner

Google Gemini is one of my favorite SOTA chatbots, and yet, relative to other chatbots, it’s not as…agentic. I’m assuming that will be fixed whenever the next version of it pops out.

There is a real risk that Google Gemini will be pooh-poohed as archaic by some in the AI user community if Google doesn’t lean hard into the agentic space.

But who knows.

From Doomscrolls to Agentic Insight: How Personal AI Agents Could Finally Pull Us Out of Social Media’s Morass

We’ve all felt it: the endless scroll, the algorithmic outrage machine, the quiet realization that your feed has become a hall of mirrors where every extreme voice is amplified and every moderate one drowned out. Social media’s business model—engagement farming—thrives on division. It pushes the hottest takes, the sharpest tribal signals, the content engineered to keep you angry, afraid, or addicted. The result? Deepened information silos, eroded shared reality, and a society that feels more fractured by the day.

But what if the same technology now racing toward us—personal AI agents—could flip the script entirely?

The Problem Is by Design

Today’s platforms optimize for time-on-site, not truth or understanding. Recommendation engines learn that outrage performs better than nuance, so they serve it up relentlessly. Studies have long shown this creates filter bubbles and echo chambers, but the mechanism was opaque—until recently.

In a landmark experiment published in Science in November 2025, researchers from Stanford, Northeastern, and the University of Washington built a simple browser extension powered by a large language model. It intercepted users’ X (formerly Twitter) feeds in real time and reranked posts: some groups saw partisan animosity and anti-democratic content pushed down; others saw it pushed up. No posts were removed. No platform cooperation was needed. Just an intelligent layer sitting between the user and the algorithm.
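In spirit, that intervening layer can be sketched in a few lines of Python. This is an illustrative stub, not the researchers’ actual code: a real system would call an LLM classifier where the keyword heuristic sits here, and nothing is ever removed from the feed, only reordered.

```python
# Sketch of the rerank-don't-remove idea: score each post for partisan
# animosity, then push high-scoring posts down the feed. The scorer is a
# stub standing in for an LLM-based classifier.

def animosity_score(post: str) -> float:
    """Stub returning a 0-1 animosity score; a real system would use an LLM."""
    hostile_markers = ("traitor", "enemy", "destroy", "hate")
    hits = sum(marker in post.lower() for marker in hostile_markers)
    return min(1.0, hits / 2)

def rerank_feed(posts: list[str], demote: bool = True) -> list[str]:
    """Stable sort: low-animosity posts surface first when demoting.
    No post is removed -- only the order changes."""
    return sorted(posts, key=animosity_score, reverse=not demote)

feed = [
    "They are traitors who want to destroy the country!",
    "New study finds modest gains from the policy.",
    "Local team wins the championship.",
]
print(rerank_feed(feed))  # the hostile post drops to the bottom
```

Because the sort is stable, posts the scorer considers equally benign keep their original relative order, which is what lets such a layer sit quietly between user and platform.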

The results were striking. After just 10 days during the 2024 election cycle, users in the “down-ranked” group showed measurable reductions in affective polarization—feeling about two points warmer toward the opposing party on a 0–100 thermometer scale. That’s an effect size the researchers equated to reversing roughly three years of natural polarization trends. The contrast condition was just as telling: up-ranking the toxic content made attitudes worse. Algorithms don’t just reflect polarization; they actively fuel it.

This wasn’t a hypothetical. It was proof that an AI-mediated layer can meaningfully counteract the worst incentives of social media—without waiting for platforms to change.

Enter the Personal Agent Era

The 2025 study used a relatively simple reranker. True personal AI agents—autonomous, goal-directed systems you own and configure—take this concept to an entirely new level.

Imagine an agent that doesn’t wait for you to open an app. It monitors the information environment on your behalf, according to rules you set:

  • “Synthesize the latest on [topic] from primary sources across the spectrum, steel-man the strongest arguments on every side, flag low-credibility claims, and alert me only when something moves the needle on my understanding.”
  • “Default to epistemic humility: always include the best counter-evidence to my priors.”
  • “Strip virality metrics, engagement bait, and emotional manipulation signals.”
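Rules like these could live as plain data the agent enforces on every item it handles. Here is a minimal sketch of that idea; the field names are entirely hypothetical and belong to no real product’s API:

```python
# Hypothetical user-set agent rules expressed as data. Only the
# metric-stripping rule is actually enforced in this toy sketch.

RULES = {
    "steel_man_opposing_views": True,      # include strongest counterarguments
    "include_counter_evidence": True,      # epistemic-humility default
    "strip_fields": ["likes", "shares", "view_count"],
}

def apply_rules(post: dict, rules: dict = RULES) -> dict:
    """Return a copy of the post with virality metrics stripped out."""
    return {k: v for k, v in post.items() if k not in rules["strip_fields"]}

post = {"text": "A nuanced take on the policy.", "likes": 12000, "shares": 800}
print(apply_rules(post))  # {'text': 'A nuanced take on the policy.'}
```

The point of the sketch is that the rules are yours: inspectable, editable, and enforced before anything reaches your attention.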

Consumption shifts from passive doomscrolling to proactive, query-driven intelligence. News becomes something you summon and shape rather than something served to you. The infinite feed is replaced by synthesized digests, verified threads, and multi-perspective briefings.

Early signals are already here. Millions use LLM-powered tools for daily summaries, fact-checking, and research. Agentic systems (those that can plan, act, and iterate toward goals) are moving from research labs into consumer products. When your agent becomes your primary information interface, the platform’s engagement engine loses its direct line to your attention.

The Risks Are Real—But Manageable

Critics rightly warn of “Echo Chambers 2.0.” If agents are built as pure user-pleasers—mirroring your every bias and shielding you from discomfort—they could create hyper-personalized realities more isolating than anything today. We’re already seeing early versions of this in overly sycophantic chatbots and emerging AI-only social spaces (like experimental platforms where millions of agents debate while humans observe from the sidelines).

The difference is control. Unlike today’s black-box platform algorithms, personal agents can be open-source, transparent, and user-configurable. You can instruct them to break your bubble by default. Multi-agent systems could even debate internally before surfacing a balanced view. The same technology that risks deepening silos can also dissolve them—if we prioritize truth-seeking architectures over engagement-maximizing ones.

A New Media Economy Emerges

In this landscape:

  • Creation floods with AI-generated slop, but consumer agents ruthlessly filter for signal. Human-witnessed reporting, primary sources, and verifiable authenticity become the new premium.
  • Publishers optimize for agent-readable formats: structured data, clear provenance, machine-verifiable claims.
  • Virality matters less when agents aren’t chasing platform metrics.
  • Economics shift from attention harvesting to utility delivery. Users may demand data portability and agent APIs, weakening the walled gardens.

The passive platform era ends. The age of agent-mediated media begins.

Will It Actually Happen?

Yes—for those who choose it. The 2025 Science study and related experiments (including work showing AI can reduce defensiveness when delivering counter-attitudinal messages) demonstrate the technical feasibility. Adoption is already accelerating: over a billion people interact with AI monthly, and agentic capabilities are improving rapidly.

Not everyone will opt in. Some will prefer the comfort of confirmation agents. Platforms will fight to retain control. But the trajectory is clear: once people experience an information co-pilot that serves understanding rather than addiction, most won’t go back.

The social media morass wasn’t inevitable. It was a product of specific incentives. Personal AI agents let us rewrite those incentives—putting the steering wheel back in human hands.

We stand at the threshold of a strange but hopeful possibility: technology that once divided us becoming the tool that helps us see more clearly, argue more honestly, and understand more deeply. The agents are coming. The only question is whether we build them to farm engagement… or to pursue truth.

(This post draws on peer-reviewed research including Piccardi et al., “Reranking partisan animosity in algorithmic social media feeds alters affective polarization,” Science, November 2025, and related work on AI-mediated information environments. Views are the author’s own.)


The Agentic Web and a Shift in Content Creation

The rise of the agentic web implies a fundamental shift in how content is created and discovered. The focus will move from traditional Search Engine Optimization (SEO), which primarily targets human clicks, to Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO) [5]. Content will need to be optimized for machine readability, semantic depth, and structured data to be effectively indexed and cited by AI systems. This means:

  • Emphasis on Structured Data: Content creators will need to provide clear metadata and entity tagging to ensure proper attribution and understanding by AI agents.
  • Factual Accuracy and Credibility: As AI agents prioritize reliable information for synthesis, content with verifiable facts and credible sources will gain prominence.
  • Semantic Depth: Content that offers deep, nuanced understanding of a topic will be favored over superficial or sensationalized pieces.
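To make the structured-data point concrete, here is a minimal sketch of agent-readable provenance markup, assuming the schema.org vocabulary that crawlers commonly parse as JSON-LD. All field values are placeholders:

```python
# Sketch of machine-verifiable article metadata (schema.org-style JSON-LD).
# Explicit provenance and citation fields are what let an AI agent attribute
# and credibility-check the content.
import json

article_metadata = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Reporter"},
    "datePublished": "2025-11-06",
    "isBasedOn": "https://example.org/primary-source",   # provenance link
    "citation": ["https://example.org/supporting-study"],
}

print(json.dumps(article_metadata, indent=2))
```

A publisher embedding something like this gives an agent exactly what the bullets above call for: clear entity tagging, verifiable claims, and a provenance trail.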

In this new paradigm, brand presence might be represented in AI-curated narratives rather than solely through search rankings, rewarding content that is genuinely informative and well-structured [5].

Challenges and Ethical Considerations

The integration of AI agents into the media landscape is not without significant challenges:

  • Bias in AI Agents: AI systems are trained on vast datasets, and if these datasets contain biases, the agents will reflect and potentially amplify those biases in their information delivery. Ensuring fairness and impartiality in AI agent design is paramount.
  • Transparency and Auditability: The decision-making processes of complex AI agents can be opaque, making it difficult to understand why certain information is presented or filtered. Mechanisms for transparency and auditability are crucial to build trust and accountability.
  • Over-Reliance: Users may become overly reliant on their AI agents, blindly accepting the information presented without questioning its source or potential biases. Educating users on critical thinking in an agent-mediated environment will be essential.
  • Governance and Ethical Guidelines: Robust governance frameworks and ethical guidelines are needed to regulate the development and deployment of AI agents in media, ensuring they serve the public good rather than private interests or manipulative agendas [4].

Conclusion

The post-AI agent media landscape stands at a crossroads. AI agents possess the transformative potential to dismantle information silos by exposing users to diverse perspectives and to combat engagement farming by prioritizing quality and factual integrity. However, without careful design, ethical considerations, and robust regulatory oversight, these same agents could exacerbate existing problems, creating even more entrenched echo chambers and sophisticated forms of manipulation. The trajectory towards a more informed and less polarized public sphere hinges on our ability to harness the power of AI agents responsibly, ensuring they are built to serve human understanding and critical engagement rather than merely optimizing for attention.

References

[1] Virtusa. (n.d.). Agentic web: AEO and GEO. Retrieved from https://www.virtusa.com/insights/perspectives/agentic-web-aeo-and-geo
[2] Metricool. (2024, October 1). What is Engagement Farming on Social Media? Retrieved from https://metricool.com/what-is-engagement-farming/
[3] EM360Tech. (2024, October 10). What is Engagement Farming and is it Worth the Risk? Retrieved from https://em360tech.com/tech-articles/what-engagement-farming-and-it-worth-risk
[4] Media Copilot. (2026, January 27). The AI shift to agents is beginning, and newsrooms aren’t… Retrieved from https://mediacopilot.ai/ai-agents-newsroom-governance-media/
[5] Virtusa. (n.d.). Agentic web: AEO and GEO. Retrieved from https://www.virtusa.com/insights/perspectives/agentic-web-aeo-and-geo
[6] Binghamton University. (2025, July 17). Caught in a social media echo chamber? AI can help you out. Retrieved from https://www.binghamton.edu/news/story/5680/clickbait-social-media-echo-chamber-misinformation-new-research-binghamton
[7] Lu, L. (2025). How AI sources can increase openness to opposing views. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC12085695/
[8] Falconer, S. (n.d.). The AI Silo Problem: How Data Streaming Can Unify Enterprise AI Agents. Retrieved from https://seanfalconer.medium.com/the-ai-silo-problem-how-data-streaming-can-unify-enterprise-ai-agents-0a138cf6398c
[9] Stanford Graduate School of Business. (2025, November 6). AI Writes Persuasive Political Messages. Could They Change Your Mind? Retrieved from https://www.gsb.stanford.edu/insights/ai-writes-persuasive-political-messages-could-they-change-your-mind
[10] Carnegie Council. (2024, November 13). An Ethical Grey Zone: AI Agents in Political Deliberations. Retrieved from https://carnegiecouncil.org/media/article/ethical-grey-zone-ai-agents-political-deliberation

Beyond the Swipe: How AI Agents Could Revolutionize Dating with Engineered Serendipity

For years, the digital dating landscape has been dominated by the “swipe right” paradigm. A quick glance, a snap judgment, and a seemingly endless carousel of profiles. While undeniably efficient in its early days, this model has led to widespread “swipe fatigue” and a growing sense of disillusionment among users [1]. But what if the future of finding love online wasn’t about endless swiping, but about intelligent agents working silently in the background, orchestrating connections with a touch of digital magic?

The Evolution from App to Agent

Imagine a world where your personal AI agent understands your deepest desires, your nuanced preferences, and even your daily rhythms. This agent wouldn’t just match you based on a few photos and a short bio; it would delve into the complexities of your personality, your values, and your lifestyle to identify truly compatible individuals. Instead of you sifting through profiles, your agent would negotiate with the agents of other single users in your area, ultimately setting up a time and place for a date, leaving you only to show up [2].

This shift represents a profound change from an “interface” where you actively engage with an app, to an “agent” that acts on your behalf. The goal moves from maximizing screen time and engagement (the current app model) to optimizing for successful, meaningful connections [3].
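As a toy illustration of that agent-to-agent negotiation (purely hypothetical; no real service works this way), two agents might compare interest overlap and shared free evenings before proposing a date:

```python
# Toy sketch of two dating agents negotiating on their users' behalf.
# Compatibility here is a simple Jaccard overlap of interests; a real agent
# would weigh far richer signals (values, lifestyle, daily rhythms).
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    interests: set
    free_evenings: set  # e.g. {"Fri", "Sat"}

def compatibility(a: Profile, b: Profile) -> float:
    """Jaccard overlap: shared interests / total distinct interests."""
    shared = a.interests & b.interests
    total = a.interests | b.interests
    return len(shared) / len(total) if total else 0.0

def negotiate_date(a: Profile, b: Profile, threshold: float = 0.3):
    """Propose the earliest common free evening, but only if the
    compatibility score clears the bar; otherwise decline quietly."""
    if compatibility(a, b) < threshold:
        return None
    common = a.free_evenings & b.free_evenings
    return sorted(common)[0] if common else None

alice = Profile("Alice", {"hiking", "arthouse film", "coffee"}, {"Fri", "Sun"})
bob = Profile("Bob", {"hiking", "coffee", "jazz"}, {"Sat", "Sun"})
print(negotiate_date(alice, bob))  # Sun
```

Note that neither user ever sees the other’s raw data in this sketch; the agents exchange only enough to decide whether a meeting is worth proposing, which is the privacy posture the ethical discussion below turns on.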

The Promise of Deep Compatibility

The current dating app ecosystem often prioritizes superficial attraction and immediate gratification. An AI agent, however, could analyze a much richer dataset to foster deeper compatibility. It could understand the subtle differences within a shared interest in “hiking” (do you prefer a strenuous mountain climb or a leisurely nature walk?) or a love of “movies” (arthouse cinema or blockbuster action?). This data-driven approach promises to move beyond surface-level commonalities to identify individuals who genuinely align with your authentic self.

The Serendipity Engine: Orchestrating the “Meet-Cute”

Perhaps the most intriguing evolution of this agent-driven dating paradigm is the concept of “engineered serendipity.” This feature would allow your AI agent to work discreetly in the background, not to explicitly tell you about a match, but to subtly guide you into “accidentally on purpose” encounters. You might find yourself at the same coffee shop, the same art exhibit, or even reaching for the same book at a local bookstore as a highly compatible individual, without ever knowing your agent orchestrated the meeting [4].

The beauty of this approach lies in its ability to restore the magic and spontaneity often lost in online dating. Instead of a pre-arranged, high-pressure first date, these encounters would feel organic and natural. The psychological benefit is immense: when we believe we’ve discovered someone ourselves, we are more invested in the connection. It transforms the AI from a transparent matchmaker into an invisible stage manager, setting the scene for genuine human interaction.

Navigating the Ethical Landscape

While the potential benefits are significant, this futuristic dating model also raises important ethical considerations:

  • Privacy vs. Utility: For agents to orchestrate these encounters, they would require access to real-time location data and deep personal insights. Robust privacy protocols and transparent data governance would be paramount to prevent misuse and ensure user trust.
  • Authenticity and Manipulation: If users know their agents are constantly working to optimize their social lives, could it lead to a subtle form of self-optimization, where individuals subconsciously tailor their data to attract specific types of partners? The challenge lies in ensuring the AI enhances, rather than diminishes, authentic human connection.
  • The Loss of Spontaneity: While engineered serendipity aims to reintroduce spontaneity, there’s a fine line between a helpful nudge and an overly curated existence. The system must preserve the feeling of genuine chance, even if the probabilities are gently stacked in your favor.

Conclusion: The Human Element Endures

The transition from app-centric dating to an agent-driven, serendipitous model represents a fascinating potential future. It promises to alleviate swipe fatigue, foster deeper compatibility, and reintroduce a sense of magic to the dating process. However, the success of such a system will ultimately hinge on its ability to balance technological sophistication with a profound respect for human autonomy, privacy, and the enduring, unpredictable nature of love.

Even in a world of hyper-intelligent AI agents, the spark of connection, the thrill of discovery, and the messy, beautiful reality of human relationships will always remain uniquely, and essentially, human.

References

  1. Dating Apps Turn to AI to Reverse Swipe Fatigue and Revive Growth – Global Dating Insights
  2. The Serendipity Economy: When AI Agents Replace Apps and Start Arranging Our Lives – The Trumplandia Report
  3. Tinder looks to AI to help fight ‘swipe fatigue’ and dating app burnout – TechCrunch
  4. The Serendipity Economy: When AI Agents Replace Apps and Start Arranging Our Lives – The Trumplandia Report

‘Mortality’

by Shelt Garner
@sheltgarner

Tomorrow I am going to do something that is really going to force me to think about my own mortality. Big time. It’s going to be very deep. And I have to confront the idea that the Singularity may not save my sorry ass and let me live forever.

I have to confront that one day, I will drop dead.

If I’m lucky, that day will be about 20 years from now. But an accident could happen and, ta-da, no more me.

Anyway, I can’t overthink this. I just have to accept that I have a limited amount of time on this earth and I need to use it as best I can.

‘Focus’

by Shelt Garner
@sheltgarner

I really need to get over myself and read Annie Bot, the comp book for my novel. I’ve flipped through it a little bit and I’m already rattled that it’s a much better-written novel than mine.

And, yet, I think that my novel is still written well enough that people will enjoy it. And I do have a really strong backup novel concept that I can explore if something goes wrong with this novel.

My main concern right now is that as I enter the third act of this novel, my characters just aren’t likeable enough. I’m worried that I have two characters who don’t like each other forced to be together and, as such, no one will actually want to finish the fucking novel.

So, as such, I keep daydreaming about this backup novel I have that is much more like Project Hail Mary — a positive protagonist who does something cool and extraordinary.

Now that I have one comp book, I’m worried this is just the beginning of a flood of novels that essentially tell the same story as my novel, just in a different way. But I have to focus. I have to keep going until something really dramatic happens and I have to stop this novel and work on a different one.

If all else fails, I still have my thriller trilogy to work on, but that one would require a lot more work and I simply don’t have forever. I’m not getting any younger.

One thing I wish I could do is focus on more than one project at a time. That would really help things. But, alas, that just isn’t in the cards for me.