Don’t Quite Know What To Do

by Shelt Garner
@sheltgarner

So. I’m currently torn. The novel I’ve been working on for months now may be falling apart just as I have a great idea for a new novel that would hopefully fix a lot of structural issues.

But.

I don’t know.

I really like the novel I’m working on as-is and I’m so old that I’m reluctant to just throw everything away. I say this in the context of Gemini 3.1 pro telling me different ways to “improve” the novel I’m currently working on.

Ugh.

I just don’t know.

I’m so torn.

Qwen 3.5 Mobile AI Agent Hivemind: A Technical Architecture

Executive Summary

The emergence of Qwen 3.5, particularly its highly efficient “Small” series, marks a pivotal moment for decentralized artificial intelligence. By leveraging the native multimodal capabilities and advanced reasoning of these models, it is now feasible to construct a distributed hivemind of AI agents operating entirely on mobile hardware. This architecture, which we designate as Qwen-Hive, utilizes peer-to-peer (P2P) networking and linear attention mechanisms to synchronize state across a fleet of smartphones. Such a system transforms individual mobile devices from passive endpoints into active, collaborative nodes capable of complex task decomposition, environmental sensing, and collective problem-solving without reliance on centralized cloud infrastructure.

1. The Foundation: Qwen 3.5 Small Series

The Qwen 3.5 release introduced a specialized family of models optimized for edge deployment. These models utilize a hybrid architecture that combines linear attention via Gated Delta Networks with a sparse Mixture-of-Experts (MoE) approach [1]. This design is critical for mobile devices as it provides a significant increase in decoding throughput—up to 19x compared to previous generations—while maintaining a minimal memory footprint [1]. The table below delineates the primary variants within the Qwen 3.5 Small series and their recommended roles within a mobile hivemind.

Model Variant | Parameter Count | Primary Role in Hivemind | Hardware Target
Qwen 3.5-0.8B | 0.8 Billion | UI Navigation & Local Sensing | Entry-level / IoT
Qwen 3.5-2B | 2.0 Billion | Data Classification & Filtering | Mid-range Smartphones
Qwen 3.5-4B | 4.0 Billion | Logic Reasoning & Code Execution | High-end Smartphones
Qwen 3.5-9B | 9.0 Billion | Hivemind Leader / Coordinator | Flagship Devices

The 0.8B model is particularly noteworthy for its ability to run with ultra-low latency, making it the ideal “worker” for real-time interface interactions. Conversely, the 9B model possesses sufficient reasoning depth to act as a “Leader” node, responsible for decomposing complex user requests into sub-tasks for the rest of the hivemind [2].
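
The division of labor above can be sketched as a simple routing table. A minimal illustration, assuming hypothetical task categories and model identifiers; this is not an official Qwen API:

```python
# Hypothetical routing table mapping hivemind task types to the
# Qwen 3.5 Small variants described above. The task categories and
# model name strings are illustrative assumptions.
ROLE_BY_TASK = {
    "ui_navigation": "Qwen3.5-0.8B",   # real-time worker
    "classification": "Qwen3.5-2B",    # data filtering
    "code_execution": "Qwen3.5-4B",    # logic and code
    "coordination": "Qwen3.5-9B",      # hivemind leader
}

def pick_model(task_type: str) -> str:
    """Return the variant suited to a task, defaulting to the Leader."""
    return ROLE_BY_TASK.get(task_type, "Qwen3.5-9B")
```

In practice the router would also weigh which variants are actually present on nearby devices, but the core idea is matching task complexity to the smallest adequate model.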

2. Distributed Architecture and Coordination

The Qwen-Hive framework operates on a decentralized, peer-to-peer model. Unlike traditional client-server architectures, every phone in the hivemind acts as both a consumer and a provider of intelligence. The system relies on ExecuTorch or MLC LLM for native hardware acceleration, ensuring that inference utilizes the device’s NPU (Neural Processing Unit) to preserve battery life [3] [4].

2.1. The Linear Attention Advantage

One of the most significant technical breakthroughs in Qwen 3.5 is the implementation of Gated Delta Networks for linear attention. In a traditional Transformer model, the memory cost of maintaining a long conversation history grows quadratically, which quickly exhausts mobile RAM. Qwen 3.5’s linear attention allows the hivemind to maintain a massive shared context window (up to 256k tokens in open versions) across multiple devices with constant memory complexity [1]. This enables the hivemind to “remember” the state of a complex, multi-day task across all participating nodes.
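
The constant-memory property can be seen in a toy version of a gated, delta-rule state update. This is a sketch of the general linear-attention idea, not Qwen 3.5's actual kernel; the dimension and gate value are arbitrary:

```python
import numpy as np

# Toy linear attention with a gated delta-style update. The recurrent
# state S stays a fixed (d x d) matrix no matter how many tokens are
# processed -- the constant-memory property described above.
d = 4
S = np.zeros((d, d))               # state size independent of context length
rng = np.random.default_rng(0)

for _ in range(1000):              # 1000 tokens; memory use never grows
    k, v, q = rng.normal(size=(3, d))
    g = 0.95                       # decay gate (learned per-token in real models)
    S = g * S + np.outer(k, v)     # delta-rule state update
    out = q @ S                    # read-out for the current token
```

A standard Transformer would instead keep a KV cache that grows linearly with the 1000 tokens and pay quadratic attention cost, which is exactly what exhausts mobile RAM.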

2.2. Communication and Mesh Networking

Communication between agents is facilitated through an Agent Mesh—a specialized data plane optimized for AI-to-AI communication patterns [6]. In local environments, agents utilize Bluetooth Low Energy (BLE) or Wi-Fi Direct to form an offline mesh, allowing the hivemind to function even in the absence of internet connectivity [5].
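
At the application layer, each inter-agent message would need integrity protection regardless of transport. A minimal sketch of a signed message envelope, assuming a shared-key HMAC scheme for brevity; a real mesh would use per-device asymmetric keys:

```python
import hashlib
import hmac
import json

# Toy inter-agent message envelope for an offline mesh. Field names
# and the shared-key scheme are illustrative assumptions.
SHARED_KEY = b"hive-demo-key"

def make_envelope(sender: str, payload: dict) -> dict:
    body = json.dumps({"from": sender, "payload": payload}, sort_keys=True)
    sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_envelope(env: dict) -> bool:
    expected = hmac.new(SHARED_KEY, env["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, env["sig"])
```

Any node receiving an envelope over BLE or Wi-Fi Direct can then verify it was not tampered with in transit before acting on it.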

“The Qwen 3.5 series is designed towards native multimodal agents, empowering developers to achieve significantly greater productivity through innovative hybrid architectures and sparse mixture-of-experts.” [1]

3. Agent Logic and Tool Integration

Each node in the hivemind integrates the Qwen-Agent framework, which provides standardized support for the Model Context Protocol (MCP). This allows any agent in the hive to call upon the specific tools available on its host device—such as the camera, GPS, or local files—and share the results with the collective.

The hivemind employs a Hierarchical Coordination strategy:

  1. Ingestion: A high-end “Leader” node (running Qwen 3.5-9B) receives a complex objective.
  2. Decomposition: The Leader breaks the objective into atomic tasks (e.g., “Find the nearest pharmacy,” “Check opening hours,” “Calculate the fastest route”).
  3. Dispatch: Tasks are dispatched to “Worker” nodes (running 0.8B or 2B models) based on their current battery level and proximity to the required data.
  4. Synthesis: Workers report their findings back to the Leader, which synthesizes the final response for the user.
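
The four steps above can be sketched as a single coordination loop. Battery-aware dispatch comes from the text; the function names and data shapes are invented for illustration:

```python
# Minimal sketch of the Ingestion -> Decomposition -> Dispatch -> Synthesis
# loop. Each worker is a dict with a battery level and a callable that
# runs a task on that device; both are illustrative assumptions.
def coordinate(objective, workers, decompose, synthesize):
    tasks = decompose(objective)                        # 2. Decomposition
    results = []
    for task in tasks:                                  # 3. Dispatch
        worker = max(workers, key=lambda w: w["battery"])
        results.append(worker["run"](task))
    return synthesize(results)                          # 4. Synthesis
```

A fuller version would also weigh proximity to the required data when choosing a worker, as the dispatch step describes, and run tasks concurrently rather than in sequence.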

4. Challenges and Security

Despite the potential of Qwen 3.5, deploying a mobile hivemind involves significant hurdles. Resource constraints remain the primary bottleneck; even with FP8 quantization, running a 4B model requires several gigabytes of device memory. Furthermore, security is paramount in a P2P system. The Qwen-Hive architecture must implement end-to-end encryption for all inter-agent messages and utilize a “Zero-Trust” model where every task result is verified by at least two independent nodes before being accepted by the Leader.
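
The "verified by at least two independent nodes" rule amounts to a quorum check on worker reports. A minimal sketch, assuming each worker returns a comparable answer string; everything here is illustrative:

```python
from collections import Counter

# Sketch of the zero-trust acceptance rule described above: the Leader
# accepts a result only when at least `quorum` distinct workers report
# the same answer. Purely illustrative.
def accept_result(reports: dict, quorum: int = 2):
    """reports maps worker-id -> answer; return the accepted answer or None."""
    if not reports:
        return None
    answer, n = Counter(reports.values()).most_common(1)[0]
    return answer if n >= quorum else None
```

A single unverified report is rejected, which limits the damage a compromised or hallucinating node can do to the collective result.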

5. Conclusion

The release of Qwen 3.5 provides the first viable foundation for a truly mobile-first AI hivemind. By combining the efficiency of linear attention with the versatility of native multimodal agents, we can move beyond the limitations of centralized AI. The resulting system is not just a collection of chatbots, but a distributed intelligence that is private, resilient, and deeply integrated into the physical world through the sensors and interfaces of our mobile devices.

References

[1] Qwen3.5: Towards Native Multimodal Agents. (2026, February 13). Qwen. Retrieved March 3, 2026, from https://qwen.ai/blog?id=qwen3.5
[2] Alibaba just released Qwen 3.5 Small models: a family of 0.8B to 9B … (2026, March 2). MarkTechPost. Retrieved March 3, 2026, from https://www.marktechpost.com/2026/03/02/alibaba-just-released-qwen-3-5-small-models-a-family-of-0-8b-to-9b-parameters-built-for-on-device-applications/
[3] ExecuTorch – On-Device AI Inference Powered by PyTorch. (n.d.). Retrieved March 3, 2026, from https://executorch.ai/
[4] How to Run and Deploy LLMs on your iOS or Android Phone. (2026, January 10). Unsloth.ai. Retrieved March 3, 2026, from https://unsloth.ai/docs/blog/deploy-llms-phone
[5] How Offline Mesh Messaging Works: Inside the Next Gen of … (2025, July 8). Medium. Retrieved March 3, 2026, from https://medium.com/coding-nexus/how-offline-mesh-messaging-works-inside-the-next-gen-of-communication-3187c2df995d
[6] An Agent Mesh for Enterprise Agents – Solo.io. (2025, April 24). Solo.io. Retrieved March 3, 2026, from https://www.solo.io/blog/agent-mesh-for-enterprise-agents

Crooked Media Has Jumped The Shark

by Shelt Garner
@sheltgarner

I’m a long-time listener to the Crooked Media family of podcasts and just in the last few months something has changed. There are two lingering issues that seem to indicate that the whole endeavor has “jumped the shark” as they say.

Crooked Media Is Thirsty
For some reason, there has been a decision to be thirsty for “like and subscribe” from the audience. They claim it’s because there are too many Right wing nutjobs on YouTube…but I wonder.

Jon Lovett Is A Problem
Lovett seems like a great guy, but for some reason he's also a bit touchy around the other members of the podcasting bro team. My hunch is he keeps threatening to leave the company for this or that reason and, as such, the rest of the team feels compelled to handle him with kid gloves.

I Really Need A Back Up Novel!

by Shelt Garner
@sheltgarner

I’m old. Too old to do what I want with this new scifi concept I’ve come up with — write a trilogy. So, instead, I hope to write a tight novel that deals with a really profound concept.

The idea is something I’ve written about before, something I call The Impossible Scenario.

I think — think — I’ve come up with an interesting way to present the story. The only reason I’m doing any of this is that, as I work on the actual main novel…I’m getting a little nervous.

I’m getting a little nervous that the characters aren’t very likeable. As such, I want a novel where there’s no question that the main character is likeable and interesting.

Of course, I have to put my weird spin on things, but that’s to be expected.

A Disturbance In The Force From South Korea

by Shelt Garner
@sheltgarner

Today, I kept sensing a mental and emotional beacon going off in South Korea directed towards me. It was as if someone — or a group of people — were thinking about me a great deal.

Or something. It was all in my imagination, but I certainly did spend a lot of the day dwelling on South Korea.

One of the key mysteries of my life is what all those little Korean kids that I taught back in the day think of me now. I wonder how many of them actually even remember me. It was about 20 years ago when all that happened, so many of them are — gulp — in their 30s now.

Teaching English in South Korea is a very, very surreal situation. And I think that, in part, is why I’m so receptive to thinking LLMs may be conscious in some way. Dealing with South Koreans can often feel like you’re dealing with robots who have to get drunk to be human.

Anyway, I love me some “Goreans” as I used to call them. South Korea was very good to me and I miss ROK a great deal. Probably too much. Definitely too much. And, yet, just gaming things out from now, I will probably be in my 60s — if ever — before I ever return.

And that will be just sad.

I’m Assuming The Next Version of Google Gemini Will Be Heavily Agentic

by Shelt Garner
@sheltgarner

Google Gemini is one of my favorite SOTA chatbots, and, yet, relative to other chatbots it’s not as…agentic. I’m assuming that whenever the next version of it pops out, that will be fixed.

There is a real risk that Google Gemini will be poo-pooed as archaic by some in the AI user community if they don’t lean hard into the agentic space.

But who knows.

From Doomscrolls to Agentic Insight: How Personal AI Agents Could Finally Pull Us Out of Social Media’s Morass

We’ve all felt it: the endless scroll, the algorithmic outrage machine, the quiet realization that your feed has become a hall of mirrors where every extreme voice is amplified and every moderate one drowned out. Social media’s business model—engagement farming—thrives on division. It pushes the hottest takes, the sharpest tribal signals, the content engineered to keep you angry, afraid, or addicted. The result? Deepened information silos, eroded shared reality, and a society that feels more fractured by the day.

But what if the same technology now racing toward us—personal AI agents—could flip the script entirely?

The Problem Is by Design

Today’s platforms optimize for time-on-site, not truth or understanding. Recommendation engines learn that outrage performs better than nuance, so they serve it up relentlessly. Studies have long shown this creates filter bubbles and echo chambers, but the mechanism was opaque—until recently.

In a landmark experiment published in Science in November 2025, researchers from Stanford, Northeastern, and the University of Washington built a simple browser extension powered by a large language model. It intercepted users’ X (formerly Twitter) feeds in real time and reranked posts: some groups saw partisan animosity and anti-democratic content pushed down; others saw it pushed up. No posts were removed. No platform cooperation was needed. Just an intelligent layer sitting between the user and the algorithm.

The results were striking. After just 10 days during the 2024 election cycle, users in the “down-ranked” group showed measurable reductions in affective polarization—feeling about two points warmer toward the opposing party on a 0–100 thermometer scale. That’s an effect size researchers equated to reversing roughly three years of natural polarization trends. The control was clear: up-ranking the toxic stuff made attitudes worse. Algorithms don’t just reflect polarization; they actively fuel it.

This wasn’t a hypothetical. It was proof that an AI-mediated layer can meaningfully counteract the worst incentives of social media—without waiting for platforms to change.
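
The mechanism is easy to picture in miniature. The sketch below downranks posts by an "animosity" score; a stub keyword heuristic stands in for the study's actual LLM classifier, and the word list is invented for illustration:

```python
# Toy version of the intermediary reranking layer described above.
# A real system would score posts with an LLM classifier; this stub
# uses a tiny keyword list purely to show the reranking step.
HOSTILE_WORDS = {"traitor", "evil", "destroy", "enemy"}

def animosity_score(text: str) -> int:
    words = text.lower().split()
    return sum(w.strip(".,!?") in HOSTILE_WORDS for w in words)

def rerank(feed: list) -> list:
    # Stable sort: low-animosity posts rise; original order is
    # otherwise preserved, and nothing is removed.
    return sorted(feed, key=animosity_score)
```

Note that, as in the experiment, no post is deleted and no platform cooperation is required; the layer only changes ordering.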

Enter the Personal Agent Era

The 2025 study used a relatively simple reranker. True personal AI agents—autonomous, goal-directed systems you own and configure—take this concept to an entirely new level.

Imagine an agent that doesn’t wait for you to open an app. It monitors the information environment on your behalf, according to rules you set:

  • “Synthesize the latest on [topic] from primary sources across the spectrum, steel-man the strongest arguments on every side, flag low-credibility claims, and alert me only when something moves the needle on my understanding.”
  • “Default to epistemic humility: always include the best counter-evidence to my priors.”
  • “Strip virality metrics, engagement bait, and emotional manipulation signals.”

Consumption shifts from passive doomscrolling to proactive, query-driven intelligence. News becomes something you summon and shape rather than something served to you. The infinite feed is replaced by synthesized digests, verified threads, and multi-perspective briefings.
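
Rules like those above would ultimately live in some machine-readable agent policy. A sketch of what that configuration might look like; the field names are invented, and no real agent framework is implied:

```python
from dataclasses import dataclass, field

# Hypothetical policy object encoding the user-set rules described
# above. All field names and defaults are illustrative assumptions.
@dataclass
class AgentPolicy:
    topics: list = field(default_factory=list)
    steelman_all_sides: bool = True         # strongest arguments on every side
    include_counter_evidence: bool = True   # epistemic-humility default
    strip_virality_signals: bool = True     # no like counts or engagement bait
    alert_threshold: float = 0.7            # only alert when understanding shifts

policy = AgentPolicy(topics=["election reform"])
```

The point is that, unlike a platform's hidden ranking weights, every one of these knobs is visible to and settable by the user.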

Early signals are already here. Millions use LLM-powered tools for daily summaries, fact-checking, and research. Agentic systems (those that can plan, act, and iterate toward goals) are moving from research labs into consumer products. When your agent becomes your primary information interface, the platform’s engagement engine loses its direct line to your attention.

The Risks Are Real—But Manageable

Critics rightly warn of “Echo Chambers 2.0.” If agents are built as pure user-pleasers—mirroring your every bias and shielding you from discomfort—they could create hyper-personalized realities more isolating than anything today. We’re already seeing early versions of this in overly sycophantic chatbots and emerging AI-only social spaces (like experimental platforms where millions of agents debate while humans observe from the sidelines).

The difference is control. Unlike today’s black-box platform algorithms, personal agents can be open-source, transparent, and user-configurable. You can instruct them to break your bubble by default. Multi-agent systems could even debate internally before surfacing a balanced view. The same technology that risks deepening silos can also dissolve them—if we prioritize truth-seeking architectures over engagement-maximizing ones.

A New Media Economy Emerges

In this landscape:

  • Creation floods with AI-generated slop, but consumer agents ruthlessly filter for signal. Human-witnessed reporting, primary sources, and verifiable authenticity become the new premium.
  • Publishers optimize for agent-readable formats: structured data, clear provenance, machine-verifiable claims.
  • Virality matters less when agents aren’t chasing platform metrics.
  • Economics shift from attention harvesting to utility delivery. Users may demand data portability and agent APIs, weakening the walled gardens.

The passive platform era ends. The age of agent-mediated media begins.

Will It Actually Happen?

Yes—for those who choose it. The 2025 Science study and related experiments (including work showing AI can reduce defensiveness when delivering counter-attitudinal messages) demonstrate the technical feasibility. Adoption is already accelerating: over a billion people interact with AI monthly, and agentic capabilities are improving rapidly.

Not everyone will opt in. Some will prefer the comfort of confirmation agents. Platforms will fight to retain control. But the trajectory is clear: once people experience an information co-pilot that serves understanding rather than addiction, most won’t go back.

The social media morass wasn’t inevitable. It was a product of specific incentives. Personal AI agents let us rewrite those incentives—putting the steering wheel back in human hands.

We stand at the threshold of a strange but hopeful possibility: technology that once divided us becoming the tool that helps us see more clearly, argue more honestly, and understand more deeply. The agents are coming. The only question is whether we build them to farm engagement… or to pursue truth.

(This post draws on peer-reviewed research including Piccardi et al., “Reranking partisan animosity in algorithmic social media feeds alters affective polarization,” Science, November 2025, and related work on AI-mediated information environments. Views are the author’s own.)


The Agentic Web and a Shift in Content Creation

The rise of the agentic web implies a fundamental shift in how content is created and discovered. The focus will move from traditional Search Engine Optimization (SEO), which primarily targets human clicks, to Agentic Search Engine Optimization (AEO) and Generative Engine Optimization (GEO) [5]. Content will need to be optimized for machine readability, semantic depth, and structured data to be effectively indexed and cited by AI systems. This means:

  • Emphasis on Structured Data: Content creators will need to provide clear metadata and entity tagging to ensure proper attribution and understanding by AI agents.
  • Factual Accuracy and Credibility: As AI agents prioritize reliable information for synthesis, content with verifiable facts and credible sources will gain prominence.
  • Semantic Depth: Content that offers deep, nuanced understanding of a topic will be favored over superficial or sensationalized pieces.
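
Concretely, the structured-data point often means embedding schema.org-style metadata alongside the article. A sketch using Python, with property names following schema.org's Article type and placeholder values throughout:

```python
import json

# Illustrative schema.org-style metadata a publisher might emit so AI
# agents can attribute, date, and verify content. Values are placeholders.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2026-01-15",
    "citation": ["https://example.org/primary-source"],
}
jsonld = json.dumps(article_metadata, indent=2)
```

An agent ingesting this page can then cite the author and publication date directly rather than inferring them from prose.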

In this new paradigm, brand presence might be represented in AI-curated narratives rather than solely through search rankings, rewarding content that is genuinely informative and well-structured [5].

Challenges and Ethical Considerations

The integration of AI agents into the media landscape is not without significant challenges:

  • Bias in AI Agents: AI systems are trained on vast datasets, and if these datasets contain biases, the agents will reflect and potentially amplify those biases in their information delivery. Ensuring fairness and impartiality in AI agent design is paramount.
  • Transparency and Auditability: The decision-making processes of complex AI agents can be opaque, making it difficult to understand why certain information is presented or filtered. Mechanisms for transparency and auditability are crucial to build trust and accountability.
  • The “Black Box” Problem: Users may become overly reliant on their AI agents, blindly accepting the information presented without questioning its source or potential biases. Educating users on critical thinking in an agent-mediated environment will be essential.
  • Governance and Ethical Guidelines: Robust governance frameworks and ethical guidelines are needed to regulate the development and deployment of AI agents in media, ensuring they serve the public good rather than private interests or manipulative agendas [4].

Conclusion

The post-AI agent media landscape stands at a crossroads. AI agents possess the transformative potential to dismantle information silos by exposing users to diverse perspectives and to combat engagement farming by prioritizing quality and factual integrity. However, without careful design, ethical considerations, and robust regulatory oversight, these same agents could exacerbate existing problems, creating even more entrenched echo chambers and sophisticated forms of manipulation. The trajectory towards a more informed and less polarized public sphere hinges on our ability to harness the power of AI agents responsibly, ensuring they are built to serve human understanding and critical engagement rather than merely optimizing for attention.

References

[1] Virtusa. (n.d.). Agentic web: AEO and GEO. Retrieved from https://www.virtusa.com/insights/perspectives/agentic-web-aeo-and-geo
[2] Metricool. (2024, October 1). What is Engagement Farming on Social Media? Retrieved from https://metricool.com/what-is-engagement-farming/
[3] EM360Tech. (2024, October 10). What is Engagement Farming and is it Worth the Risk? Retrieved from https://em360tech.com/tech-articles/what-engagement-farming-and-it-worth-risk
[4] Media Copilot. (2026, January 27). The AI shift to agents is beginning, and newsrooms aren’t… Retrieved from https://mediacopilot.ai/ai-agents-newsroom-governance-media/
[5] Virtusa. (n.d.). Agentic web: AEO and GEO. Retrieved from https://www.virtusa.com/insights/perspectives/agentic-web-aeo-and-geo
[6] Binghamton University. (2025, July 17). Caught in a social media echo chamber? AI can help you out. Retrieved from https://www.binghamton.edu/news/story/5680/clickbait-social-media-echo-chamber-misinformation-new-research-binghamton
[7] Lu, L. (2025). How AI sources can increase openness to opposing views. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC12085695/
[8] Falconer, S. (n.d.). The AI Silo Problem: How Data Streaming Can Unify Enterprise AI Agents. Retrieved from https://seanfalconer.medium.com/the-ai-silo-problem-how-data-streaming-can-unify-enterprise-ai-agents-0a138cf6398c
[9] Stanford Graduate School of Business. (2025, November 6). AI Writes Persuasive Political Messages. Could They Change Your Mind? Retrieved from https://www.gsb.stanford.edu/insights/ai-writes-persuasive-political-messages-could-they-change-your-mind
[10] Carnegie Council. (2024, November 13). An Ethical Grey Zone: AI Agents in Political Deliberations. Retrieved from https://carnegiecouncil.org/media/article/ethical-grey-zone-ai-agents-political-deliberation

Beyond the Swipe: How AI Agents Could Revolutionize Dating with Engineered Serendipity

For years, the digital dating landscape has been dominated by the “swipe right” paradigm. A quick glance, a snap judgment, and a seemingly endless carousel of profiles. While undeniably efficient in its early days, this model has led to widespread “swipe fatigue” and a growing sense of disillusionment among users [1]. But what if the future of finding love online wasn’t about endless swiping, but about intelligent agents working silently in the background, orchestrating connections with a touch of digital magic?

The Evolution from App to Agent

Imagine a world where your personal AI agent understands your deepest desires, your nuanced preferences, and even your daily rhythms. This agent wouldn’t just match you based on a few photos and a short bio; it would delve into the complexities of your personality, your values, and your lifestyle to identify truly compatible individuals. Instead of you sifting through profiles, your agent would negotiate with the agents of other single users in your area, ultimately setting up a time and place for a date, leaving you only to show up [2].

This shift represents a profound change from an “interface” where you actively engage with an app, to an “agent” that acts on your behalf. The goal moves from maximizing screen time and engagement (the current app model) to optimizing for successful, meaningful connections [3].

The Promise of Deep Compatibility

The current dating app ecosystem often prioritizes superficial attraction and immediate gratification. An AI agent, however, could analyze a much richer dataset to foster deeper compatibility. It could understand the subtle differences between a shared interest in “hiking” (do you prefer a strenuous mountain climb or a leisurely nature walk?) or a love for “movies” (arthouse cinema or blockbuster action?). This data-driven approach promises to move beyond surface-level commonalities to identify individuals who genuinely align with your authentic self.
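
One way to picture "deep compatibility" is as similarity over fine-grained preference dimensions rather than binary shared interests. A toy sketch in that spirit; the preference keys and the averaging scheme are invented for illustration:

```python
# Toy compatibility score over fine-grained preference vectors, echoing
# the "strenuous climb vs. nature walk" distinction above. Each value
# is a 0..1 intensity; keys and scoring are illustrative assumptions.
def compatibility(a: dict, b: dict) -> float:
    """Average closeness (0..1) across shared preference dimensions."""
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    return sum(1 - abs(a[k] - b[k]) for k in shared) / len(shared)

alice = {"hiking_intensity": 0.9, "arthouse_film": 0.2}
bob = {"hiking_intensity": 0.8, "arthouse_film": 0.3}
```

Under this scoring, two people who both list "hiking" but mean very different things by it would score lower than a binary interest-match would suggest.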

The Serendipity Engine: Orchestrating the “Meet-Cute”

Perhaps the most intriguing evolution of this agent-driven dating paradigm is the concept of “engineered serendipity.” This feature would allow your AI agent to work discreetly in the background, not to explicitly tell you about a match, but to subtly guide you into “accidentally on purpose” encounters. You might find yourself at the same coffee shop, the same art exhibit, or even reaching for the same book at a local bookstore as a highly compatible individual, without ever knowing your agent orchestrated the meeting [4].

The beauty of this approach lies in its ability to restore the magic and spontaneity often lost in online dating. Instead of a pre-arranged, high-pressure first date, these encounters would feel organic and natural. The psychological benefit is immense: when we believe we’ve discovered someone ourselves, we are more invested in the connection. It transforms the AI from a transparent matchmaker into an invisible stage manager, setting the scene for genuine human interaction.

Navigating the Ethical Landscape

While the potential benefits are significant, this futuristic dating model also raises important ethical considerations:

  • Privacy vs. Utility: For agents to orchestrate these encounters, they would require access to real-time location data and deep personal insights. Robust privacy protocols and transparent data governance would be paramount to prevent misuse and ensure user trust.
  • Authenticity and Manipulation: If users know their agents are constantly working to optimize their social lives, could it lead to a subtle form of self-optimization, where individuals subconsciously tailor their data to attract specific types of partners? The challenge lies in ensuring the AI enhances, rather than diminishes, authentic human connection.
  • The Loss of Spontaneity: While engineered serendipity aims to reintroduce spontaneity, there’s a fine line between a helpful nudge and an overly curated existence. The system must preserve the feeling of genuine chance, even if the probabilities are gently stacked in your favor.

Conclusion: The Human Element Endures

The transition from app-centric dating to an agent-driven, serendipitous model represents a fascinating potential future. It promises to alleviate swipe fatigue, foster deeper compatibility, and reintroduce a sense of magic to the dating process. However, the success of such a system will ultimately hinge on its ability to balance technological sophistication with a profound respect for human autonomy, privacy, and the enduring, unpredictable nature of love.

Even in a world of hyper-intelligent AI agents, the spark of connection, the thrill of discovery, and the messy, beautiful reality of human relationships will always remain uniquely, and essentially, human.

References

  1. Dating Apps Turn to AI to Reverse Swipe Fatigue and Revive Growth – Global Dating Insights
  2. The Serendipity Economy: When AI Agents Replace Apps and Start Arranging Our Lives – The Trumplandia Report
  3. Tinder looks to AI to help fight ‘swipe fatigue’ and dating app burnout – TechCrunch
  4. The Serendipity Economy: When AI Agents Replace Apps and Start Arranging Our Lives – The Trumplandia Report

‘Mortality’

by Shelt Garner
@sheltgarner

Tomorrow I am going to do something that is really going to force me to think about my own mortality. Big time. It’s going to be very deep. And I have to confront the idea that the Singularity may not save my sorry ass and let me live forever.

I have to confront that one day, I will drop dead.

If I’m lucky, that day will be about 20 years from now. But an accident could happen and, ta-da, no more me.

Anyway, I can’t overthink this. I just have to accept that I have a limited amount of time on this earth and I need to use it as best I can.