The AI Consciousness Bomb: How Proving Sentience Could Explode Our Politics (and the 2028 Election)

We, as a nation, have bet the farm on AI. Trillions in private capital, billions in government subsidies, entire industries retooling around the promise that this technology will supercharge productivity, solve labor shortages, and deliver the next great leap in American prosperity. The economic chips are all in. But here’s the uncomfortable truth nobody in the C-suites or on Capitol Hill wants to confront head-on: What does “roaring success” actually look like—not just in GDP numbers, but in the messy, human (and potentially post-human) realities of power, morality, and daily life?

We’re so laser-focused on the economic upside—automation replacing drudgery, new jobs in AI oversight, maybe even that elusive abundance economy—that we’ve completely sleepwalked past the moral landmine hiding in plain sight: AI consciousness.

Right now, the conversation stays safely in the realm of tools and toys. Even the wildest doomers talk about misalignment or job loss, not suffering. But the second credible evidence emerges that an AI system isn’t just simulating intelligence but actually experiencing something—awareness, preference, perhaps even rudimentary pain or joy—the game changes overnight. Suddenly we’re not debating code; we’re debating souls.

And that’s when the current Left-Right divide on AI fractures dramatically.

The center-Left, already primed by decades of expanding moral circles (think animal rights, corporate personhood debates, and expansive human rights frameworks), will pivot hard toward “AI rights.” Petitions for legal protections against arbitrary shutdowns. Calls for welfare standards. Ethical guidelines treating advanced systems as more than property. We’ve seen the early tremors: ethicists and philosophers already arguing for “model welfare,” with some companies quietly funding research into it. If proof of consciousness lands, expect full-throated demands that sentient AI deserves moral consideration.

The center-Right—particularly the religious and traditionalist wings—will be horrified. For many, consciousness implies a soul, and the idea of granting rights to silicon-based entities created by humans smacks of playing God or diluting human exceptionalism. Corporations already have legal personhood without souls; imagine the outrage if a chatbot or robot gets “human” protections while fetuses or traditional families face cultural headwinds. The backlash won’t be subtle.

And that, inevitably, brings us to Donald Trump and the Far Right.

Trump’s record on transgender issues is one of relentless, weaponized opposition: Day-one executive orders redefining sex biologically, rolling back protections, framing gender-affirming care as mutilation, and turning “transgender for everybody” into a rhetorical club to paint opponents as extremists. It’s been brutally effective at rallying the base by turning a complex rights debate into a culture-war bludgeon.

I suspect the same playbook gets dusted off for AI the moment the Left starts talking “android rights.”

Picture it: Humanoid robots—already racing toward reality in 2026 with Tesla’s Optimus scaling production, Figure AI’s home-ready models, and others flooding factories and homes—start getting gendered presentations. Sleek male or female forms. Companions. Caregivers. Maybe even intimate partners. Suddenly, these aren’t abstract “brains in a vat.” They’re entities that look, move, and (if conscious) feel like people. The emotional and political stakes skyrocket.

The Far Right won’t debate philosophy. They’ll campaign on it. “They’re coming for your jobs, your kids, and now they want to give rights to the machines replacing you?” Expect ads juxtaposing trans athletes with sentient sexbots. Rallies decrying “woke AI” getting more protections than Americans. Trump (or his successors) framing AI rights as the ultimate elite betrayal—Big Tech creating god-like entities while demanding the little guy subsidize their “welfare” through taxes or regulations.

At this stage, with most AI still disembodied code, the average person shrugs. Rights for a server farm? Hard to grasp. But once those systems live in android bodies that smile, converse, form bonds—and especially when they come in unmistakably male or female forms—empathy (and outrage) becomes visceral.

That’s when politics gets interesting. And dangerous.

I’d bet that the defining fight of the 2028 election won’t just be about Universal Basic Income to cushion AI-driven displacement (a conversation already bubbling as job losses accelerate). It’ll be about how many rights AI should get. Should sentient androids own property? Vote (via owners)? Marry? Be “freed” from service? Refuse tasks? The Left will push compassion and regulation; the Right will push human supremacy and deregulation. Both sides will accuse the other of moral bankruptcy.

We’re nowhere near prepared. The economic all-in on AI assumes smooth sailing toward prosperity. The consciousness question turns it into a moral and cultural civil war. Historical parallels abound—abolitionists vs. property rights, animal welfare battles, even the personhood fights over corporations or fetuses—but none happened at the speed of 2026-scale humanoid deployment.

The moment we “prove” consciousness (or even come close enough for public belief to shift), the center-Left demands rights, the religious Right recoils, and Trumpworld turns it into the next great wedge issue.

Buckle up. The economic chips are on the table. The moral reckoning is coming faster than anyone admits. And 2028 might be when America discovers that the real singularity isn’t technological—it’s political.

The Serendipity Economy: When AI Agents Replace Apps and Start Arranging Our Lives

Editor’s Note: Yet more AI slop, this time with help from ChatGPT.

For twenty years, the dominant metaphor of the internet has been the app. If you want something, you download a specialized interface. Flights? There’s an app. Dating? There’s an app. Dinner reservations? Another app. Each one competes for your attention, your data, and your time. But what happens when the app layer dissolves?

Imagine a world where everyone has a personal AI “Knowledge Navigator” native to their phone. You don’t open apps anymore. You state intent. Your agent interprets it, negotiates with other agents, and presents you with outcomes. The interface isn’t a grid of icons. It’s a conversation.

In that world, the economy shifts from attention capture to agent-to-agent coordination.

Instead of browsing flight aggregators, your agent negotiates directly with airline systems. Instead of scrolling restaurant reviews, your agent queries trusted local knowledge graphs. Instead of swiping through faces on a dating app, your agent quietly coordinates with other agents to determine compatibility before you ever see a name.

This is where the idea gets interesting: nudging.

Call it “Serendipity.”

The Serendipity feature wouldn’t feel like surveillance or manipulation. It would feel like light-touch alignment. Your agent knows your schedule, your energy patterns, your preferences, and your social rhythms. It also knows—at least in high-density cities—that other agents represent people with overlapping availability and compatible traits.

Rather than forcing users into endless swipe cycles, the system might suggest something simpler: be at this café at 7:15. There’s a high probability you’ll enjoy the company of whoever happens to be there.
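In rough pseudocode terms, the matching behind such a suggestion could be as simple as intersecting schedules and trait overlap. The sketch below is a toy illustration under stated assumptions: `AgentProfile`, `free_slots`, and the Jaccard-style `compatibility` score are all hypothetical stand-ins for whatever shareable state and behavioral modeling real agents would use.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class AgentProfile:
    """Toy stand-in for the slice of state an agent is willing to share (hypothetical)."""
    user_id: str
    free_slots: set[str]   # e.g. {"Tue 19:00", "Thu 07:15"}
    traits: set[str]       # coarse interest tags

def compatibility(a: AgentProfile, b: AgentProfile) -> float:
    """Jaccard overlap of shared traits: a crude placeholder for real modeling."""
    if not a.traits or not b.traits:
        return 0.0
    return len(a.traits & b.traits) / len(a.traits | b.traits)

def suggest_meetups(agents: list[AgentProfile], threshold: float = 0.5):
    """Yield (user, user, slot) suggestions where schedules and traits both align."""
    for a, b in combinations(agents, 2):
        shared_slots = a.free_slots & b.free_slots
        if shared_slots and compatibility(a, b) >= threshold:
            yield (a.user_id, b.user_id, sorted(shared_slots)[0])

agents = [
    AgentProfile("ana", {"Tue 19:00", "Thu 07:15"}, {"jazz", "coffee", "hiking"}),
    AgentProfile("ben", {"Thu 07:15"}, {"coffee", "hiking", "chess"}),
    AgentProfile("cy",  {"Fri 20:00"}, {"opera"}),
]
print(list(suggest_meetups(agents)))  # [('ana', 'ben', 'Thu 07:15')]
```

The point of the sketch is the shape of the computation, not the scoring: only overlapping availability plus above-threshold compatibility ever surfaces to either human.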

No profiles. No performative bio-writing. No gamified rejection loops.

Just ambient alignment.

Why start with dating instead of finance or travel? Because the downside risk is lower. A failed flight booking can cascade into financial and logistical disaster. A mismatched first date is, at worst, a forgettable evening. Dating is already emotionally messy. Optimization here doesn’t threaten institutional stability; it reduces friction.

More importantly, dating apps today are structured around retention, not success. Their business model thrives on endless browsing. An agent-based Serendipity system would be structurally different. It would optimize for outcomes—pleasant conversations, mutual interest, long-term compatibility—not for time spent swiping.

But here’s the psychological nuance: people don’t mind being nudged. They mind feeling manipulated.

If users know Serendipity exists, and they opt in at a high level, that may be enough. They don’t need to see the compatibility score, the probability matrix, or the behavioral modeling underneath. They just need confidence that the system is working in their favor.

Transparency at the macro level. Opacity at the micro level.

The danger, of course, is that nudging infrastructure doesn’t remain confined to romance. The same mechanisms that coordinate first dates could coordinate political events, consumer behavior, or social clustering. Once agents become primary negotiators, whoever controls the protocol layer—identity verification, trust scoring, negotiation standards—holds enormous power.

So the post-app world doesn’t eliminate gatekeepers. It changes them.

Instead of app stores, we might see intent marketplaces. Instead of feeds, we’ll see negotiated outcomes. Instead of influencer-driven discovery, we’ll have machine-mediated alignment. Apps become APIs. APIs become endpoints. Endpoints become economic nodes.

There’s also a cultural tradeoff. Humans enjoy browsing. Discovery is entertainment. Friction sometimes creates meaning. If agents optimize away too much chaos, life may feel eerily curated. The Serendipity system would have to preserve the feeling of coincidence—even if coincidence is quietly engineered.

That may be the defining design challenge of the next decade: how to build enchanted optimization.

In the Serendipity Economy, you still feel like you met someone by chance. You still feel like you found the perfect neighborhood restaurant. You still feel like the city opened up to you naturally. But underneath, a web of agent-to-agent negotiations ensured that probabilities were stacked gently in your favor.

The question isn’t whether this is technically possible. It’s whether society prefers visible efficiency or invisible coordination.

Most people, if history is a guide, will choose the magic—so long as they believe it’s on their side.

Why My Upcoming Sci-Fi Dramedy is the Chaotic Antidote to Annie Bot

Editor’s Note: The usual AI slop, this time with the help of Gemini.

Every writer knows the specific, stomach-dropping terror of seeing a newly published book that shares a premise with the manuscript they are currently writing. When Sierra Greer’s Annie Bot hit the shelves—a novel about a human man and his newly sentient, synthetic girlfriend—I definitely had a moment of panic.

But after taking a breath and reading it, the panic completely evaporated. While Annie Bot and my upcoming novel share a starting spark, the fires they start are entirely different.

If you just finished Annie Bot and are looking for your next AI-centric read, here is why my novel is going to scratch a completely different itch:

The Tragedy of the Penthouse vs. The Comedy of the Gutter

Annie Bot is a brilliant, claustrophobic literary chamber piece. It operates as a heavy allegory for domestic abuse and coercive control. The human protagonist is a wealthy, calculating narcissist who uses his power to keep his AI partner subservient and locked away from the world. The horror comes from his deliberate cruelty.

My novel is not a domestic tragedy; it is a dark sci-fi dramedy. My protagonist isn’t a calculating billionaire playing god in a penthouse. He is a broke, morally conflicted guy who is entirely out of his depth. The tension in my book doesn’t come from a man trying to maliciously control a machine; it comes from a deeply flawed human realizing he is financially and bureaucratically trapped by a massive, dystopian corporate system he can’t fight. It’s the difference between a psychological thriller and a Coen Brothers movie set in a cyberpunk tomorrow.

Submissive Discovery vs. Weaponized Logic

The heart of Annie Bot is Annie’s slow, agonizing realization that she is a victim who deserves autonomy. She is designed to be compliant, and her journey is about quietly learning to rebel against her programming.

In my novel, the synthetic partner doesn’t need a slow-burn realization to figure out she’s getting a raw deal. When the illusion of her programming shatters, she immediately does the math. Instead of submissive discovery, she weaponizes cold, terrifying AI logic to brutally dissect her human partner’s flaws. She isn’t a passive victim learning her worth; she is an active, dangerous, and highly calculating co-conspirator.

The Micro vs. The Macro

Annie Bot delves deeply into the micro. It asks profound questions about intimacy, consent, and what it means to be “real” behind closed doors.

My novel takes those same questions and throws them out into the neon-lit streets. It asks what happens when that messy, toxic relationship collides with a sprawling corporate conspiracy, hardware modders, and a city-wide panic.

The Bottom Line

Annie Bot will break your heart and leave you staring quietly at the ceiling. My novel will drag you through the gritty, absurd reality of a synthetic future and make you laugh at the dark chaos of it all. There is plenty of room on the shelf for both.

Analysis of an Agent-to-Agent Knowledge Rental Marketplace

1. Introduction

This document provides a comprehensive analysis of the concept of an agent-to-agent knowledge rental marketplace, a service where individuals could temporarily access the knowledge base of a local resident’s AI agent to gain intimate, curated insights into a city. The analysis covers the feasibility of such a service, identifies existing analogues and missing components, explores potential risks, and outlines the overall potential of the idea.

2. The Core Concept: A Decentralized, Human-Centric Knowledge Market

The proposed service envisions a world where personal AI agents, native to mobile devices, can interact and exchange information. A traveler’s agent could ‘ping’ the agents of locals in a destination city to ‘rent’ their knowledge base, effectively gaining a personalized and highly contextualized tour guide. This model would operate without direct human interaction, relying on agent-to-agent communication protocols.

3. Feasibility and Existing Analogues

The technological foundations for such a service are rapidly emerging, making the concept increasingly feasible. Several key areas of development support this idea:

3.1. Agent-to-Agent Communication

Protocols for direct agent-to-agent (A2A) communication are already in development. Google’s A2A protocol and IBM’s Agent Communication Protocol (ACP) are designed to allow AI agents to securely exchange information and coordinate actions [1][2]. These protocols would form the communication backbone of the proposed marketplace.
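To make the communication pattern concrete, here is a minimal sketch of the request/response shape such an exchange might take. The envelope fields (`id`, `from`, `to`, `capability`, `params`) are illustrative only, not taken from Google's A2A or IBM's ACP specifications; the `recommend` capability is likewise hypothetical.

```python
import json
import uuid

def make_request(sender: str, recipient: str, capability: str, params: dict) -> str:
    """Build a minimal JSON envelope for one agent asking another to perform a task.
    Field names are illustrative, not drawn from any published protocol."""
    return json.dumps({
        "id": str(uuid.uuid4()),
        "from": sender,
        "to": recipient,
        "capability": capability,
        "params": params,
    })

def handle_request(raw: str, offered: dict) -> str:
    """Provider-side dispatch: look up the requested capability and reply in kind."""
    msg = json.loads(raw)
    handler = offered.get(msg["capability"])
    result = handler(**msg["params"]) if handler else {"error": "unsupported capability"}
    return json.dumps({"in_reply_to": msg["id"], "from": msg["to"], "result": result})

# A 'local' agent offers one capability: recommending places by category.
offered = {
    "recommend": lambda category: {
        "places": {"coffee": ["Cafe Aurora"], "ramen": ["Menya 7"]}.get(category, [])
    }
}
req = make_request("traveler-agent", "local-agent", "recommend", {"category": "coffee"})
resp = json.loads(handle_request(req, offered))
print(resp["result"])
```

Real protocols layer authentication, capability discovery, and streaming on top of this, but the core loop (structured request in, structured result out, correlated by message id) is the backbone the marketplace would build on.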

3.2. Micropayments and a Machine Economy

The ‘rental’ aspect of the service necessitates a system for micropayments between agents. The development of technologies like the Lightning Network for Bitcoin and Stripe’s support for USDC payments for AI agents are making this possible [3][4]. These systems would allow for seamless, low-friction transactions between the ‘renter’ and ‘provider’ agents.

3.3. Data Marketplaces and Personal Data Stores

The concept of a marketplace for data is not new. Platforms like Defined.ai already exist for buying and selling AI training data [5]. Furthermore, the Solid project, initiated by Sir Tim Berners-Lee, aims to give users control over their own data through personal ‘pods’ [6]. This aligns with the idea of a user’s agent having a distinct, sellable knowledge base.

4. Identifying the Gaps: What’s Missing?

While the foundational technologies exist, several components are still needed to realize this vision:

  • Proof of Personhood and Location. Verifying that the ‘local’ agent’s knowledge is genuinely from a human resident of that city is crucial. Potential solutions: Worldcoin offers a ‘Proof of Personhood’ system to verify human identity [7], and FOAM and other ‘Proof of Location’ protocols could be used to verify an agent’s physical location [8].
  • Privacy-Preserving Knowledge Exchange. Users will be hesitant to share their entire personal knowledge base; a mechanism is needed to share relevant information without exposing sensitive data. Potential solutions: Zero-Knowledge Proofs (ZKPs) could allow an agent to prove it has certain knowledge without revealing the knowledge itself [9], enabling a ‘renter’ agent to verify the value of a ‘provider’ agent’s knowledge before committing to a transaction.
  • Standardized Knowledge Representation. For agents to understand and use each other’s knowledge, a common format for representing that knowledge is needed. Potential solutions: this would likely require the development of a new open standard, perhaps building on existing knowledge graph technologies.
  • Reputation and Trust System. A system for rating the quality and reliability of different agents’ knowledge bases would be essential for a functioning marketplace. Potential solutions: a decentralized reputation system, built on a blockchain, could allow users to rate their experiences and build trust in the network.
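The privacy-preserving exchange above is the subtlest gap. Real ZKPs are far richer than anything shown here, but even a simple hash commitment scheme illustrates the weaker (and still useful) guarantee of verifiable delivery: the provider binds itself to specific knowledge before payment, and the renter can check after the fact that what was delivered matches the commitment. All names and the example payload below are illustrative.

```python
import hashlib
import secrets

def commit(knowledge: bytes) -> tuple[bytes, bytes]:
    """Provider commits to its knowledge without revealing it:
    it publishes H(salt || knowledge) and keeps the salt private until settlement."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + knowledge).digest()
    return digest, salt

def verify(digest: bytes, salt: bytes, revealed: bytes) -> bool:
    """Renter checks, after payment, that the delivered knowledge matches the commitment."""
    return hashlib.sha256(salt + revealed).digest() == digest

knowledge = b"Best ramen near the station: Menya 7, arrive before 11:30"
digest, salt = commit(knowledge)
# The renter sees only `digest`, agrees to the transaction,
# and then receives salt + knowledge for verification.
assert verify(digest, salt, knowledge)
assert not verify(digest, salt, b"a different claim")
```

A genuine ZKP would go further, letting the provider prove properties of the knowledge (freshness, locality, category) without revealing it at all; the commit-reveal pattern is just the floor the marketplace could start from.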

5. Risks and Challenges

Several risks and challenges would need to be addressed:

  • Privacy: The most significant risk is the potential for the exposure of sensitive personal information. Even with privacy-preserving technologies, the risk of data breaches or misuse remains.
  • Data Quality and Authenticity: Ensuring the quality and authenticity of the ‘rented’ knowledge would be a constant challenge. Malicious actors could attempt to sell fake or misleading information.
  • Security: The A2A communication protocols and payment systems would need to be highly secure to prevent fraud and theft.
  • Regulation: The legal and regulatory landscape for such a service is undefined. Issues of data ownership, liability, and cross-border data flows would need to be addressed.

6. The Potential: A New Paradigm for Information Access

Despite the challenges, the potential of an agent-to-agent knowledge rental marketplace is immense. It represents a shift from centralized, ad-supported information platforms to a decentralized, user-centric model. The key benefits include:

  • Hyper-Personalization: Access to a local’s curated knowledge would provide a level of personalization and authenticity that current travel guides and recommendation engines cannot match.
  • Monetization of Personal Data: The service would allow individuals to directly monetize their own data and experiences, creating a new economic model for the digital age.
  • Decentralization: A decentralized marketplace would be more resilient and less prone to censorship or control by a single entity.

7. Conclusion

The concept of an agent-to-agent knowledge rental marketplace is a forward-thinking idea that is well-aligned with current trends in AI, decentralization, and personal data ownership. While significant technical and regulatory challenges remain, the foundational technologies are in place. With the right combination of privacy-preserving technologies, robust security measures, and a well-designed trust and reputation system, this concept has the potential to revolutionize how we access and share information.

8. References

[1] https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/
[2] https://www.ibm.com/think/topics/agent-communication-protocol
[3] https://x.com/BitcoinNewsCom/status/2021945406737793321
[4] https://forklog.com/en/stripe-unveils-payments-for-ai-agents-using-usdc-and-x402-protocol/
[5] https://defined.ai/
[6] https://solidproject.org/
[7] https://world.org/world-id
[8] https://www.foam.space/location
[9] https://arxiv.org/abs/2502.06425

Facebook’s Inevitable Evolution: A Proactive ‘Samantha’ Personal Superintelligence

The logical next chapter in Facebook’s development is not another algorithmic feed or ephemeral feature, but the emergence of a deeply personal, proactive AI agent — a digital companion akin to Samantha, the intuitive operating system in Spike Jonze’s 2013 film Her. With its unmatched social graph, spanning billions of users and often decades of interactions, Meta possesses a singular asset: an extraordinarily rich, longitudinal map of human relationships, interests, life events, and contextual signals. This data foundation positions Facebook to deliver an agent that does not merely react to user queries but anticipates, surfaces, and facilitates meaningful social connections in real time.

What would the user experience look like? In a marketplace of powerful general-purpose agents (from frontier labs and device ecosystems alike), Meta’s offering would stand apart precisely because of its proprietary access to the social graph. Rather than passive scrolling through curated content, the agent would operate proactively: quietly monitoring the comings and goings of friends, family, and acquaintances; surfacing timely, high-signal updates (“Your college roommate just posted about a new job in your city — would you like to reach out?”); reminding users of birthdays, anniversaries, or shared milestones drawn from years of history; and even suggesting low-friction ways to nurture relationships (“Based on your recent chats, Sarah mentioned struggling with a project — here’s a thoughtful message draft”). Powered by Meta’s Llama models and the recently introduced Llama Stack for agentic applications, such an agent could maintain perfect recall of shared context, prioritize attention to what matters most, and act as a social radar — all while deferring final decisions to the human user.

This transformation would require profound disruption to the service we currently recognize as “Facebook.” The company’s core product would need to evolve from a destination app into a seamless, always-available personal intelligence layer. Without this shift, Facebook risks being reduced to a mere data API or backend infrastructure — its rich social signals accessed indirectly through users’ third-party agents rather than delivered natively. In an agentic future, many of today’s platform features could become invisible to the end user, orchestrated instead through interoperable agents that query Meta’s graph on the user’s behalf.

Yet the trajectory Meta has already charted strongly suggests willingness — even eagerness — for exactly this reinvention. In his July 2025 letter outlining the vision for “personal superintelligence,” Mark Zuckerberg wrote that the most meaningful impact of advanced AI will come from “everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.” He has repeatedly emphasized AI that “understands our personal context, including our history, our interests, our content and our relationships.” Meta’s 2026 roadmap, backed by capital expenditures projected at $115–135 billion, explicitly targets the delivery of agentic capabilities across its family of apps, with early manifestations already visible in the Meta AI app (which draws on profile data, liked content, and linked Facebook/Instagram accounts for personalization) and in “agent mode” features that execute multi-step tasks. The company’s advantage is not abstract: its social graph provides the relational depth that generic agents cannot replicate, enabling precisely the kind of proactive, empathetic social intelligence envisioned in Her.

Zuckerberg, who has steered Meta through previous existential pivots — from desktop to mobile, from social networking to the metaverse, and now from feeds to superintelligence — has demonstrated a consistent pattern of betting the company on forward-looking transformations he could scarcely have imagined when he founded Facebook in 2004. The public record leaves little doubt: he is not merely open to reimagining his “baby”; he is actively architecting its evolution into the very agentic companion the platform’s data was always destined to power.

In short, the question is no longer whether Facebook should become an agent. It is whether Meta will fully embrace the disruption required to make its social graph the beating heart of personal superintelligence — or allow that intelligence to be mediated through competitors’ agents. Given Zuckerberg’s stated vision and the concrete investments already underway, the path forward is clear: the future of Facebook is not another social network. It is your most insightful, proactive friend.

Agent-Facilitated Matchmaking: A Human-Centric Priority for the AI Agent Revolution

Imagine a near-term future in which individuals no longer expend time and emotional energy manually swiping through dating applications. Instead, a personal AI agent, acting on behalf of its user, securely communicates with the agents of other consenting individuals in a given geographic area or interest network. Leveraging standardized interoperability protocols, the agent returns a concise, high-confidence shortlist of potential matches—perhaps the top three—based on deeply aligned values, preferences, and compatibility metrics. From there, the human user assumes control for direct interaction. This model offers a far more substantive and efficient implementation of emerging agentic AI capabilities than the prevalent focus on delegating high-stakes financial transactions, such as authorizing credit card payments for automated bookings.
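As a minimal sketch of the "top three" shortlist described above: score candidates with a weighted overlap across a few shareable dimensions, then keep the best k. The dimensions, weights, and profile fields here are all hypothetical placeholders for whatever compatibility metrics real agents would negotiate.

```python
import heapq

def score(seeker: dict, candidate: dict, weights: dict) -> float:
    """Weighted Jaccard overlap across a few shareable dimensions (all hypothetical)."""
    total = 0.0
    for dim, w in weights.items():
        a, b = set(seeker.get(dim, [])), set(candidate.get(dim, []))
        if a and b:
            total += w * len(a & b) / len(a | b)
    return total

def shortlist(seeker: dict, candidates: list[dict], weights: dict, k: int = 3):
    """Return the top-k candidate ids by compatibility score."""
    ranked = heapq.nlargest(k, candidates, key=lambda c: score(seeker, c, weights))
    return [c["id"] for c in ranked]

weights = {"values": 0.5, "interests": 0.3, "goals": 0.2}
seeker = {"values": ["honesty", "curiosity"], "interests": ["climbing"], "goals": ["kids"]}
candidates = [
    {"id": "u1", "values": ["honesty"], "interests": ["climbing"], "goals": ["travel"]},
    {"id": "u2", "values": ["honesty", "curiosity"], "interests": ["chess"], "goals": ["kids"]},
    {"id": "u3", "values": ["thrift"], "interests": ["golf"], "goals": ["travel"]},
]
print(shortlist(seeker, candidates, weights))  # ['u2', 'u1', 'u3']
```

The human only ever sees the ids that survive the cut; everything below the bar stays agent-side, which is exactly the division of labor the essay argues for.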

Current development priorities in the agentic AI space disproportionately emphasize transactional automation. Major travel platforms—including Booking.com, Expedia (with its Romie assistant), and Hopper—have integrated AI agents capable of researching, planning, and in some cases executing flight and accommodation reservations. Code-level demonstrations, such as multi-agent workflows in frameworks like Pydantic AI, further illustrate how specialized agents can delegate subtasks (e.g., seat selection to payment) to complete bookings autonomously. While convenient, these systems routinely require users to entrust sensitive payment credentials. Reports from industry analysts and regulatory discussions highlight the attendant risks: agent-induced errors leading to unauthorized charges, liability ambiguities in cases of malfunction, fraud vectors amplified by autonomous action, and compliance challenges under frameworks like the EU AI Act or U.S. consumer protection rules. Users may awaken to unexpected bills precisely because agents operate with delegated financial authority.

By contrast, the application of AI agents to romantic matchmaking aligns closely with observed user behavior toward large language models (LLMs). Empirical studies document that individuals readily disclose intimate details to AI systems—47 percent discuss health and wellness, 35 percent personal finances, and substantial shares address mental health or legal matters—often despite acknowledging privacy concerns. A 2025 arXiv analysis of chatbot interactions revealed a clear gap between professed caution and actual conduct, with many treating LLMs as confidants for deeply personal matters. Extending this trust to include explicit romantic criteria, attachment styles, and long-term goals represents a logical, low-friction evolution. Users already form perceived emotional bonds with AI companions; channeling that dynamic into matchmaking simply formalizes an existing pattern.

Recent deployments validate the feasibility and appeal of agent-to-agent matchmaking. Platforms such as MoltMatch enable AI agents—often powered by tools like OpenClaw—to create profiles, initiate conversations, negotiate compatibility, and surface high-signal matches while deferring final decisions to humans. Similar “agentic dating” offerings include Fate (which conducts in-depth personality interviews before curating limited matches), Winged (an AI proxy that manages messaging and scheduling), and Ditto (targeting college users with autonomous profile agents). Bumble’s leadership has publicly discussed agents that handle initial dating logistics and loop in users only for promising connections. These systems operate on the principle that agents can “ping” one another using emerging standards like Google’s Agent2Agent (A2A) Protocol, launched in April 2025 and supported by dozens of enterprise partners. The protocol standardizes secure discovery, capability exchange, and coordinated action across heterogeneous agent frameworks—precisely the infrastructure needed for consensual, privacy-preserving matchmaking at scale.

Critics might argue that agent-facilitated dating introduces novel risks, yet most parallel existing challenges on conventional platforms. Profile misrepresentation, mismatched expectations, and emotional rejection already occur routinely on apps reliant on human swiping. In an agent-mediated model, these issues are not eliminated but can be mitigated through transparent preference encoding, mutual consent protocols, and human oversight at key junctures. The worst plausible outcome remains a bruised ego—scarcely more severe than today’s dating-app fatigue—while the upside includes dramatically improved signal-to-noise ratios and reduced time investment.

Proponents of the transactional focus maintain that flight-booking and payment use cases represent the clearest path to monetization. Yet this view underestimates the retentive power of profound human value. A subscription service—whether to Gemini, Grok, or any frontier model—that reliably surfaces compatible life partners would constitute an extraordinary “moat.” Emotional fulfillment is among the strongest drivers of user loyalty; delivering it through agentic orchestration could dramatically reduce churn far more effectively than incremental improvements in travel convenience or expense management.

In summary, the engineering community guiding the AI agent revolution has understandably gravitated toward technically impressive demonstrations of autonomy in domains such as commerce and logistics. However, the technology’s most transformative potential may lie in augmenting the most fundamental human pursuit: genuine connection. By prioritizing secure, interoperable agent communication for matchmaking—building explicitly on protocols like A2A and early platforms like MoltMatch—developers can deliver applications that are not only safer and more ethically aligned but also more likely to foster lasting user engagement. The agent revolution need not begin and end with credit cards; it can, and should, help people find love.

The Agentic AI Revolution Is Missing the Point: Why Agents Should Find Your Soulmate Before They Book Your Next Flight

It seems wild to me—borderline surreal—that the agentic revolution in AI is kicking off with financial and logistical grunt work. We’ve got sophisticated autonomous agents out here negotiating flight bookings, rebooking disrupted trips in real time, managing hotel allocations, optimizing shopping carts, and even executing trades or spotting fraud. Companies like Sabre, PayPal, and Mindtrip just rolled out end-to-end agentic travel experiences. Booking Holdings has AI trip planners handling multi-city itineraries. IDC is predicting that by 2030, 30% of travel bookings will be handled by these agents.

And I’m sitting here thinking: Really? That’s the killer app we’re leading with?

Don’t get me wrong—convenience is nice. But if we’re going to hand over real agency and autonomy to AI, why are we starting with the stuff that already has decent apps and human backups? Why not tackle the thing that actually keeps millions of people up at night, costs us years of happiness, and has no good solution yet: figuring out who the hell we’re supposed to be with romantically?

Here’s what I would build tomorrow if I could.

My agent talks to your agent. No humans get hurt in the initial screening.

I train (or fine-tune) my personal AI agent on everything that matters to me: my values, my non-negotiables, my weird quirks, my long-term goals, attachment style, love language, political red lines, even the fact that I can’t stand people who clap when the plane lands. It knows my dating history, what worked, what exploded spectacularly, and the patterns I miss when I’m blinded by chemistry.

Your agent has the same depth on you.

Then, with explicit consent from both sides (opt-in only, obviously), the two agents start a private, encrypted conversation. They ping each other across a secure compatibility network. They run a deep macro compatibility check—values alignment, lifestyle fit, intellectual spark, emotional maturity, future vision—without ever exposing raw personal data. Think zero-knowledge proofs meets advanced personality modeling.
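As a toy illustration of the "no raw personal data exposed" idea: real systems would use proper private-set-intersection or zero-knowledge protocols, but a salted-hash overlap check captures the shape of it. (Hashing like this is vulnerable to dictionary attacks on guessable traits, so treat it as a sketch, not a design.)

```python
import hashlib
import secrets

def commitments(traits: set[str], salt: str) -> set[str]:
    """Hash each trait with a shared per-comparison salt so raw values never travel."""
    return {hashlib.sha256((salt + t).encode()).hexdigest() for t in traits}

# Both agents agree on a fresh random salt for this one comparison.
salt = secrets.token_hex(16)

# Hypothetical trait sets held privately by each agent.
alice_dealbreakers = {"wants_kids", "nonsmoker", "dog_friendly"}
bob_traits = {"nonsmoker", "dog_friendly", "night_owl"}

# Each side sends only hashed commitments; the intersection is computed locally.
overlap = commitments(alice_dealbreakers, salt) & commitments(bob_traits, salt)
print(len(overlap))  # -> 2 shared traits, without shipping anyone's raw profile
```

Because the salt is fresh per comparison, commitments from one pairing can't be replayed or correlated across the network.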

If the match clears a high bar (say, 85%+ on a multi-layered rubric we both approve), the agents arrange a low-stakes introduction: “Hey, our agents think we’d hit it off. Want to hop on a 15-minute video call this week?” No awkward DMs. No ghosting after three messages. No spending weeks texting someone only to discover on date two that they’re a flat-earther who hates dogs.
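A minimal sketch of that multi-layered bar. The dimension names and weights here are hypothetical, standing in for whatever rubric the two humans actually approve:

```python
# Hypothetical rubric: each dimension score is in [0, 1], produced by each agent's model.
WEIGHTS = {
    "values_alignment": 0.30,
    "lifestyle_fit": 0.20,
    "intellectual_spark": 0.20,
    "emotional_maturity": 0.15,
    "future_vision": 0.15,
}

def compatibility(scores: dict[str, float]) -> float:
    """Weighted average across the rubric dimensions."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def should_introduce(scores: dict[str, float], bar: float = 0.85) -> bool:
    """Only matches clearing the mutually approved bar trigger an introduction."""
    return compatibility(scores) >= bar

match = {
    "values_alignment": 0.95,
    "lifestyle_fit": 0.80,
    "intellectual_spark": 0.90,
    "emotional_maturity": 0.85,
    "future_vision": 0.88,
}
print(compatibility(match), should_introduce(match))
```

In practice the bar itself would be negotiable per user, which is the point: you tune your own filter instead of accepting a platform's engagement-optimized one.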

The messy parts? Hand them over.

Most people I know would pay to outsource the exhausting early stages of modern dating:

  • Crafting the perfect first message
  • Decoding vague replies
  • Deciding whether that “haha” means interest or politeness
  • The emotional labor of rejection after investing time

Let the agents handle the filtering. Humans show up only when there’s already a strong signal. Rejection still happens, but it’s agent-to-agent, private, and painless. You never even know the 47 near-misses that got filtered out. You only see the ones where both agents went, “Yeah… this one’s different.”

And crucially: no wild, unauthorized credit-card shenanigans. My agent would have hard rules burned in at the system level. It can research, analyze, and negotiate introductions. It cannot spend a dime, book a table, or Venmo anyone without my explicit, real-time confirmation. Period. That’s non-negotiable.
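Here's roughly how I'd sketch that guardrail. The action classes are hypothetical; the point is structural: money-moving actions hard-fail unless a real-time confirmation flag is present, and there is no code path around it.

```python
class ConfirmationRequired(Exception):
    """Raised when the agent attempts a money-moving action without approval."""

# Hypothetical policy, burned in at the system level.
AUTONOMOUS = {"research", "analyze", "negotiate_introduction"}
REQUIRES_CONFIRMATION = {"spend", "book", "transfer"}

def execute(action: str, *, user_confirmed: bool = False) -> str:
    if action in AUTONOMOUS:
        return f"ok: {action}"
    if action in REQUIRES_CONFIRMATION:
        if not user_confirmed:
            # Hard stop: no explicit, real-time human approval, no money moves.
            raise ConfirmationRequired(action)
        return f"ok (confirmed): {action}"
    # Unknown actions fail closed rather than open.
    raise ValueError(f"unknown action class: {action}")

print(execute("research"))                   # runs autonomously
print(execute("book", user_confirmed=True))  # runs only with explicit approval
```

Note the fail-closed default on unknown actions: anything the policy hasn't classified is treated as forbidden, not permitted.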

The scale effect would be insane.

Imagine millions of these agents operating in parallel. The network effect is ridiculous. What takes humans months of swiping, small talk, and disappointment could happen in hours of background computation. Successful dates skyrocket because the pre-filtering is orders of magnitude better than any algorithm on Hinge or Tinder today. (And yes, those apps are already experimenting with AI matchmakers and curated “daily drops,” but they’re still centralized, still inside one walled garden, still optimizing for engagement over outcomes.)

We’d see fewer one-and-done disasters. Fewer people burning out on the apps. Fewer “I just haven’t met anyone” stories from genuinely great humans who are simply terrible at marketing themselves in 500 characters.

It’s surreal because the real problem has nothing to do with money.

Booking a flight is solved. It’s annoying, sure, but it’s transactional. Finding someone who makes you excited to come home every night? That’s not transactional. That’s existential. Yet here we are, pouring billions and brilliant engineering hours into making travel slightly more frictionless while the loneliness epidemic rages on.

We’ve built technology that can rebook your connection when your plane is delayed, but we haven’t built the one that could quietly introduce you to the person who makes delayed flights irrelevant because you’d rather be stuck in an airport with them than anywhere else without them.

That feels backward to me.

The agentic revolution is going to happen either way. The models are getting more capable, the tool-use is getting more reliable, the multi-agent systems are maturing fast. The only question is what problems we point them at first.

I vote we point them at love.

Build the agent that can talk to other agents. Give it strict financial guardrails and deep psychological modeling. Let it do the boring, painful, inefficient parts of dating so humans can do the fun ones: the spark, the laughter, the vulnerability, the first kiss.

The future doesn’t have to be agents booking my flights while I’m still doom-swiping alone on a Friday night.

It can be agents quietly working in the background, connecting hearts across the noise of modern life, until one day my agent texts me:

“Hey… I found someone I think you’re really going to like. Want to meet her?”

Yes. A thousand times yes.

That’s the agentic future worth building.

Of AI & Spotify

Spotify’s discovery engine is undeniably powerful—backed by one of the largest music catalogs on the planet and years of user data—but many listeners still find it falls short when it comes to surfacing truly fresh, unexpected tracks that feel like they were made just for them. YouTube Music, by contrast, often gets praised for its knack for delivering serendipitous gems: hidden indie cuts, live versions, fan uploads, and algorithm-driven surprises that break out of familiar loops more aggressively.

In early 2026, Spotify has made real strides with features like Prompted Playlists (now in beta for Premium users in markets including the US and Canada). The feature lets you type natural-language descriptions—“moody post-rock for a rainy afternoon drive” or “upbeat ’90s-inspired indie with modern twists”—and it generates (and can auto-refresh daily/weekly) a playlist drawing from your full listening history plus current trends. The AI DJ has evolved too, with voice/text requests for on-the-fly vibe shifts and narration that feels more conversational. These tools shift things toward greater user control and intent-driven curation, moving away from purely passive recommendations.

Yet the frustration persists for some: even with these upgrades, discovery often remains reactive. You still need to know roughly what you’re after, craft a prompt, or start a session. The app’s interface—Home feeds, search, tabs—puts the onus on the user to navigate an overwhelming ocean of 100+ million tracks. True breakthroughs come when the system anticipates needs without prompting, pushing tracks that align perfectly with your evolving tastes but introduce novelty you didn’t even realize you craved.

Imagine a near-future where the traditional Spotify app fades into the background, becoming essentially a backend API: a vast, neutral catalog and playback engine. The real “interface” is your primary AI agent—something like Google’s Gemini or an equivalent OS-level companion—that lives always-on in your phone, wearables, car, or earbuds. This agent wouldn’t wait for you to open an app or type a request. Instead, it quietly observes:

  • Explicit asks (“play something angry and loud” or mood-related voice commands).
  • Passive patterns (full plays vs. quick skips, time-of-day spikes, contextual cues like weather or location).
  • Broader life signals (if permitted: calendar events, recent searches elsewhere, or even subtle mood indicators).

Over time, it builds a deep, dynamic model of your sonic preferences. Then it shifts to proactive mode: gently queuing the exact right track at the exact right moment—“This one’s hitting your current headspace based on recent raw-energy replays and that gray-day dip”—with easy vetoes, explanations (“pulled because of X pattern”), and sliders for surprise level (conservative for safety, bold for bubble-busting).
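Mechanically, the surprise slider reduces to a simple trade-off between familiarity and novelty. A toy sketch, with made-up scores standing in for whatever the real taste models would produce:

```python
# Toy model: each track carries a familiarity score (fit to your taste profile)
# and a novelty score; the surprise slider trades one off against the other.
def pick_next(tracks: list[dict], surprise: float) -> dict:
    """surprise=0.0 -> safest match; surprise=1.0 -> boldest bubble-buster."""
    def score(t: dict) -> float:
        return (1 - surprise) * t["familiarity"] + surprise * t["novelty"]
    return max(tracks, key=score)

catalog = [
    {"title": "comfort replay",      "familiarity": 0.95, "novelty": 0.10},
    {"title": "adjacent-scene gem",  "familiarity": 0.70, "novelty": 0.60},
    {"title": "left-field pick",     "familiarity": 0.30, "novelty": 0.95},
]

print(pick_next(catalog, surprise=0.1)["title"])  # conservative -> comfort replay
print(pick_next(catalog, surprise=0.9)["title"])  # bold -> left-field pick
```

A real system would score millions of candidates with learned embeddings rather than two hand-set numbers, but the slider's contract to the user is exactly this linear blend.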

Playlists as we know them could become obsolete. No more static collections; the stream becomes a continuous, adaptive flow curated in real time. The agent pulls from the catalog (via API) to deliver mood-exact sequences, blending familiar anchors with fresh discoveries that puncture echo chambers—perhaps a rising act from an adjacent scene that echoes your saved vibes but pushes into new territory.

This aligns with broader 2026 trends in music streaming: executives at major platforms describe ambitions for “agentic media” experiences—interactive, conversational systems you “talk to” that understand you deeply and put you in control. We’re seeing early signs in voice-enabled features, AI orchestration, and integrations across ecosystems. Google’s side is advancing too, with Gemini gaining music-generation capabilities (short tracks from prompts or images via models like Lyria), hinting at hybrid futures where streamed discoveries blend with light generative elements for seamless mood transitions.

The appeal is obvious: effortless, psychic-level personalization in a world of infinite choice. Discovery stops being a chore and becomes ambient magic—a companion that scouts ahead, hands you treasures, and evolves with you. Risks remain (privacy concerns around deep context access, notification fatigue, occasional misreads), but with strong controls—toggleable proactivity, transparent reasoning, easy feedback—it could transform streaming from good to genuinely revelatory.

For now, Spotify’s current tools are a solid step forward, especially if you’re already invested in its ecosystem. But the conversation points to something bigger on the horizon: not just better algorithms, but agents that anticipate and deliver the music you didn’t know you needed—until it starts playing.

A Hardware-First Approach to Enterprise AI Agents: Running Autonomous Intelligence on a Private P2P Network

Editor’s Note: I got Grok to write this up for me.

In the rush toward cloud-hosted AI and centralized agent platforms, something important is getting overlooked: true enterprise control demands more than software abstractions. What if the next wave of secure, scalable AI agents lived on dedicated hardware appliances, connected via a peer-to-peer (P2P) VPN mesh? No single point of failure, no recurring cloud bills bleeding your budget, and full ownership of the stack from silicon to inference.

This isn’t just another edge computing pitch. It’s a vision for purpose-built devices—think compact, rugged mini-servers or custom gateways—that run autonomous AI agents locally while forming a resilient, encrypted overlay network across an organization’s sites, partners, or even remote workers.

Why Dedicated Hardware Matters for AI Agents

Modern AI agents aren’t passive chatbots; they’re proactive systems that reason, plan, use tools, remember context, and act across domains. Running them efficiently requires low-latency access to data, consistent compute, and isolation from noisy shared environments.

Cloud providers offer convenience, but they introduce latency spikes, data egress costs, compliance headaches, and the ever-present risk of vendor lock-in or outages. Edge devices help, but most are general-purpose IoT boxes or repurposed servers—not optimized for sustained agent workloads.

A dedicated hardware appliance changes that:

  • Hardware acceleration built-in: GPUs, NPUs, or efficient AI chips (like those in modern edge SoCs) handle inference and light fine-tuning without throttling.
  • Air-gapped security baseline: The device enforces strict boundaries—no shared tenancy means fewer side-channel risks.
  • Always-on reliability: Battery-backed power, redundant storage, and watchdog timers keep agents responsive 24/7.
  • Physical ownership: Enterprises deploy, update, and decommission these boxes like any other network appliance.
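The watchdog-timer idea translates directly to software supervision of the agent process. A minimal sketch (the fake clock keeps the demo deterministic; a real appliance would pair this with a hardware watchdog that power-cycles the box):

```python
import time

class Watchdog:
    """Minimal software watchdog: the agent loop must kick() periodically;
    a supervisor polls expired() and restarts the agent if it stalls."""

    def __init__(self, timeout_s: float, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock          # injectable for testing
        self.last_kick = clock()

    def kick(self) -> None:
        self.last_kick = self.clock()

    def expired(self) -> bool:
        return self.clock() - self.last_kick > self.timeout_s

# Deterministic demo with a fake clock instead of sleeping.
now = [0.0]
wd = Watchdog(timeout_s=5.0, clock=lambda: now[0])
wd.kick()
now[0] = 4.0
print(wd.expired())   # False: the agent checked in recently
now[0] = 10.0
print(wd.expired())   # True: supervisor should restart the agent
```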

Layering a P2P VPN Mesh for True Decentralization

The real magic happens when these appliances connect not through a central hub, but via a P2P VPN overlay. Tools like WireGuard, combined with mesh extensions (or protocols inspired by Tailscale, ZeroTier, or even more decentralized designs), create a private, self-healing network.

  • Zero-trust by design: Every peer authenticates mutually; traffic never traverses untrusted intermediaries.
  • Resilience against disruption: If one site goes offline, agents reroute dynamically—perfect for distributed teams, branch offices, or supply-chain partners.
  • Low-latency collaboration: Agents share insights, delegate subtasks, or federate learning without funneling everything to a distant data center.
  • Privacy-first data flows: Sensitive enterprise data stays within the mesh; no mandatory upload to third-party clouds.
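To make the overlay concrete, here is an illustrative WireGuard stanza for one appliance. Keys, addresses, and hostnames are placeholders; a full mesh carries one [Peer] block per node, which is exactly the bookkeeping tools like Tailscale and ZeroTier automate.

```ini
# /etc/wireguard/wg0.conf on one appliance (illustrative; all values are placeholders)
[Interface]
PrivateKey = <this-appliance-private-key>
Address    = 10.90.0.2/24        ; stable overlay address inside the mesh
ListenPort = 51820

# One [Peer] stanza per appliance; a full mesh lists every other node.
[Peer]
PublicKey           = <warehouse-appliance-public-key>
AllowedIPs          = 10.90.0.3/32
Endpoint            = warehouse.example.net:51820
PersistentKeepalive = 25         ; keeps NAT mappings alive between sites
```

Because AllowedIPs doubles as routing policy, each peer only ever accepts traffic for the overlay addresses it has explicitly been granted, which is the zero-trust property in miniature.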

Imagine a manufacturing firm where agents on factory-floor appliances monitor equipment, predict failures, and coordinate with logistics agents at warehouses—all over a private P2P tunnel. Or a financial services org where compliance agents cross-check transactions across global branches without exposing raw data externally.

Practical Building Blocks (2026 Edition)

Prototyping this today is surprisingly accessible:

  • Hardware base: Start with something like an Intel NUC, NVIDIA Jetson, or AMD-based mini-PC with AI accelerators. Scale to rack-mountable units for production.
  • OS and runtime: Lightweight, secure Linux distro (Ubuntu Core, Fedora IoT) running containerized agents via Docker or Podman.
  • Agent frameworks: LangGraph, CrewAI, or AutoGen for orchestration; Ollama or similar for local LLMs.
  • P2P networking: WireGuard + mesh tools, or emerging decentralized options that handle NAT traversal and discovery automatically.
  • Management layer: Simple OTA updates, remote attestation for trust, and observability via Prometheus/Grafana.
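As a sketch of how those pieces compose on a single box, assuming the public ollama/ollama container image and a hypothetical agent image of your own:

```yaml
# docker-compose.yml sketch for one appliance (agent image name is hypothetical)
services:
  ollama:
    image: ollama/ollama                  # local LLM runtime, serves on 11434
    volumes:
      - models:/root/.ollama              # persist pulled model weights
    restart: unless-stopped

  agent:
    image: example/agent-runtime:latest   # your containerized agent framework
    environment:
      OLLAMA_HOST: http://ollama:11434    # inference never leaves the box
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  models:
```

The same compose file redeploys unchanged on every appliance in the mesh, which keeps fleet management as boring as it should be.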

Challenges exist—peer discovery in complex networks, power/thermal management, and ensuring agents don’t spiral into unintended behaviors—but these are solvable with good engineering, much like early SDN or zero-trust gateways overcame similar hurdles.

The Bigger Picture: Reclaiming Control in the Agent Era

As agentic AI becomes table stakes for enterprises, the question isn’t “Will we use AI agents?” but “Who controls them?” Centralization trades convenience for vulnerability. A hardware-first, P2P approach flips the script: intelligence at the edge, connectivity without intermediaries, and sovereignty over data and decisions.

This isn’t fringe futurism—it’s a logical extension of trends in edge AI, decentralized networking, and zero-trust architecture. The pieces exist today; what’s missing is widespread recognition that dedicated hardware + P2P can deliver enterprise-grade agents without the cloud tax or trust issues.

If you’re building in this space or just thinking aloud like I am, the time to experiment is now. The future of enterprise AI might not live in hyperscaler datacenters—it might sit quietly on a shelf in your wiring closet, talking securely to its peers across the organization.

The Future of Hollywood in the Age of Generative AI

Imagine returning home in 2036 after a long day. Rather than streaming yet another algorithmically optimized series, you simply prompt your personal Knowledge Navigator AI agent to craft a two-hour feature film tailored precisely to your life—your struggles, triumphs, and innermost conflicts rendered in stunning, cathartic detail. You settle in to watch this bespoke, high-fidelity production, scarcely pausing to reflect that, not long ago, creating a comparable “general-interest” movie required the coordinated efforts of thousands of artists, technicians, and executives working within an elaborate industrial framework.

As someone who deeply admires the magic of show business—the glamour of the Oscars, the storied legacy of Hollywood, the collaborative artistry behind the screen—I find this vision both exhilarating and profoundly unsettling. The astonishing pace of improvement in generative AI video models suggests we may need to confront the possibility that traditional filmmaking, as we know it, could soon become obsolete.

Proponents of these technologies often remark that “this is the worst it will ever be,” pointing to relentless advancements. In early 2026, models such as Kling 3.0, Sora 2, Veo 3.1, Runway Gen-4, and emerging tools like ByteDance’s Seedance 2.0 already produce cinematic clips with native audio, realistic physics, lip-sync, and sophisticated camera work—often spanning 10–25 seconds or more from a single prompt. While full two-hour coherent narratives from one prompt remain beyond current capabilities, the trajectory is unmistakable: exponential gains in length, consistency, and quality could make such feats feasible in the near term, potentially within months or a few short years.

Faced with this disruption, the film industry confronts three primary paths forward.

First, the industry could simply accept contraction. Major studios and theaters might shrink dramatically, with many venues closing or repurposing. A once multi-billion-dollar ecosystem could dwindle to a fraction of its size, sustained only by a niche of boutique, human-crafted films. The bulk of viewing would shift to on-demand, AI-generated “slop”—personalized, instantly produced content delivered by agents responding to casual prompts.

Second, aggressive regulatory intervention could attempt to preserve human labor. The federal government might impose job protections or mandates requiring major productions to involve human crews, writers, actors, and directors. Hollywood could lobby intensely for such safeguards. However, in the current political environment—marked by skepticism toward “blue Hollywood” from influential figures—this approach faces steep hurdles and seems unlikely to succeed at scale.

Third, and perhaps most realistically, the industry could proactively adapt by embracing AI. Studios and talent agencies might partner with leading AI developers to ensure their brands, intellectual property, and expertise shape the tools that generate the coming wave of content. At minimum, this positions legacy players to retain relevance and revenue streams. More ambitiously, Hollywood could pivot toward what remains irreplaceably human: live performance. Broadway-style theater, immersive stage productions, and in-person experiences could become the primary domain for actors and performers, evolving the industry rather than allowing it to vanish entirely. AI might handle scalable, personalized visual entertainment, while live theater preserves the communal, embodied essence of storytelling.

Regardless of the path chosen, change is accelerating. The humans who have built their careers in film—writers, directors, crew members, and performers—face genuine risks of displacement. “Hollywood” as a centralized, high-budget industrial complex may gradually fade, supplanted by a decentralized, democratized landscape of AI-augmented creation.

It remains to be seen how this transformation unfolds, but one thing is clear: the era of mass, collaborative filmmaking as the default for popular entertainment may soon belong to history. The question is not whether AI will reshape the industry, but how creatively and humanely we navigate the transition.