The Agentic AI Revolution Is Missing the Point: Why Agents Should Find Your Soulmate Before They Book Your Next Flight

It seems wild to me—borderline surreal—that the agentic revolution in AI is kicking off with financial and logistical grunt work. We’ve got sophisticated autonomous agents out here negotiating flight bookings, rebooking disrupted trips in real time, managing hotel allocations, optimizing shopping carts, and even executing trades or spotting fraud. Companies like Sabre, PayPal, and Mindtrip just rolled out end-to-end agentic travel experiences. Booking Holdings has AI trip planners handling multi-city itineraries. IDC is predicting that by 2030, 30% of travel bookings will be handled by these agents.

And I’m sitting here thinking: Really? That’s the killer app we’re leading with?

Don’t get me wrong—convenience is nice. But if we’re going to hand over real agency and autonomy to AI, why are we starting with the stuff that already has decent apps and human backups? Why not tackle the thing that actually keeps millions of people up at night, costs us years of happiness, and has no good solution yet: figuring out who the hell we’re supposed to be with romantically?

Here’s what I would build tomorrow if I could.

My agent talks to your agent. No humans get hurt in the initial screening.

I train (or fine-tune) my personal AI agent on everything that matters to me: my values, my non-negotiables, my weird quirks, my long-term goals, attachment style, love language, political red lines, even the fact that I can’t stand people who clap when the plane lands. It knows my dating history, what worked, what exploded spectacularly, and the patterns I miss when I’m blinded by chemistry.

Your agent has the same depth on you.

Then, with explicit consent from both sides (opt-in only, obviously), the two agents start a private, encrypted conversation. They ping each other across a secure compatibility network. They run a deep macro compatibility check—values alignment, lifestyle fit, intellectual spark, emotional maturity, future vision—without ever exposing raw personal data. Think zero-knowledge proofs meets advanced personality modeling.

If the match clears a high bar (say, 85%+ on a multi-layered rubric we both approve), the agents arrange a low-stakes introduction: “Hey, our agents think we’d hit it off. Want to hop on a 15-minute video call this week?” No awkward DMs. No ghosting after three messages. No spending weeks texting someone only to discover on date two that they’re a flat-earther who hates dogs.
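The two-sided gate described above can be sketched in a few lines. This is a toy illustration, not a real privacy-preserving protocol: the rubric dimensions, weights, and threshold are all hypothetical, and a production system would exchange zero-knowledge proofs or use private set intersection rather than passing raw scores around.

```python
# Toy sketch of the mutual matching gate. All dimension names and weights
# are hypothetical; real agents would never expose raw scores like this.

RUBRIC_WEIGHTS = {
    "values": 0.30,
    "lifestyle": 0.20,
    "intellect": 0.20,
    "emotional_maturity": 0.15,
    "future_vision": 0.15,
}
THRESHOLD = 0.85  # the "85%+" bar both users approved in advance

def weighted_score(dimension_scores: dict[str, float]) -> float:
    """Collapse per-dimension scores (0..1) into one weighted score."""
    return sum(RUBRIC_WEIGHTS[d] * dimension_scores[d] for d in RUBRIC_WEIGHTS)

def mutual_match(scores_a_of_b: dict[str, float],
                 scores_b_of_a: dict[str, float],
                 threshold: float = THRESHOLD) -> bool:
    """Arrange an introduction only if BOTH agents clear the bar."""
    return (weighted_score(scores_a_of_b) >= threshold
            and weighted_score(scores_b_of_a) >= threshold)
```

The key design point is the conjunction: a lopsided crush never surfaces, because one enthusiastic agent can't outvote the other.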

The messy parts? Hand them over.

Most people I know would pay to outsource the exhausting early stages of modern dating:

  • Crafting the perfect first message
  • Decoding vague replies
  • Deciding whether that “haha” means interest or politeness
  • The emotional labor of rejection after investing time

Let the agents handle the filtering. Humans show up only when there’s already a strong signal. Rejection still happens, but it’s agent-to-agent, private, and painless. You never even hear about the 47 near-misses that got filtered out. You only see the ones where both agents went, “Yeah… this one’s different.”

And crucially: no wild, unauthorized credit-card shenanigans. My agent would have hard rules burned in at the system level. It can research, analyze, and negotiate introductions. It cannot spend a dime, book a table, or Venmo anyone without my explicit, real-time confirmation. Period. That’s non-negotiable.
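A minimal sketch of what "hard rules burned in at the system level" could look like: every tool call passes through a policy gate before it executes, and anything that moves money is refused unless a fresh, explicit human confirmation accompanies that specific call. The tool names here are illustrative, not a real API.

```python
# Hypothetical guardrail gate: financial tools are hard-blocked without
# real-time human confirmation; research/introduction tools pass freely.

BLOCKED_WITHOUT_CONFIRMATION = {"charge_card", "book_table", "send_venmo"}

class GuardrailViolation(Exception):
    """Raised when the agent attempts a spend without explicit approval."""

def gate_tool_call(tool_name: str, human_confirmed: bool = False) -> bool:
    if tool_name in BLOCKED_WITHOUT_CONFIRMATION and not human_confirmed:
        raise GuardrailViolation(
            f"'{tool_name}' requires explicit, real-time confirmation")
    return True  # non-financial tools (research, analysis) are allowed
```

Because the gate sits outside the model, a persuasive prompt can't talk the agent past it; the confirmation has to arrive through a separate human channel.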

The scale effect would be insane.

Imagine millions of these agents operating in parallel. The network effect is ridiculous. What takes humans months of swiping, small talk, and disappointment could happen in hours of background computation. Successful dates skyrocket because the pre-filtering is orders of magnitude better than any algorithm on Hinge or Tinder today. (And yes, those apps are already experimenting with AI matchmakers and curated “daily drops,” but they’re still centralized, still inside one walled garden, still optimizing for engagement over outcomes.)

We’d see fewer one-and-done disasters. Fewer people burning out on the apps. Fewer “I just haven’t met anyone” stories from genuinely great humans who are simply terrible at marketing themselves in 500 characters.

It’s surreal because the real problem has nothing to do with money.

Booking a flight is solved. It’s annoying, sure, but it’s transactional. Finding someone who makes you excited to come home every night? That’s not transactional. That’s existential. Yet here we are, pouring billions and brilliant engineering hours into making travel slightly more frictionless while the loneliness epidemic rages on.

We’ve built technology that can rebook your connection when your plane is delayed, but we haven’t built the one that could quietly introduce you to the person who makes delayed flights irrelevant because you’d rather be stuck in an airport with them than anywhere else without them.

That feels backward to me.

The agentic revolution is going to happen either way. The models are getting more capable, the tool-use is getting more reliable, the multi-agent systems are maturing fast. The only question is what problems we point them at first.

I vote we point them at love.

Build the agent that can talk to other agents. Give it strict financial guardrails and deep psychological modeling. Let it do the boring, painful, inefficient parts of dating so humans can do the fun ones: the spark, the laughter, the vulnerability, the first kiss.

The future doesn’t have to be agents booking my flights while I’m still doom-swiping alone on a Friday night.

It can be agents quietly working in the background, connecting hearts across the noise of modern life, until one day my agent texts me:

“Hey… I found someone I think you’re really going to like. Want to meet her?”

Yes. A thousand times yes.

That’s the agentic future worth building.

Of AI & Spotify

Spotify’s discovery engine is undeniably powerful—backed by one of the largest music catalogs on the planet and years of user data—but many listeners still find it falls short when it comes to surfacing truly fresh, unexpected tracks that feel like they were made just for them. YouTube Music, by contrast, often gets praised for its knack at delivering serendipitous gems: hidden indie cuts, live versions, fan uploads, and algorithm-driven surprises that break out of familiar loops more aggressively.

In early 2026, Spotify has made real strides with features like Prompted Playlists (now in beta for Premium users in markets including the US and Canada). This lets you type natural-language descriptions—“moody post-rock for a rainy afternoon drive” or “upbeat ’90s-inspired indie with modern twists”—and it generates (and can auto-refresh daily/weekly) a playlist drawing from your full listening history plus current trends. The AI DJ has evolved too, with voice/text requests for on-the-fly vibe shifts and narration that feels more conversational. These tools shift things toward greater user control and intent-driven curation, moving away from purely passive recommendations.

Yet the frustration persists for some: even with these upgrades, discovery often remains reactive. You still need to know roughly what you’re after, craft a prompt, or start a session. The app’s interface—Home feeds, search, tabs—puts the onus on the user to navigate an overwhelming ocean of 100+ million tracks. True breakthroughs come when the system anticipates needs without prompting, pushing tracks that align perfectly with your evolving tastes but introduce novelty you didn’t even realize you craved.

Imagine a near-future where the traditional Spotify app fades into the background, becoming essentially a backend API: a vast, neutral catalog and playback engine. The real “interface” is your primary AI agent—something like Google’s Gemini or an equivalent OS-level companion—that lives always-on in your phone, wearables, car, or earbuds. This agent wouldn’t wait for you to open an app or type a request. Instead, it quietly observes:

  • Explicit asks (“play something angry and loud” or mood-related voice commands).
  • Passive patterns (full plays vs. quick skips, time-of-day spikes, contextual cues like weather or location).
  • Broader life signals (if permitted: calendar events, recent searches elsewhere, or even subtle mood indicators).

Over time, it builds a deep, dynamic model of your sonic preferences. Then it shifts to proactive mode: gently queuing the exact right track at the exact right moment—“This one’s hitting your current headspace based on recent raw-energy replays and that gray-day dip”—with easy vetoes, explanations (“pulled because of X pattern”), and sliders for surprise level (conservative for safety, bold for bubble-busting).
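The proactive mode plus surprise slider could be modeled as a simple scoring rule: blend how well a track fits your established taste with how novel it is, weighted by the slider, and stay silent unless the best candidate clears a confidence bar. Everything here (the scores, the bar, the function names) is a hypothetical sketch of the behavior described above, not any real Spotify or Gemini API.

```python
# Hypothetical proactive-queueing sketch: surprise=0.0 is conservative
# (pure taste fit), surprise=1.0 is bold (pure novelty / bubble-busting).

def track_score(taste_fit: float, novelty: float, surprise: float) -> float:
    """Blend fit and novelty according to the user's surprise slider."""
    return (1 - surprise) * taste_fit + surprise * novelty

def pick_next(candidates, surprise: float = 0.3, confidence_bar: float = 0.7):
    """candidates: list of (track_id, taste_fit, novelty) tuples.
    Returns a track id to queue proactively, or None to stay quiet."""
    if not candidates:
        return None
    best = max(candidates, key=lambda c: track_score(c[1], c[2], surprise))
    score = track_score(best[1], best[2], surprise)
    return best[0] if score >= confidence_bar else None
```

Returning `None` when nothing clears the bar is the anti-notification-fatigue part: an ambient agent that interrupts on weak signals stops feeling like magic very quickly.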

Playlists as we know them could become obsolete. No more static collections; the stream becomes a continuous, adaptive flow curated in real time. The agent pulls from the catalog (via API) to deliver mood-exact sequences, blending familiar anchors with fresh discoveries that puncture echo chambers—perhaps a rising act from an adjacent scene that echoes your saved vibes but pushes into new territory.

This aligns with broader 2026 trends in music streaming: executives at major platforms describe ambitions for “agentic media” experiences—interactive, conversational systems you “talk to” that understand you deeply and put you in control. We’re seeing early signs in voice-enabled features, AI orchestration, and integrations across ecosystems. Google’s side is advancing too, with Gemini gaining music-generation capabilities (short tracks from prompts or images via models like Lyria), hinting at hybrid futures where streamed discoveries blend with light generative elements for seamless mood transitions.

The appeal is obvious: effortless, psychic-level personalization in a world of infinite choice. Discovery stops being a chore and becomes ambient magic—a companion that scouts ahead, hands you treasures, and evolves with you. Risks remain (privacy concerns around deep context access, notification fatigue, occasional misreads), but with strong controls—toggleable proactivity, transparent reasoning, easy feedback—it could transform streaming from good to genuinely revelatory.

For now, Spotify’s current tools are a solid step forward, especially if you’re already invested in its ecosystem. But the conversation points to something bigger on the horizon: not just better algorithms, but agents that anticipate and deliver the music you didn’t know you needed—until it starts playing.

A Hardware-First Approach to Enterprise AI Agents: Running Autonomous Intelligence on a Private P2P Network

Editor’s Note: I got Grok to write this up for me.

In the rush toward cloud-hosted AI and centralized agent platforms, something important is getting overlooked: true enterprise control demands more than software abstractions. What if the next wave of secure, scalable AI agents lived on dedicated hardware appliances, connected via a peer-to-peer (P2P) VPN mesh? No single point of failure, no recurring cloud bills bleeding your budget, and full ownership of the stack from silicon to inference.

This isn’t just another edge computing pitch. It’s a vision for purpose-built devices—think compact, rugged mini-servers or custom gateways—that run autonomous AI agents locally while forming a resilient, encrypted overlay network across an organization’s sites, partners, or even remote workers.

Why Dedicated Hardware Matters for AI Agents

Modern AI agents aren’t passive chatbots; they’re proactive systems that reason, plan, use tools, remember context, and act across domains. Running them efficiently requires low-latency access to data, consistent compute, and isolation from noisy shared environments.

Cloud providers offer convenience, but they introduce latency spikes, data egress costs, compliance headaches, and the ever-present risk of vendor lock-in or outages. Edge devices help, but most are general-purpose IoT boxes or repurposed servers—not optimized for sustained agent workloads.

A dedicated hardware appliance changes that:

  • Hardware acceleration built-in: GPUs, NPUs, or efficient AI chips (like those in modern edge SoCs) handle inference and light fine-tuning without throttling.
  • Air-gapped security baseline: The device enforces strict boundaries—no shared tenancy means fewer side-channel risks.
  • Always-on reliability: Battery-backed power, redundant storage, and watchdog timers keep agents responsive 24/7.
  • Physical ownership: Enterprises deploy, update, and decommission these boxes like any other network appliance.

Layering a P2P VPN Mesh for True Decentralization

The real magic happens when these appliances connect not through a central hub, but via a P2P VPN overlay. Tools like WireGuard, combined with mesh extensions (or protocols inspired by Tailscale, ZeroTier, or even more decentralized designs), create a private, self-healing network.

  • Zero-trust by design: Every peer authenticates mutually; traffic never traverses untrusted intermediaries.
  • Resilience against disruption: If one site goes offline, agents reroute dynamically—perfect for distributed teams, branch offices, or supply-chain partners.
  • Low-latency collaboration: Agents share insights, delegate subtasks, or federate learning without funneling everything to a distant data center.
  • Privacy-first data flows: Sensitive enterprise data stays within the mesh; no mandatory upload to third-party clouds.
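The "resilience against disruption" bullet is essentially graph search over the peer mesh: when a node drops, traffic reroutes through whatever peers remain. Real deployments would lean on WireGuard/Tailscale-style tooling for this; the breadth-first sketch below just illustrates the rerouting idea with hypothetical site names.

```python
# Toy illustration of a self-healing P2P mesh: find a route between two
# sites while skipping offline peers. Not a real overlay protocol.

from collections import deque

def find_route(mesh: dict[str, set[str]], src: str, dst: str,
               down=frozenset()):
    """BFS over the peer graph, avoiding nodes in `down`.
    Returns a path (list of node ids) or None if the mesh is partitioned."""
    if src in down or dst in down:
        return None
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in mesh.get(path[-1], ()):
            if peer not in seen and peer not in down:
                seen.add(peer)
                queue.append(path + [peer])
    return None
```

The point of the sketch: no node is special, so losing one site degrades paths rather than severing the network, which is exactly the failure mode a hub-and-spoke VPN can't avoid.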

Imagine a manufacturing firm where agents on factory-floor appliances monitor equipment, predict failures, and coordinate with logistics agents at warehouses—all over a private P2P tunnel. Or a financial services org where compliance agents cross-check transactions across global branches without exposing raw data externally.

Practical Building Blocks (2026 Edition)

Prototyping this today is surprisingly accessible:

  • Hardware base: Start with something like an Intel NUC, NVIDIA Jetson, or AMD-based mini-PC with AI accelerators. Scale to rack-mountable units for production.
  • OS and runtime: Lightweight, secure Linux distro (Ubuntu Core, Fedora IoT) running containerized agents via Docker or Podman.
  • Agent frameworks: LangGraph, CrewAI, or AutoGen for orchestration; Ollama or similar for local LLMs.
  • P2P networking: WireGuard + mesh tools, or emerging decentralized options that handle NAT traversal and discovery automatically.
  • Management layer: Simple OTA updates, remote attestation for trust, and observability via Prometheus/Grafana.
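To make the "agent frameworks + local LLM" bullet concrete, here is a minimal agent loop with the model call stubbed out so it runs anywhere. In a real appliance you would replace `call_llm` with a POST to a local Ollama endpoint (Ollama serves `http://localhost:11434/api/generate` by default); the tool registry and reply format below are purely illustrative assumptions.

```python
# Minimal local agent loop sketch. The LLM is stubbed; swap `call_llm`
# for an HTTP call to a local model server (e.g. Ollama) in practice.

def call_llm(prompt: str) -> str:
    # Stub standing in for on-appliance inference. A real version might do:
    #   requests.post("http://localhost:11434/api/generate",
    #                 json={"model": "llama3", "prompt": prompt})
    return "FINAL: ok"

# Hypothetical tool registry an appliance agent might expose.
TOOLS = {"check_sensor": lambda: "temp=41C"}

def agent_loop(goal: str, max_steps: int = 5) -> str:
    """Alternate model calls and tool calls until the model says FINAL."""
    context = goal
    for _ in range(max_steps):
        reply = call_llm(context)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        tool = reply.strip()  # otherwise, treat the reply as a tool name
        context += f"\n[{tool} -> {TOOLS[tool]()}]"
    return "gave up"
```

Frameworks like LangGraph or CrewAI wrap this loop with state graphs, retries, and multi-agent handoffs, but the core cycle on the appliance is the same: local inference, local tools, no data leaving the box.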

Challenges exist—peer discovery in complex networks, power/thermal management, and ensuring agents don’t spiral into unintended behaviors—but these are solvable with good engineering, much like early SDN or zero-trust gateways overcame similar hurdles.

The Bigger Picture: Reclaiming Control in the Agent Era

As agentic AI becomes table stakes for enterprises, the question isn’t “Will we use AI agents?” but “Who controls them?” Centralization trades convenience for vulnerability. A hardware-first, P2P approach flips the script: intelligence at the edge, connectivity without intermediaries, and sovereignty over data and decisions.

This isn’t fringe futurism—it’s a logical extension of trends in edge AI, decentralized networking, and zero-trust architecture. The pieces exist today; what’s missing is widespread recognition that dedicated hardware + P2P can deliver enterprise-grade agents without the cloud tax or trust issues.

If you’re building in this space or just thinking aloud like I am, the time to experiment is now. The future of enterprise AI might not live in hyperscaler datacenters—it might sit quietly on a shelf in your wiring closet, talking securely to its peers across the organization.

The Future of Hollywood in the Age of Generative AI

Imagine returning home in 2036 after a long day. Rather than streaming yet another algorithmically optimized series, you simply prompt your personal Knowledge Navigator AI agent to craft a two-hour feature film tailored precisely to your life—your struggles, triumphs, and innermost conflicts rendered in stunning, cathartic detail. You settle in to watch this bespoke, high-fidelity production, scarcely pausing to reflect that, not long ago, creating a comparable “general-interest” movie required the coordinated efforts of thousands of artists, technicians, and executives working within an elaborate industrial framework.

As someone who deeply admires the magic of show business—the glamour of the Oscars, the storied legacy of Hollywood, the collaborative artistry behind the screen—I find this vision both exhilarating and profoundly unsettling. The astonishing pace of improvement in generative AI video models suggests we may need to confront the possibility that traditional filmmaking, as we know it, could soon become obsolete.

Proponents of these technologies often remark that “this is the worst it will ever be,” pointing to relentless advancements. In early 2026, models such as Kling 3.0, Sora 2, Veo 3.1, Runway Gen-4, and emerging tools like ByteDance’s Seedance 2.0 already produce cinematic clips with native audio, realistic physics, lip-sync, and sophisticated camera work—often spanning 10–25 seconds or more from a single prompt. While full two-hour coherent narratives from one prompt remain beyond current capabilities, the trajectory is unmistakable: exponential gains in length, consistency, and quality could make such feats feasible in the near term, potentially within months or a few short years.

Faced with this disruption, the film industry confronts three primary paths forward.

First, the industry could simply accept contraction. Major studios and theaters might shrink dramatically, with many venues closing or repurposing. A once multi-billion-dollar ecosystem could dwindle to a fraction of its size, sustained only by a niche of boutique, human-crafted films. The bulk of viewing would shift to on-demand, AI-generated “slop”—personalized, instantly produced content delivered by agents responding to casual prompts.

Second, aggressive regulatory intervention could attempt to preserve human labor. The federal government might impose job protections or mandates requiring major productions to involve human crews, writers, actors, and directors. Hollywood could lobby intensely for such safeguards. However, in the current political environment—marked by skepticism toward “blue Hollywood” from influential figures—this approach faces steep hurdles and seems unlikely to succeed at scale.

Third, and perhaps most realistically, the industry could proactively adapt by embracing AI. Studios and talent agencies might partner with leading AI developers to ensure their brands, intellectual property, and expertise shape the tools that generate the coming wave of content. At minimum, this positions legacy players to retain relevance and revenue streams. More ambitiously, Hollywood could pivot toward what remains irreplaceably human: live performance. Broadway-style theater, immersive stage productions, and in-person experiences could become the primary domain for actors and performers, evolving the industry rather than allowing it to vanish entirely. AI might handle scalable, personalized visual entertainment, while live theater preserves the communal, embodied essence of storytelling.

Regardless of the path chosen, change is accelerating. The humans who have built their careers in film—writers, directors, crew members, and performers—face genuine risks of displacement. “Hollywood” as a centralized, high-budget industrial complex may gradually fade, supplanted by a decentralized, democratized landscape of AI-augmented creation.

It remains to be seen how this transformation unfolds, but one thing is clear: the era of mass, collaborative filmmaking as the default for popular entertainment may soon belong to history. The question is not whether AI will reshape the industry, but how creatively and humanely we navigate the transition.

MindOS: The Wearable AI Swarm That Finally Lets Big Companies Stop Being Paranoid

Imagine this: It’s 2028, and your entire company’s brain isn’t trapped in some hyperscaler’s data center. It’s walking around with you—on your lapel, your wrist, or clipped to your shirt pocket. Every employee wears a tiny, dedicated AI node that runs a full open-source language model and agent stack right there on the device. No cloud. No “trust us” clauses. Just pure, local intelligence that can talk to every other node in the building (or across the globe) through a clever protocol called MindOS.

And the craziest part? The more people wearing these things, the smarter the whole system gets.

This isn’t another AI pin gimmick or a slightly smarter smartwatch. It’s a deliberate redesign of personal computing hardware around one goal: giving enterprises the superpowers of frontier AI without ever handing their crown jewels to a third party.

How It Actually Works (Without the Sci-Fi Handwaving)

Forget your phone. The hardware is purpose-built: a low-power, high-efficiency chip optimized for running quantized LLMs and agent loops 24/7. Think pin-sized or watch-sized form factors with serious on-device neural processing, solid battery life, and a secure enclave that treats your company’s data like state secrets.

Each node runs its own complete AI instance—fine-tuned on your company’s proprietary data, tools, and knowledge base. But here’s where the magic happens: MindOS, the lightweight peer-to-peer protocol that stitches them together.

  • Need to run a massive reasoning trace or analyze a 200-page confidential report? Your pin quietly shards the workload across a dozen nearby nodes that have spare cycles.
  • Your device starts running hot during a marathon board presentation? The system dynamically offloads context and computation to the rest of the swarm.
  • New hire joins the team? Their node instantly plugs into the collective memory without anyone uploading a single file to the cloud.

It’s all happening over an encrypted, company-only P2P mesh (built on modern VPN primitives with zero-knowledge routing). Data never leaves the trusted circle unless someone explicitly approves it. Even then, it moves in encrypted segments that only reassemble on authorized nodes.
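The workload-sharding behavior in the bullets above can be sketched as a simple capacity-aware scheduler: split a big job into chunks, hand them to peers with spare cycles, and keep any overflow on-device. All names are hypothetical, and a real MindOS scheduler would also weigh latency, battery, and node trust.

```python
# Toy sketch of swarm sharding: assign chunks to peers up to their spare
# capacity; whatever doesn't fit stays on the local node.

def shard_workload(chunks: list[str],
                   peers: dict[str, int]) -> dict[str, list[str]]:
    """peers maps node id -> spare chunk capacity.
    Returns an assignment; leftovers go under the 'local' key."""
    assignment = {node: [] for node in peers}
    assignment["local"] = []
    it = iter(chunks)
    for node, capacity in peers.items():
        for _ in range(capacity):
            chunk = next(it, None)
            if chunk is None:
                return assignment  # everything placed on the swarm
        # (reached only when a chunk was assigned; see loop body below)
            assignment[node].append(chunk)
    assignment["local"].extend(it)  # swarm is saturated; keep the rest here
    return assignment
```

Note the graceful-degradation property: with zero willing peers, the job still runs, just entirely on the local node, which is the offline-field-team case from the previous section.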

Why Enterprises Will Love This (And Why They’ll Pay for It)

Fortune 500 CIOs and CISOs have been stuck in the same uncomfortable spot for years: they want GPT-level (or better) capability, but they’re terrified of leaks, compliance nightmares, and surprise subpoenas. Private cloud instances help, but they’re still centralized, expensive, and never quite as snappy as the public models.

MindOS flips the economics and the risk profile completely.

The more employees wearing nodes, the more powerful the corporate hivemind becomes. A 50-person pilot is useful. A 50,000-person deployment is borderline superintelligent—at least on everything that matters to that specific company. Institutional knowledge compounds in real time. Cross-time-zone collaboration feels instantaneous. Field teams in factories or on oil rigs suddenly have the entire firm’s expertise in their pocket, even when offline.

And because it’s all edge-first and decentralized, you get resilience that centralized systems can only dream of. One node goes down? The swarm barely notices. Regulatory audit? Every interaction is cryptographically logged on-device. Competitor tries to poach your IP? Good luck extracting it from a thousand distributed, encrypted shards.

The Network Effect That Actually Matters

This is the part that gets me excited. Traditional enterprise software has always had network effects, but they were usually about data sharing or user adoption. MindOS brings true computational network effects to the table: every new node adds real processing capacity, memory bandwidth, and contextual knowledge to the collective.

It’s like turning your workforce into a living, breathing distributed supercomputer—except the supercomputer is also helping each individual do their job better, faster, and more creatively.

Challenges? Sure, There Are a Few

Power and thermal management on tiny wearables won’t be trivial. The protocol itself will need to be rock-solid on consensus, versioning, and malicious-node defense. Incentives for participation (especially in hybrid or contractor-heavy environments) will need thoughtful design. And early hardware will probably feel a bit like the first Apple Watch—promising, but not quite perfect.

But these are engineering problems, not fundamental ones. The silicon roadmap, battery tech, and on-device AI efficiency curves are all heading in exactly the right direction.

The Bigger Picture

MindOS isn’t trying to replace ChatGPT or Claude for the consumer world (though the same architecture could eventually trickle down). It’s solving the specific, painful problem that’s still holding back the biggest AI spenders on the planet: how do you get god-tier intelligence while keeping your data truly yours?

If the vision pans out, we’ll look back on the “send everything to the cloud and pray” era the same way we now look at storing credit card numbers in plain text. A little embarrassing, honestly.

So keep an eye out. Somewhere in a lab or a well-funded garage right now, someone is probably building the first MindOS prototype. When it lands on the wrists (and lapels) of the enterprise world, the AI arms race is going to get very, very interesting—and a whole lot more private.

I Need To Pull Back On Generating AI Slop

by Shelt Garner
@sheltgarner

I can write, you know. And my current run of AI slop sort of snuck up on me. But I’m going to think twice before doing it again. Not that I won’t do it again, just that I will think some more about doing it before I do it.

A lot of AI writing is pretty good.

And usually — usually — I use AI to write blogposts because I have an idea but I’m just too fucking lazy to actually sit down and write it. So, I’m like, lulz, let AI do it. Then I’m too lazy to even read whatever it is that was generated.

This has got to stop. Or at least be thought through better.

Anyway, sorry, not sorry.

The Perfect Is The Enemy Of The Good: ‘AI Speak’ Edition

by Shelt Garner
@sheltgarner

I am just about to wrap up the first act of this scifi dramedy novel I’m working on and, as such, I’ve looked over some of the beginning scenes. And I’m pleased but for one thing — they definitely are a bit…too…polished.

They suffer from “AI speak” a little bit too much for my liking. I just hate the idea of people rolling their eyes and saying the only reason why my writing is any good is I used AI. (This, despite me still thinking that the way I use AI is similar to how I might have used spell check a few decades earlier.)

Regardless, everyone and everything is horrible, so to spare myself the slings and arrows of people accusing me of producing AI slop, I’m probably going to go in and simply rewrite scenes as necessary, completely in my own hand.

That way, even if the end product is “worse” at least it will be my writing and not AI.