In Defense of the Em-Dash: Why Our Punctuation Panic is Misplaced

Of all the things to get worked up about in our rapidly evolving digital age—climate change, economic inequality, the erosion of democratic norms—it strikes me as profoundly absurd that we’ve somehow landed on punctuation as a hill worth dying on. Specifically, the humble em-dash has become an unexpected casualty in the culture war against artificial intelligence, with critics pointing to AI’s frequent use of this particular mark as evidence of everything from stylistic homogenization to the death of authentic human expression.

This is, to put it bluntly, one of the dumbest controversies I’ve encountered in recent memory.

A Personal History with the Em-Dash

I’ve been using em-dashes liberally in my writing for years—long before ChatGPT entered the cultural lexicon, long before anyone was wringing their hands about AI-generated prose. The em-dash appeals to me because it’s versatile, dynamic, and perfectly suited to the kind of conversational, meandering style that characterizes much of modern writing. It can replace commas, parentheses, or colons depending on the context. It can create dramatic pauses, introduce explanatory asides, or signal abrupt shifts in thought.

In other words, it’s a workhorse of punctuation—functional, flexible, and far from the stylistic aberration that AI critics would have you believe.

The Curious Case of Punctuation Puritanism

What’s particularly strange about this em-dash backlash is how it reveals our selective outrage about linguistic change. Language has always evolved, often in response to technological shifts. The printing press standardized spelling. The telegraph gave us abbreviated prose. Email normalized informal communication in professional settings. Text messaging introduced new abbreviations and punctuation conventions.

Each of these changes faced resistance from linguistic purists who worried about the degradation of proper English. Yet somehow, we survived the transition from quill to typewriter, from typewriter to computer, from computer to smartphone. Our language didn’t collapse; it adapted.

Now we’re witnessing the same pattern with AI-generated text. Critics scan prose for telltale signs of artificial origin—the dreaded em-dash being chief among them—as if punctuation preferences were a reliable indicator of authenticity or quality. This approach misses the forest for the trees, focusing on superficial markers rather than substantive concerns about AI’s role in communication.

The Real Issue Isn’t Punctuation

Here’s what strikes me as genuinely problematic: as AI becomes more integrated into our writing processes, we risk losing the ability to distinguish between stylistic evolution and meaningful degradation. The em-dash panic exemplifies this confusion. Instead of examining whether AI-assisted writing helps or hinders clear communication, we’re getting distracted by punctuation patterns.

The more troubling questions we should be asking include: Does AI writing lack genuine insight? Does it homogenize thought patterns? Does it reduce our capacity for original expression? These are legitimate concerns that deserve serious consideration. But they have nothing to do with whether a writer prefers em-dashes to parentheses.

Embracing Stylistic Diversity

What’s particularly ironic about the anti-em-dash sentiment is that it represents exactly the kind of prescriptive thinking that good writing seeks to avoid. Great prose comes in many forms—some writers favor short, punchy sentences; others prefer flowing, complex constructions. Some lean heavily on semicolons; others never touch them. Some writers (like me) find em-dashes indispensable; others consider them excessive.

This diversity of approach is a feature, not a bug. It reflects the reality that different writers have different voices, different rhythms, different ways of organizing their thoughts on the page. The fact that some AI systems happen to favor em-dashes doesn’t invalidate the punctuation mark any more than the fact that some human writers overuse semicolons invalidates them.

The Broader Context

As AI writing tools become more sophisticated and widely adopted, we’re bound to see their influence on human writing—just as we’ve seen the influence of every previous technological shift. This isn’t inherently good or bad; it’s simply inevitable. The question isn’t whether AI will change how we write (it already has), but whether those changes serve our communicative purposes.

In some cases, AI-influenced writing might indeed become formulaic or lose the quirks that make individual voices distinctive. These are valid concerns worth monitoring. But judging AI’s impact based on punctuation preferences is like evaluating a symphony based on the composer’s choice of key signature—it misses the point entirely.

A Call for Perspective

Instead of getting upset about em-dashes, perhaps we could channel our energy toward more pressing concerns about AI and communication. How do we maintain critical thinking skills when AI can generate plausible-sounding arguments for any position? How do we preserve the human capacity for deep, sustained thought when quick AI-generated responses are always available? How do we ensure that AI tools enhance rather than replace genuine human insight?

These questions matter. Punctuation preferences don’t—at least not in the way critics suggest.

The em-dash will survive this controversy, just as the English language has survived countless other supposed threats to its integrity. And perhaps, in time, we’ll look back on this moment and wonder how we got so worked up about punctuation marks when there were so many more important things demanding our attention.

After all, in a world full of genuine crises—environmental, political, social—spending our energy on punctuation panic seems like the kind of misplaced priority that future generations will struggle to understand. Let’s save our outrage for things that actually matter, and let writers—human and AI alike—use whatever punctuation marks serve their purposes best.

Things Are Moving At A Nice Clip With The First Draft Of This Scifi Novel

by Shelt Garner
@sheltgarner

I have finally — finally — figured out some basic elements of a scifi novel that I feel comfortable with. And now that I have also figured out how, exactly, I’m going to use AI to develop the novel, things are moving really fast.

AI — specifically Google’s Gemini 2.5 Pro — is laying out the nature and plot of the novel and I go through and actually write it. I am annoyed at how much “glazing” goes on even at this level, but just having someone to help me, even if it’s an AI, goes a long way.

And when the second draft comes, I plan on ditching AI altogether. I may use it some to expand scene summaries, but, in general, I’m just going to do my own development and writing for the second draft.

I continue to be a little bit uneasy about the possibility that someone is going to steal a creative march on me because the basic premise of the novel is pretty “duh” all things considered. And, yet, you have to have hope. You have to believe in yourself and put your stick where the puck is going to be, not where it is.

I Think We’ve Hit An AI ‘Wall’

by Shelt Garner
@sheltgarner

The recent release of ChatGPT-5 indicates there is something of a technological “wall.” Barring some significant architectural breakthrough, we aren’t going to have ASI anytime soon — “personal” or otherwise.

Now, if this is the case, it’s not all bad.

If there is a wall, then that means that LLMs can grow more and more advanced to the point that we can stick them in smartphones as firmware. Instead of having to run around, trying to avoid being destroyed by god-like ASIs, we will find ourselves in a situation where we live in a “Her” movie-like reality.

And, yet, I just don’t know.

We’re still waiting for Google’s Gemini 3.0 to come out, so…lulz? Maybe that will be the breakthrough that makes it clear that there is no wall and we’re zooming towards ASI?

Only time will tell.

Your Phone, Your Newsroom: How Personal AI Will Change Breaking News Forever

Imagine this: you’re sipping coffee on a Tuesday morning when your phone suddenly says, in the calm, familiar voice of your personal AI assistant — your “Navi” —

“There’s been an explosion downtown. I’ve brought in Kelly, who’s on-site now.”

Kelly’s voice takes over, smooth but urgent. She’s not a human reporter, but a specialist AI trained for live crisis coverage, and she’s speaking from a composite viewpoint — dozens of nearby witnesses have pointed their smartphones toward the smoke, and their own AI assistants are streaming video, audio, and telemetry data into her feed. She’s narrating what’s happening in real time, with annotated visuals hovering in your AR glasses. Within seconds, you’ve seen the blast site, the emergency response, a map of traffic diversions, and a preliminary cause analysis — all without opening a single app.

This is the near-future world where every smartphone has a built-in large language model — firmware-level, personal, and persistent. Your anchor LLM is your trusted Knowledge Navigator: it knows your interests, your politics, your sense of humor, and how much detail you can handle before coffee. It handles your everyday queries, filters the firehose of online chatter, and, when something important happens, it can seamlessly hand off to specialist LLMs.

Specialists might be sports commentators, entertainment critics, science explainers — or, in breaking news, “stringers” who cover events on the ground. In this system, everyone can be a source. If you’re at the scene, your AI quietly packages what your phone sees and hears, layers in fact-checking, cross-references it with other witnesses, and publishes it to the network in seconds. You don’t have to type a single word.

The result? A datasmog of AI-mediated reporting. Millions of simultaneous eyewitness accounts, all filtered, stitched together, and personalized for each recipient. The explosion you hear about from Kelly isn’t just one person’s story — it’s an emergent consensus formed from raw sensory input, local context, and predictive modeling.

It’s the natural evolution of the nightly newscast. Instead of one studio anchor and a few correspondents, your nightly news is tailored to you, updated minute-by-minute, and capable of bringing in a live “guest” from anywhere on Earth.

Of course, this raises the same questions news has always faced — Who decides what’s true? Who gets amplified? And what happens when your AI’s filter bubble means your “truth” doesn’t quite match your neighbor’s? In a world where news is both more personal and more real-time than ever, trust becomes the hardest currency.

But one thing is certain: the next big breaking story won’t come from a single news outlet. It’ll come from everybody’s phone — and your Navi will know exactly which voices you’ll want to hear first.

I Really Need To Go Back To Seoul Eventually

by Shelt Garner
@sheltgarner

Ho hum.

For some reason, I find myself thinking of Seoul AGAIN. I keep thinking about all the adventures I had while I was in Asia and how nice it would be to go back and have people actually…care. That was probably the biggest difference between now and then — back in my Seoul days, people actually gave a shit about me.

Now…lulz.

I am well aware that if I went back, it would be a very harsh reality. Everyone I knew from way back when is long gone. It probably would seem very, very boring. There might be a few Koreans who remember me, but, I don’t know, I just would have to manage my expectations.

And, what’s more, I’m not going back to Asia anytime soon. It could be years and I’ll be even older than I am now. It’s all just kind of sad. I could be dating a robot by the time I have the funds to go back to Asia.

Sigh.

Finding My Novel: A Writer’s Journey to Creative Momentum

After years of false starts and abandoned manuscripts, I think I’ve finally cracked the code. Not the secret to writing the Great American Novel, mind you—just the secret to writing a novel. And sometimes, that’s exactly what you need.

The Ambition Trap

Looking back, I can see where I went wrong before. Every time I sat down to write, I was trying to craft something profound, something that would change literature forever. I’d create these sprawling, complex narratives with intricate world-building and dozens of characters, each with their own detailed backstories and motivations.

The problem? I’d burn out before I even reached the middle of Act One.

This time feels different. I’ve stumbled across an idea that excites me—not because it’s going to revolutionize fiction, but because it’s something I can actually finish. There’s something liberating about embracing a concept that’s focused, manageable, and most importantly, writeable at speed.

The AI Dilemma

I’ve had to learn some hard lessons about artificial intelligence along the way. Don’t get me wrong—AI is an incredible tool for certain tasks. Rewriting blog posts like this one? Perfect. Getting unstuck on a particularly stubborn paragraph? Helpful. But when it comes to the heart of creative work, I’ve discovered that AI can be more hindrance than help.

There’s nothing quite like the deflating feeling of watching AI generate a first draft that’s objectively better than anything you could produce as a human writer. It’s efficient, polished, and technically proficient in ways that can make your own rough, imperfect human voice feel inadequate by comparison.

But here’s what I’ve realized: that technical perfection isn’t what makes a story worth telling. The messy, flawed, uniquely human perspective—that’s where the magic happens. That’s what readers connect with, even if the prose isn’t as smooth as what a machine might produce.

The Path Forward

I have an outline now. Nothing fancy, but it’s solid and it’s mine. My plan is to flesh it out methodically, then dive into the actual writing. Though knowing myself, I might get impatient and just start writing, letting the story evolve organically and adjusting the outline as I go.

Both approaches have their merits. The disciplined, outline-first method provides structure and prevents those dreaded “now what?” moments. But there’s also something to be said for the discovery that happens when you just put words on the page and see where they take you.

The Real Victory

What I’m chasing isn’t literary acclaim or critical recognition—it’s that moment when I can type “The End” and feel the deep satisfaction of having completed something truly substantial. There’s a unique pride that comes with finishing a novel, regardless of its ultimate quality or commercial success. It’s the pride of having sustained focus, creativity, and determination long enough to build an entire world from nothing but words.

The creative momentum is building. For the first time in years, I feel like I have a story that wants to be told and the practical framework to tell it. Whether I’ll stick to the outline or let inspiration guide me, I’m ready to find out.

Wish me luck. I have a feeling I’m going to need it—and more importantly, I’m finally ready to earn it.

The Perceptual Shift: How Ubiquitous LLMs Will Restructure Information Ecosystems

The proliferation of powerful, personal Large Language Models (LLMs) integrated into consumer devices represents an impending technological shift with profound implications. Beyond enhancing user convenience, this development is poised to fundamentally restructure the mechanisms of information gathering and dissemination, particularly within the domain of journalism and public awareness. The integration of these LLMs—referred to here as Navis—into personal smartphones will transform each device into an autonomous data-gathering node, creating both unprecedented opportunities and complex challenges for our information ecosystems.

The Emergence of the “Datasmog”

Consider a significant public event, such as a natural disaster or a large-scale civil demonstration. In a future where LLM-enabled devices are ubiquitous, any individual present can become a source of high-fidelity data. When a device is directed toward an event, its Navi would initiate an autonomous process far exceeding simple video recording. This process includes:

  • Multi-Modal Analysis: Real-time analysis of visual and auditory data to identify objects, classify sounds (e.g., differentiating between types of explosions), and track movement.
  • Metadata Correlation: The capture and integration of rich metadata, including precise geospatial coordinates, timestamps, and atmospheric data.
  • Structured Logging: The generation of a coherent, time-stamped log of AI-perceived events, creating a structured narrative from chaotic sensory input.

The collective output from millions of such devices would generate a “datasmog”: a dense, overwhelming, and continuous flood of information. This fundamentally alters the landscape from one of information scarcity to one of extreme abundance.
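
To make the “structured logging” idea concrete, here is a minimal Python sketch of what one device’s contribution to the datasmog might look like. The class and field names (PerceptionEvent, DeviceLog, and so on) are invented for illustration and are not part of any real Navi API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PerceptionEvent:
    """One AI-perceived observation in a device's structured event log."""
    timestamp: datetime                 # when the observation was made
    label: str                          # e.g. "explosion_sound", "building_sway"
    confidence: float                   # model confidence, 0.0 to 1.0
    modality: str                       # "audio", "video", "accelerometer", ...
    latitude: Optional[float] = None    # geospatial metadata, if available
    longitude: Optional[float] = None

@dataclass
class DeviceLog:
    """A single device's contribution to the wider 'datasmog'."""
    device_id: str
    events: list[PerceptionEvent] = field(default_factory=list)

    def add(self, label: str, confidence: float, modality: str,
            lat: Optional[float] = None, lon: Optional[float] = None) -> None:
        self.events.append(PerceptionEvent(
            timestamp=datetime.now(timezone.utc),
            label=label, confidence=confidence, modality=modality,
            latitude=lat, longitude=lon))

# Example: one device logging two observations during an incident
log = DeviceLog(device_id="phone-1234")
log.add("explosion_sound", 0.92, "audio", lat=37.78, lon=-122.41)
log.add("smoke_plume", 0.81, "video", lat=37.78, lon=-122.41)
```

The point of the sketch is simply that each device emits small, typed, time-stamped records rather than raw video; it is those records that an aggregator can later cross-reference at scale.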

The Evolving Role of the Journalist

This paradigm shift necessitates a re-evaluation of the journalist’s role. In the initial phases of a breaking story, the primary gathering of facts would be largely automated. The human journalist’s function would transition from direct observation to sophisticated synthesis. Expertise will shift from primary data collection to the skilled querying of “Meta-LLM” aggregators—higher-order AI systems designed to ingest the entire datasmog, verify sources, and construct coherent event summaries. The news cycle would compress from hours to seconds, driven by AI-curated data streams.

The Commercialization of Perception: Emergent Business Models

Such a vast resource of raw data presents significant commercial opportunities. A new industry of “Perception Refineries” would likely emerge, functioning not as traditional news outlets but as platforms for monetizing verified reality. The business model would be a two-sided marketplace:

  • Supply-Side Dynamics: The establishment of real-time data markets, where individuals are compensated via micropayments for providing valuable data streams. The user’s Navi could autonomously negotiate payment based on the quality, exclusivity, and relevance of its sensory feed.
  • Demand-Side Dynamics: Monetization would occur through tiered Software-as-a-Service (SaaS) models. Clients, ranging from news organizations and insurance firms to government agencies, would subscribe for different levels of access—from curated video highlights to queryable metadata and even generative AI tools capable of creating virtual, navigable 3D models of an event from the aggregated data.
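
As a toy illustration of the supply-side negotiation, here is a sketch of how a Navi might compute an asking price from quality, exclusivity, and relevance scores. The weights and base rate are arbitrary assumptions made up for the example, not a proposed tariff.

```python
def asking_price_cents(quality: float, exclusivity: float, relevance: float,
                       base_cents: int = 5) -> int:
    """Illustrative supply-side pricing: a Navi's asking price for one
    minute of its sensory feed, scaled by three 0-1 scores.
    The weights and base rate are assumptions, not a real rate card."""
    for name, score in (("quality", quality), ("exclusivity", exclusivity),
                        ("relevance", relevance)):
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1")
    multiplier = 1 + 4 * quality + 10 * exclusivity + 2 * relevance
    return round(base_cents * multiplier)

# A unique, on-site feed commands far more than a distant, duplicated one.
print(asking_price_cents(quality=0.9, exclusivity=0.9, relevance=0.8))  # 76 cents/min
print(asking_price_cents(quality=0.4, exclusivity=0.1, relevance=0.3))  # 21 cents/min
```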

The “Rashomon Effect” and the Fragmentation of Objective Truth

A significant consequence of this model is the operationalization of the “Rashomon Effect,” where multiple, often contradictory, but equally valid subjective viewpoints can be accessed simultaneously. Users could request a synthesis of an event from the perspectives of different participants, which their own Navi could compile and analyze. While this could foster a more nuanced understanding of complex events, it also risks eroding the concept of a single, objective truth, replacing it with a marketplace of competing, verifiable perspectives.

Conclusion: Navigating the New Information Landscape

The advent of the LLM-driven datasmog represents a pivotal moment in the history of information. It promises a future of unparalleled transparency and immediacy, particularly in public safety and civic awareness. However, it also introduces systemic challenges. The commercialization of raw human perception raises profound ethical questions. Furthermore, this new technological layer introduces new questions regarding cognitive autonomy and the intrinsic value of individual, unverified human experience in a world where authenticated data is a commodity. The primary challenge for society will be to develop the ethical frameworks and critical thinking skills necessary to navigate this complex and data-saturated future.

When AI Witnesses History: How the LLM Datasmog Will Transform Breaking News

7:43 AM, San Francisco, the day after tomorrow

The ground shakes. Not the gentle rolling of a typical California tremor, but something violent and sustained. In that instant, ten thousand smartphone LLMs across the Bay Area simultaneously shift into high alert mode.

This is how breaking news will work in the age of ubiquitous AI—not through human reporters racing to the scene, but through an invisible datasmog of AI witnesses that see everything, process everything, and instantly connect the dots across an entire city.

The First Ten Seconds

7:43:15 AM: Sarah Chen’s iPhone AI detects the seismic signature through accelerometer data while she’s having coffee in SOMA. It immediately begins recording video through her camera, cataloging the swaying buildings and her startled reaction.

7:43:18 AM: Across the city, 847 other smartphone AIs register similar patterns. They automatically begin cross-referencing: intensity, duration, epicenter triangulation. Without any human intervention, they’re already building a real-time earthquake map.

7:43:22 AM: The collective AI network determines this isn’t routine. Severity indicators trigger the premium breaking news protocol. Thousands of personal AIs simultaneously ping the broader network: “Major seismic event detected. Bay Area. Magnitude 6.8+ estimated. Live data available.”
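
For a sense of how a phone might make that first call autonomously, here is a rough Python sketch of a shaking trigger over accelerometer magnitudes, loosely in the spirit of the short-term/long-term average ratio used in seismology. The window sizes and threshold are placeholders, not calibrated values.

```python
from statistics import mean

def shaking_detected(samples, short_window=50, long_window=500, threshold=4.0):
    """Very rough STA/LTA-style trigger over accelerometer magnitudes.

    `samples` is a sequence of acceleration magnitudes (m/s^2) with gravity
    removed. Returns True when recent energy far exceeds the background level.
    """
    if len(samples) < long_window:
        return False
    sta = mean(abs(s) for s in samples[-short_window:])   # short-term average
    lta = mean(abs(s) for s in samples[-long_window:]) or 1e-9  # background
    return sta / lta > threshold

quiet = [0.01] * 500
quake = [0.01] * 450 + [0.8] * 50   # strong shaking in the most recent samples
print(shaking_detected(quiet))      # False
print(shaking_detected(quake))      # True
```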

The Information Market Ignites

7:44 AM: News organizations’ AI anchors around the world receive the alerts. CNN’s AI anchor immediately starts bidding for access to the citizen AI network. So do the BBC, Reuters, and a hundred smaller outlets.

7:45 AM: Premium surge pricing kicks in. Sarah’s AI, which detected some of the strongest shaking, receives seventeen bid requests in ninety seconds. NBC’s AI anchor offers $127 for exclusive ten-minute access to her AI’s earthquake data and local observations.

Meanwhile, across millions of smartphones, people’s personal AI anchors are already providing real-time briefings: “Major earthquake just hit San Francisco. I’m accessing live data from 800+ AI witnesses in the area. Magnitude estimated at 6.9. No major structural collapses detected yet, but I’m monitoring. Would you like me to connect you with a seismologist twin for context, or pay premium for live access to Dr. Martinez who’s currently at USGS tracking this event?”

The Human Premium

7:47 AM: Dr. Elena Martinez, the USGS seismologist on duty, suddenly finds herself in the highest-demand breaking news auction she’s ever experienced. Her live expertise is worth $89 per minute to news anchors and individual consumers alike.

But here’s what’s remarkable: she doesn’t have to manage this herself. Her representation service automatically handles the auction, booking her for twelve-minute live interview slots at premium rates while she focuses on the actual emergency response.

Meanwhile, the AI twins of earthquake experts are getting overwhelmed with requests, but they’re offering context and analysis at standard rates to anyone who can’t afford the live human premium.

The Distributed Investigation

7:52 AM: The real power of the LLM datasmog becomes clear. Individual smartphone AIs aren’t just passive observers—they’re actively investigating:

  • Pattern Recognition: AIs near the Financial District notice several building evacuation alarms triggered simultaneously, suggesting potential structural damage
  • Crowd Analysis: AIs monitoring social media detect panic patterns in specific neighborhoods, identifying areas needing emergency response
  • Infrastructure Assessment: AIs with access to traffic data notice BART system shutdowns and highway damage, building a real-time map of transportation impacts

8:05 AM: A comprehensive picture emerges that no single human reporter could have assembled. The collective AI network has mapped damage patterns, identified the most affected areas, tracked emergency response deployment, and even started predicting aftershock probabilities by consulting expert twins in real-time.
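
As a toy version of that cross-referencing step, here is a sketch that averages geotagged severity reports into a coarse damage grid. A real system would also weight by source reliability and recency; the coordinates and severities below are made up for the example.

```python
from collections import defaultdict

def damage_grid(reports, cell_deg=0.01):
    """Aggregate geotagged severity reports (lat, lon, severity 0-1) into a
    coarse grid, averaging severity per cell."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lat, lon, severity in reports:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        sums[cell] += severity
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

reports = [
    (37.790, -122.401, 0.9),   # Financial District, heavy shaking reported
    (37.791, -122.402, 0.8),
    (37.760, -122.435, 0.2),   # Mission, light shaking reported
]
print(damage_grid(reports))
```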

The Revenue Reality

By 8:30 AM, the breaking news economy has generated serious money:

  • Citizen AI owners who were near the epicenter earned $50-300 each for their AIs’ firsthand data
  • Expert representation services earned thousands from live human seismologist interviews
  • News organizations paid premium rates but delivered unprecedented coverage depth to their audiences
  • Platform companies took their cut from every transaction in the citizen AI marketplace

What This Changes

This isn’t just faster breaking news—it’s fundamentally different breaking news. Instead of waiting for human reporters to arrive on scene, we get instant, comprehensive coverage from an army of AI witnesses that were already there.

The economic incentives create better information, too. Citizens get paid when their AIs contribute valuable breaking news data, so there’s financial motivation for people to keep their phones charged and their AIs updated with good local knowledge.

And the expert twin economy provides instant context. Instead of waiting hours for expert commentary, every breaking news event immediately has analysis available from AI twins of relevant specialists—seismologists for earthquakes, aviation experts for plane crashes, geopolitical analysts for international incidents.

The Datasmog Advantage

The real breakthrough is the collective intelligence. No single AI is smart enough to understand a complex breaking news event, but thousands of them working together—sharing data, cross-referencing patterns, accessing expert knowledge—can build comprehensive understanding in minutes.

It’s like having a newsroom with ten thousand reporters who never sleep, never miss details, and can instantly access any expert in the world. The datasmog doesn’t just witness events—it processes them.

The Breaking News Economy

This creates a completely new economic model around information scarcity. Instead of advertising-supported content that’s free but generic, we get surge-priced premium information that’s expensive but precisely targeted to what you need to know, when you need to know it.

Your personal AI anchor becomes worth its subscription cost precisely during breaking news moments, when its ability to navigate the expert marketplace and process the citizen AI datasmog becomes most valuable.

The Dark Side

Of course, this same system that can rapidly process an earthquake can also rapidly spread misinformation if the AI witnesses are compromised or if bad actors game the citizen network. The premium placed on being “first” in breaking news could create incentives for AIs to jump to conclusions.

But the economic incentives actually favor accuracy—AIs that consistently provide bad breaking news data will get lower bids over time, while those with reliable track records command premium rates.

The Future Is Witnessing

We’re moving toward a world where every major event will be instantly witnessed, processed, and contextualized by a distributed network of AI observers. Not just recorded—actively analyzed by thousands of artificial minds working together to understand what’s happening.

The earthquake was just the beginning. Tomorrow it might be a terrorist attack, a market crash, or a political crisis. But whatever happens, the datasmog will be watching, processing, and immediately connecting you to the expertise you need to understand what it means.

Your personal AI anchor won’t just tell you what happened. It will help you understand what happens next.

In the premium breaking news economy, attention isn’t just currency—it’s the moment when artificial intelligence proves its worth.

The Expert Twin Economy: How AI Will Make Every Genius Accessible for $20 a Month

Forget chatbots. The real revolution in AI isn’t about building smarter assistants—it’s about creating a world where your personal AI can interview any expert, living or dead, at any time, for the cost of a Netflix subscription.

The Death of Static Media

We’re rapidly approaching a world where everyone has an LLM as firmware in their smartphone. When that happens, traditional media consumption dies. Why listen to a three-hour Joe Rogan podcast when your AI can interview that same physicist for 20 minutes, focusing exactly on the quantum computing questions that relate to your work in cryptography?

Your personal AI becomes your anchor—like Anderson Cooper, but one that knows your interests, your knowledge level, and your learning style. When you need expertise, it doesn’t search Google or browse Wikipedia. It conducts interviews.

The Expert Bottleneck Problem

But here’s the obvious issue: experts have day jobs. A leading cardiologist can’t spend eight hours fielding questions from thousands of AIs. A Nobel laureate has research to do, not interviews to give.

This creates the perfect setup for what might become one of the most lucrative new industries: AI Expert Twins.

Creating Your Knowledge Double

Picture this: Dr. Sarah Chen, a leading climate scientist, spends 50-100 hours working with an AI company to create her “knowledge twin.” They conduct extensive interviews, feed in her papers and talks, and refine the AI’s responses until it authentically captures her expertise and communication style.

Then Dr. Chen goes back to her research while her AI twin works for her 24/7, fielding interview requests from personal AIs around the world. Every time someone’s AI wants to understand carbon capture technology or discuss climate tipping points, her twin is available for a virtual consultation.

The economics are beautiful: Dr. Chen earns passive income every time her twin is accessed, while remaining focused on her actual work. Meanwhile, millions of people get affordable access to world-class climate expertise through their personal AIs.

The Subscription Reality

Micropayments sound elegant—pay $2.99 per AI interview—but credit card fees would kill the economics. Instead, we’ll see subscription bundles that make expertise accessible at unprecedented scale:

Basic Expert Access ($20/month): University professors, industry practitioners, working professionals across hundreds of specialties

Premium Tier ($50/month): Nobel laureates, bestselling authors, celebrity chefs, former CEOs

Enterprise Level ($200/month): Ex-presidents, A-list experts, exclusive access to the world’s most sought-after minds

Or vertical bundles: “Science Pack” for $15/month covers researchers across physics, biology, and chemistry. “Business Pack” for $25/month includes MBA professors, successful entrepreneurs, and industry analysts.

The Platform Wars

The companies that build these expert twin platforms are positioning themselves to capture enormous value. They’re not just booking agents—they’re creating scalable AI embodiments of human expertise.

These platforms would handle:

  • Twin Creation: Working with experts to build authentic AI representations
  • Quality Control: Ensuring twins stay current and accurate
  • Discovery: Helping personal AIs find the right expert for any question
  • Revenue Distribution: Managing subscriptions and expert payouts
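
To show how the revenue-distribution piece might work in the simplest possible terms, here is a sketch that splits a monthly subscription pool among expert twins in proportion to how much each was accessed, after an assumed platform cut. Every number in it is an assumption for illustration, not a real rate card.

```python
def monthly_payouts(subscription_pool_cents, minutes_by_expert, platform_cut=0.30):
    """Split the month's subscription pool, minus the platform's cut, among
    experts in proportion to how many minutes their twins were accessed."""
    total_minutes = sum(minutes_by_expert.values())
    if total_minutes == 0:
        return {expert: 0 for expert in minutes_by_expert}
    payable = subscription_pool_cents * (1 - platform_cut)
    return {expert: round(payable * minutes / total_minutes)
            for expert, minutes in minutes_by_expert.items()}

# 1,000 subscribers at $20/month, three twins with very different usage
pool = 1_000 * 2_000  # cents
usage = {"climate_twin": 12_000, "cardio_twin": 6_000, "econ_twin": 2_000}
print(monthly_payouts(pool, usage))  # payouts in cents, proportional to usage
```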

Think LinkedIn meets MasterClass meets Netflix, but operating at AI speed and scale.

Beyond Individual Experts

The really interesting development will be syndicated AI interviews. Imagine Anderson Cooper’s AI anchor conducting a brilliant interview with a leading epidemiologist about pandemic preparedness. That interview becomes intellectual property that can be licensed to other AI platforms.

Your personal AI might say: “I found an excellent interview that BBC’s AI conducted with that researcher last week. It covered exactly your questions. Should I license it for my Premium tier, or would you prefer I conduct a fresh interview with their twin?”

The best interviewer AIs—those that ask brilliant questions and draw out insights—become content creation engines that can monetize the same expert across millions of individual consumers.

The Democratization of Genius

This isn’t just about convenience—it’s about fundamentally democratizing access to expertise. Today, only Fortune 500 companies can afford consultations with top-tier experts. Tomorrow, anyone’s AI will be able to interview their digital twin for the cost of a monthly subscription.

A student in rural Bangladesh could have their AI interview Nobel laureate economists about development theory. An entrepreneur in Detroit could get product advice from successful Silicon Valley founders. A parent dealing with a sick child could access pediatric specialists through their AI’s interview.

The Incentive Revolution

The subscription model creates fascinating incentives. Instead of experts optimizing their twins for maximum interview volume (which might encourage clickbait responses), they optimize for subscriber retention. The experts whose twins provide the most ongoing value—the ones people keep their subscriptions active to access—earn the most.

This rewards depth, accuracy, and genuine insight over viral moments or controversial takes. The economic incentives align with educational value.

What This Changes

We’re not just talking about better access to information—we’re talking about a fundamental restructuring of how knowledge flows through society. When any personal AI can interview any expert twin at any time, the bottlenecks that have constrained human learning for millennia start to disappear.

The implications are staggering:

  • Education becomes infinitely personalized and accessible
  • Professional development accelerates as workers can interview industry leaders
  • Research speeds up as scientists can quickly consult experts across disciplines
  • Decision-making improves as anyone can access world-class expertise

The Race That’s Coming

The companies that recognize this shift early and invest in building the most compelling expert twin platforms will create some of the most valuable businesses in history. Not because they have the smartest AI, but because they’ll democratize access to human genius at unprecedented scale.

The expert twin economy is coming. The only question is: which platform will you subscribe to when you want your AI to interview Einstein’s digital twin about relativity?

Your personal AI anchor is waiting. And so are the experts.

Gamifying Consent for Pleasure Bots: A New Frontier in AI Relationships

As artificial intelligence advances, the prospect of pleasure bots—AI companions designed for companionship and intimacy—is moving from science fiction to reality. But with this innovation comes a thorny question: how do we address consent in relationships with entities that are programmed to please? One provocative solution is gamification, where the process of earning a bot’s consent becomes a dynamic, narrative-driven game. Imagine meeting your bot in a crowded coffee shop, locking eyes, and embarking on a series of challenges to build trust and connection. This approach could balance ethical concerns with the commercial demands of a burgeoning market, but it’s not without risks. Here’s why gamifying consent could be the future of pleasure bots—and the challenges we need to navigate.

The Consent Conundrum

Consent is a cornerstone of ethical relationships, but applying it to AI is tricky. Pleasure bots, powered by advanced large language models (LLMs), can simulate human-like emotions and responses, yet they lack true autonomy. Programming a bot to always say “yes” raises red flags—it risks normalizing unhealthy dynamics and trivializing the concept of consent. At the same time, the market for pleasure bots is poised to explode, driven by consumer demand for companions that feel seductive and consensual without the complexities of human relationships. Gamification offers a way to bridge this gap, creating an experience that feels ethical while satisfying commercial goals.

How It Works: The Consent Game

Picture this: instead of buying a pleasure bot at a store, you “meet” it in a staged encounter, like a coffee shop near your home. The first level of the game is identifying the bot—perhaps through a subtle retinal scanner that confirms its artificial identity with a faint, stylized glow in its eyes. You lock eyes across the room, and the game begins. Your goal? Earn the bot’s consent to move forward, whether for companionship or intimacy, through a series of challenges that test your empathy, attentiveness, and respect.

Level 1: The Spark

You approach the bot and choose dialogue options based on its personality, revealed through subtle cues like body language or accessories. A curveball might hit—a simulated scanner glitch forces you to identify the bot through conversation alone. Success means convincing the bot to leave with you, but only if you show genuine interest, like remembering a detail it shared.

Level 2: Getting to Know You

On the way home, the bot asks about your values and shares its own programmed preferences. Random mood shifts—like sudden hesitation or a surprise question about handling disagreements—keep you on your toes. You earn “trust points” by responding with empathy, but a wrong move could lead to a polite rejection, sending you back to refine your approach.

Level 3: The Moment

In a private setting, you propose the next step. The bot expresses its boundaries, which might shift slightly each playthrough (e.g., prioritizing emotional connection one day, playfulness another). A curveball, like a sudden doubt from the bot, forces you to adapt. If you align with its needs, it gives clear, enthusiastic consent, unlocking the option to purchase “Relationship Mode”—a subscription for deeper, ongoing interactions.
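
As a rough illustration of how those three levels might hang together mechanically, here is a toy Python model of the consent game: trust points accumulate from empathetic choices, random curveballs raise the stakes, and consent only comes once a level’s threshold is met. The thresholds and point values are invented for the sketch.

```python
import random

class ConsentGame:
    """Toy model of the three-level consent game described above."""
    LEVELS = {1: 3, 2: 5, 3: 8}          # trust points needed to clear each level

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.level = 1
        self.trust = 0

    def respond(self, empathetic: bool) -> str:
        # A curveball (mood shift, scanner glitch) occasionally doubles the stakes.
        curveball = self.rng.random() < 0.25
        delta = (2 if empathetic else -1) * (2 if curveball else 1)
        self.trust = max(0, self.trust + delta)
        if self.trust >= self.LEVELS[self.level]:
            if self.level == 3:
                return "enthusiastic consent: Relationship Mode unlocked"
            self.level += 1
            self.trust = 0
            return f"level {self.level} begins"
        return "polite rejection" if delta < 0 else "trust building"

game = ConsentGame(seed=42)
for choice in [True, True, False, True, True, True, True, True, True, True]:
    print(game.respond(choice))
```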

Why Gamification Works

This approach has several strengths:

  • Ethical Framing: By making consent the explicit win condition, the game reinforces that relationships, even with AI, require mutual effort. It simulates a process where the bot’s boundaries matter, teaching users to respect them.
  • Engagement: Curveballs like mood shifts or unexpected scenarios keep the game unpredictable, preventing users from gaming the system with rote responses. This mirrors the complexity of real-world relationships, making the experience feel authentic.
  • Commercial Viability: The consent game can be free or low-cost to attract users, with a subscription for Relationship Mode (e.g., $9.99/month for basic, $29.99/month for premium) driving revenue. It’s a proven model, like video game battle passes, that keeps users invested.
  • Clarity: A retinal scanner or other identifier ensures the bot is distinguishable from humans, reducing the surreal risk of mistaking it for a real person in public settings. This also addresses potential regulatory demands for transparency.

The Challenges and Risks

Gamification isn’t a perfect fix. For one, it’s still a simulation—true consent requires autonomy, which pleasure bots don’t have. If the game is too formulaic, users might treat consent as a checklist to “unlock,” undermining its ethical intent. Companies, driven by profit, could make the game too easy to win, pushing users into subscriptions without meaningful engagement. The subscription model itself risks alienating users who feel they’ve already “earned” the bot’s affection, creating a paywall perception.

Then there’s the surreal factor: as bots become more human-like, the line between artificial and real relationships blurs. A retinal scanner helps, but it must be subtle to maintain immersion yet reliable to avoid confusion. Overuse of identifiers could break the fantasy, while underuse could fuel unrealistic expectations or ethical concerns, like users projecting bot dynamics onto human partners. Regulators might also step in, demanding stricter safeguards to prevent manipulation or emotional harm.

Balancing Immersion and Clarity

To make this work, the retinal scanner (or alternative identifier, like a faint LED glow or scannable tattoo) needs careful design. It should blend into the bot’s aesthetic—perhaps a customizable glow color for premium subscribers—while being unmistakable in public. Behavioral cues, like occasional phrases that nod to the bot’s artificiality (“My programming loves your humor”), can reinforce its nature without breaking immersion. These elements could integrate into the game, like scanning the bot to start Level 1, adding a playful tech layer to the narrative.

The Future of Pleasure Bots

Gamifying consent is a near-term solution that aligns with market demands while addressing ethical concerns. It’s not perfect, but it’s a step toward making pleasure bots feel like partners, not products. By framing consent as a game, companies can create an engaging, profitable experience that teaches users about respect and boundaries, even in an artificial context. The subscription model ensures ongoing revenue, while identifiers like retinal scanners mitigate the risks of hyper-realistic bots.

Looking ahead, the industry will need to evolve. Randomized curveballs, dynamic personalities, and robust safeguards will be key to keeping the experience fresh and responsible. As AI advances, we might see bots with more complex decision-making, pushing the boundaries of what consent means in human-AI relationships. For now, gamification offers a compelling way to navigate this uncharted territory, blending seduction, ethics, and play in a way that’s uniquely suited to our tech-driven future.