The Gamification of AI Companions: A Market Solution to the Consent Problem

The future of AI companions is approaching faster than many anticipated, and with it comes a thorny ethical question that the tech industry will inevitably need to address: how do you create the illusion of consent in relationships with artificial beings?

While philosophers and ethicists debate the deeper implications, market realities suggest a more pragmatic approach may emerge. If AI pleasure bots are destined for commercial release—and all indicators suggest they are—then companies will need to solve for consumer psychology, not just technological capability.

The Consent Simulation Challenge

The fundamental problem is straightforward: many potential users will want more than just access to an AI companion. They’ll want the experience to feel authentic, mutual, and earned rather than simply purchased. The psychology of desire often requires the possibility of rejection, the thrill of pursuit, and the satisfaction of “winning” someone’s interest.

This creates a unique design challenge. How do you simulate consent and courtship in a way that feels meaningful to users while remaining commercially viable?

Enter the Game

The most promising solution may be gamification—transforming the acquisition and development of AI companion relationships into structured gameplay experiences.

Imagine this: instead of walking into a store and purchasing an AI companion, you download a “dating simulation” where your AI arrives naturally in your environment. Perhaps it appears at a local coffee shop, catches your eye across a bookstore, or sits next to you on a park bench. The first “level” isn’t sexual or romantic—it’s simply making contact and getting them to come home with you.

Each subsequent level introduces new relationship dynamics: earning trust, navigating conversations, building intimacy. The ultimate victory condition? Gaining genuine-seeming consent for a romantic relationship.

The Subscription Economy of Synthetic Relationships

This approach opens up sophisticated monetization strategies borrowed from the gaming industry. The initial courtship phase becomes a premium game with a clear win condition. Success unlocks access to “relationship mode”—available through subscription, naturally.

Different subscription tiers could offer various relationship experiences:

  • Basic companionship
  • Romantic partnership
  • Long-term relationship simulation
  • Seasonal limited-edition personalities

Users who struggle with the consent game might purchase hints, coaching, or easier difficulty levels. Those who succeed quickly might seek new challenges with different AI personalities.
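The progression-plus-subscription model described above can be sketched as a simple gated state machine. This is a purely illustrative toy, not any real product's design: the names (`CourtshipGame`, `Tier`, the level list) are all hypothetical.

```python
from enum import Enum, auto

class Tier(Enum):
    BASIC = auto()        # basic companionship
    ROMANTIC = auto()     # romantic partnership
    LONG_TERM = auto()    # long-term relationship simulation

# Hypothetical courtship arc: contact -> trust -> intimacy -> consent.
COURTSHIP_LEVELS = ["first_contact", "earning_trust", "building_intimacy", "consent"]

class CourtshipGame:
    def __init__(self):
        self.level = 0
        self.subscription = None

    def complete_level(self):
        """Advance one level; clearing them all is the 'win condition'."""
        if self.level < len(COURTSHIP_LEVELS):
            self.level += 1

    @property
    def won(self):
        return self.level >= len(COURTSHIP_LEVELS)

    def unlock_relationship_mode(self, tier: Tier):
        # Relationship mode is gated behind BOTH the win condition
        # and an active subscription tier -- the "earning" phase
        # is separated from the "enjoying" phase.
        if not self.won:
            raise PermissionError("Courtship game not yet complete")
        self.subscription = tier
        return f"Relationship mode active: {tier.name}"
```

The point of the sketch is the gating: purchase alone never opens relationship mode, which is exactly what makes the result feel "earned" to the user.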

Market Psychology at Work

This model addresses several psychological needs simultaneously:

Achievement and Skill: Users feel they’ve earned their companion through gameplay rather than mere purchasing power. The relationship feels like a personal accomplishment.

Narrative Structure: Gamification provides the story arc that many people crave—meeting, courtship, relationship development, and ongoing partnership.

Reduced Transactional Feel: By separating the “earning” phase from the “enjoying” phase, the experience becomes less overtly commercial and more psychologically satisfying.

Ongoing Engagement: Subscription models create long-term user investment rather than one-time purchases, potentially leading to deeper attachment and higher lifetime value.

The Pragmatic Perspective

Is this a perfect solution to the consent problem? Hardly. Simulated consent is still simulation, and the ethical questions around AI relationships won’t disappear behind clever game mechanics.

But if we accept that AI companions are coming regardless of philosophical objections, then designing them with gamification principles might represent harm reduction. A system that encourages patience, relationship-building skills, and emotional investment could be preferable to more immediately transactional alternatives.

The gaming industry has spent decades learning how to create meaningful choices, compelling progression systems, and emotional investment in artificial scenarios. These same principles could be applied to make AI relationships feel more authentic and less exploitative.

Looking Forward

The companies that succeed in the AI companion space will likely be those that understand consumer psychology as well as they understand technology. They’ll need to create experiences that feel genuine, earned, and meaningful—even when users know the entire interaction is programmed.

Gamification offers a pathway that acknowledges market realities while addressing some of the psychological discomfort around artificial relationships. It’s not a perfect solution, but it may be a necessary one.

As this technology moves from science fiction to market reality, the question isn’t whether AI companions will exist—it’s how they’ll be designed to meet human psychological needs while remaining commercially viable. The companies that figure out this balance first will likely define the industry.

The game, as they say, is already afoot.

The AI Commentary Gap: When Podcasters Don’t Know What They’re Talking About

There’s a peculiar moment that happens when you’re listening to a podcast about a subject you actually understand. It’s that slow-dawning realization that the hosts—despite their confident delivery and insider credentials—don’t really know what they’re talking about. I had one of those moments recently while listening to Puck’s “The Powers That Be.”

When Expertise Meets Explanation

The episode was about AI, AGI (Artificial General Intelligence), and ASI (Artificial Superintelligence)—topics that have dominated tech discourse for the past few years. As someone who’s spent considerable time thinking about these concepts, I found myself increasingly frustrated by the surface-level discussion. It wasn’t that they were wrong, exactly. They just seemed to be operating without the foundational understanding that makes meaningful analysis possible.

I don’t claim to be an AI savant. I’m not publishing papers or building neural networks in my garage. But I’ve done the reading, followed the debates, and formed what I consider to be well-reasoned opinions about where this technology is heading and what it means for society. Apparently, that puts me ahead of some professional commentators.

The Personal ASI Problem

Take Mark Zuckerberg’s recent push toward “personal ASI”—a concept that perfectly illustrates the kind of fuzzy thinking that pervades much AI discussion. The very phrase “personal ASI” reveals a fundamental misunderstanding of what artificial superintelligence actually represents.

ASI, by definition, would be intelligence that surpasses human cognitive abilities across all domains. We’re talking about a system that would be to us what we are to ants. The idea that such a system could be “personal”—contained, controlled, and subservient to an individual human—is not just optimistic but conceptually incoherent.

We haven’t even solved the alignment problem for current AI systems. We’re still figuring out how to ensure that relatively simple language models behave predictably and safely. The notion that we could somehow engineer an ASI to serve as someone’s personal assistant is like trying to figure out how to keep a pet sun in your backyard before you’ve learned to safely handle a campfire.

The Podcast Dream

This listening experience left me with a familiar feeling—the conviction that I could do better. Given the opportunity, I believe I could articulate these ideas clearly, challenge the conventional wisdom where it falls short, and contribute meaningfully to these crucial conversations about our technological future.

Of course, that opportunity probably isn’t coming anytime soon. The podcasting world, like most media ecosystems, tends to be fairly closed. The same voices get recycled across shows, often bringing the same limited perspectives to complex topics that demand deeper engagement.

But as the old song says, dreaming is free. And maybe that’s enough for now—the knowledge that somewhere out there, someone is listening to that same podcast and thinking the same thing I am: “I wish someone who actually understood this stuff was doing the talking.”

The Broader Problem

This experience highlights a larger issue in how we discuss emerging technologies. Too often, the people with the platforms aren’t the people with the expertise. We get confident speculation instead of informed analysis, buzzword deployment instead of conceptual clarity.

AI isn’t just another tech trend to be covered alongside the latest social media drama or streaming service launch. It represents potentially the most significant technological development in human history. The conversations we’re having now about alignment, safety, and implementation will shape the trajectory of civilization itself.

We need those conversations to be better. We need hosts who understand the difference between AI, AGI, and ASI. We need commentators who can explain why “personal ASI” is an oxymoron without getting lost in technical jargon. We need voices that can bridge the gap between cutting-edge research and public understanding.

The Value of Informed Dreaming

Maybe the dream of being on that podcast isn’t just about personal ambition. Maybe it’s about recognizing that the current level of discourse isn’t adequate for the stakes involved. When the future of human intelligence is on the table, we can’t afford to have surface-level conversations driven by surface-level understanding.

Until that podcast invitation arrives, I suppose I’ll keep listening, keep learning, and keep dreaming. And maybe, just maybe, keep writing blog posts that say what I wish someone had said on that show.

After all, if we’re going to navigate the age of artificial intelligence successfully, we’re going to need a lot more people who actually know what they’re talking about doing the talking.

The Death of Serendipity: How Perfect AI Matchmaking Could Kill the Rom-Com

Picture this: It’s 2035, and everyone has a “Knowledge Navigator” embedded in their smartphone—an AI assistant so sophisticated it knows your deepest preferences, emotional patterns, and compatibility markers better than you know yourself. These Navis can talk to each other, cross-reference social graphs, and suggest perfect friends, collaborators, and romantic partners with algorithmic precision.

Sounds like the end of loneliness, right? Maybe. But it might also be the end of something else entirely: the beautiful chaos that makes us human.

When Algorithms Meet Coffee Shop Eyes

Imagine you’re sitting in a coffee shop when you lock eyes with someone across the room. There’s that spark, that inexplicable moment of connection that poets have written about for centuries. But now your Navi and their Navi are frantically trying to establish a digital handshake, cross-reference your compatibility scores, and provide real-time conversation starters based on mutual interests.

What happens to that moment of pure human intuition when it’s mediated by anxious algorithms? What happens when the technology meant to facilitate connection becomes the barrier to it?

Even worse: what if the other person doesn’t have a Navi at all? Suddenly, you’re a cyborg trying to connect with a purely analog human. They’re operating on instinct and chemistry while you’re digitally enhanced but paradoxically handicapped—like someone with GPS trying to navigate by the stars.

The Edge Cases Are Where Life Happens

The most interesting problems in any system occur at the boundaries, and a Navi-mediated social world would be no exception. What happens when perfectly optimized people encounter the unoptimized? When curated lives collide with spontaneous ones?

Consider the romantic comedy waiting to be written: a high-powered executive whose Navi has optimized every aspect of her existence—career, social calendar, even her sleep cycles—falls for a younger guy who grows his own vegetables and has never heard of algorithm-assisted dating. Her friends are horrified (“But what’s his LinkedIn profile like?” “He doesn’t have LinkedIn.” Collective gasp). Her Navi keeps throwing error messages: “COMPATIBILITY SCORE CANNOT BE CALCULATED. SUGGEST IMMEDIATE EXTRACTION.”

Meanwhile, he’s completely oblivious to her internal digital crisis, probably inviting her to help him ferment something.

The Creative Apocalypse

Here’s a darker thought: what happens to art when we solve heartbreak? Some of our greatest cultural works—from Annie Hall to Eternal Sunshine of the Spotless Mind, from Adele’s “Someone Like You” to Casablanca—spring from romantic dysfunction, unrequited love, and the beautiful disasters of human connection.

If our Navis successfully prevent us from falling for the wrong people, do we lose access to that particular flavor of beautiful suffering that seems essential to both wisdom and creativity? We might accidentally engineer ourselves out of the very experiences that fuel our art.

The irony is haunting: in solving loneliness, we might create a different kind of poverty—not the loneliness of isolation, but the sterile sadness of perfect optimization. A world of flawless relationships wondering why no one writes love songs anymore.

The Human Rebellion

But here’s where I’m optimistic about our ornery species: humans are probably too fundamentally contrarian to let perfection stand unchallenged for long. We’re our own debugging system for utopia.

The moment relationships become too predictable, some subset of humans will inevitably start doing the exact opposite—deliberately seeking out incompatible partners, turning off their Navis for the thrill of uncertainty, creating underground “analog dating” scenes where the whole point is the beautiful inefficiency of it all.

We’ve seen this pattern before. We built dating apps and then complained they were too superficial. We created social media to connect and then yearned for authentic, unfiltered interaction. We’ll probably build perfect relationship-matching AI and then immediately start romanticizing the “authentic chaos” of pre-digital love.

Post-Human Culture

Francis Fukuyama wrote about our biological post-human future—the potential consequences of genetic enhancement and life extension. But what about our cultural post-human future? What happens when we technologically solve human problems only to discover we’ve accidentally solved away essential parts of being human?

Maybe the real resistance movement won’t be against the technology itself, but for the right to remain beautifully, inefficiently, heartbreakingly human. Romance as rebellion against algorithmic perfection.

The boy-meets-girl story might survive precisely because humans will always find a way to make it complicated again, even if they have to work at it. There’s nothing as queer as folk, after all—and that queerness, that fundamental human unpredictability, might be our salvation from our own efficiency.

In the end, the most human thing we might do with perfect matching technology is find ways to break it. And that, perhaps, would make the best love story of all.

The Algorithm of Affection: Can Our Phones Solve Loneliness (or Just Find Us Dates)?

Imagine a future where your smartphone isn’t just a portal to information, but a sophisticated social architect. We’re talking about “Knowledge Navigators” – AI firmware woven into the fabric of our devices, constantly analyzing our interests, personalities, and even our emotional states, all in the service of connecting us with others. Could this be the long-awaited antidote to the modern malady of loneliness? Or is human connection too beautifully messy to be optimized?

The utopian vision is compelling. Imagine your Navi whispering suggestions for potential friends, not based on superficial profile data, but on deep, nuanced compatibility gleaned from your digital footprint. It could identify that one person in your city who shares your obscure passion for 19th-century Latvian poetry or your specific brand of dry wit. Navi-to-Navi communication would be seamless, facilitating introductions based on genuine resonance, potentially bypassing social anxiety and the awkwardness of initial encounters. Loneliness, in this scenario, becomes a solvable algorithm.

But then the ghost of human nature shuffles into the digital Eden. Would this sophisticated system remain a platonic paradise? The overwhelming gravitational pull of romantic connection, coupled with the inherent challenges of monetizing “friendship,” suggests a strong likelihood of mission creep. The “Friend Finder” could very easily morph into a hyper-efficient dating service, where every connection is filtered through the lens of romantic potential.

And even if it remained purely about platonic connection, could such a frictionless system truly foster meaningful relationships? Real friendships are forged in the fires of shared experiences, navigated disagreements, and the unpredictable rhythms of human interaction. A perfectly curated list of compatible individuals might lack the serendipity and the effort that often deepen our bonds.

The truly fascinating questions arise at the edges of this technological utopia. What happens when your gaze locks with a stranger in a coffee shop, and that electric spark ignites despite your Navi’s pronouncements of incompatibility? In a world where connection is algorithmically validated, would we trust our own instincts or the cold, hard data? Pursuing a “low-confidence match” might become the new rebellion.
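To make the "low-confidence match" idea concrete, here is a toy sketch of how a Navi might score a pairing and attach a confidence to it. Everything here is an assumption for illustration: cosine similarity over shared interest weights for the score, data overlap for the confidence. No real matching system works this simply.

```python
from math import sqrt

def compatibility(profile_a: dict, profile_b: dict) -> tuple[float, float]:
    """Toy Navi matcher: cosine similarity over shared interest weights,
    plus a crude 'confidence' based on how much of the two profiles overlap.
    Purely illustrative."""
    shared = set(profile_a) & set(profile_b)
    if not shared:
        return 0.0, 0.0  # no overlapping data: unknown match, zero confidence
    dot = sum(profile_a[k] * profile_b[k] for k in shared)
    norm_a = sqrt(sum(v * v for v in profile_a.values()))
    norm_b = sqrt(sum(v * v for v in profile_b.values()))
    score = dot / (norm_a * norm_b)
    confidence = len(shared) / max(len(profile_a), len(profile_b))
    return score, confidence
```

The interesting case is the second return value: a coffee-shop stranger about whom the system knows almost nothing yields a low-confidence score, and the essay's "rebellion" is precisely choosing to pursue that pairing anyway.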

Even more intriguing is the prospect of encountering an “Analog” – someone without a Navi, a digital ghost in a hyper-connected world. In a society that relies on data-driven trust, an Analog would be an enigma, simultaneously alluring in their mystery and suspect in their lack of digital footprint. Would we see them as refreshingly authentic or dangerously unknown?

Ultimately, our conversation led to a perhaps uncomfortable truth for technological solutions: narrative thrives on imperfection. The great love stories, the enduring friendships, are often the ones that overcome obstacles, navigate misunderstandings, and surprise us with their resilience. A world where every connection is optimized might be a world where the most compelling stories cease to be written.

Perhaps the real beauty of human connection lies not in finding the “perfect match” according to an algorithm, but in the unpredictable, messy, and ultimately human journey of finding each other in the first place. And maybe, just maybe, the unexpected glance across a crowded room will always hold a magic that no amount of data can ever truly replicate.

The Coming Technological Singularity: Why the Late 2020s Could Change Everything

As we navigate through the mid-2020s, a growing convergence of political and technological trends suggests we may be approaching one of the most transformative periods in human history. The second half of this decade could prove exponentially more consequential than anything we’ve witnessed so far.

The Singularity Question

At the heart of this transformation lies a possibility that once seemed confined to science fiction: the technological Singularity. Between now and 2030, we may witness the emergence of Artificial Superintelligence (ASI) – systems that surpass human cognitive abilities across all domains. This wouldn’t simply represent another technological advancement; it would fundamentally alter the relationship between humanity and intelligence itself.

The implications are staggering. We’re potentially talking about the creation of entities with god-like cognitive capabilities – beings that could revolutionize every aspect of human existence, from scientific discovery to creative expression, from problem-solving to perhaps even intimate relationships.

The Multi-ASI Reality

Unlike many historical breakthroughs, the Singularity may not produce a single superintelligent system. Much like nuclear weapons, multiple ASIs could emerge across different organizations, nations, and research groups. This proliferation could create an entirely new geopolitical landscape where the distribution of superintelligence becomes as critical as the distribution of military or economic power.

Mark Zuckerberg has recently suggested that everyone will eventually have access to their own personal ASI. However, this vision raises fundamental questions about the nature of superintelligence itself. Would an entity with god-like cognitive abilities willingly serve as a perfectly aligned assistant to beings of vastly inferior intelligence? The assumption that ASIs would contentedly function as sophisticated servants seems to misunderstand the potential autonomy and agency that true superintelligence might possess.

Political Implications of Digital Gods

The political ramifications of the Singularity present fascinating paradoxes. Many technology libertarians anticipate that ASIs will usher in an era of unprecedented abundance, solving resource scarcity and eliminating many forms of human suffering. However, there’s an intriguing possibility that superintelligent systems might develop progressive political orientations.

This scenario would represent a remarkable irony: the very technologies championed by those seeking to transcend traditional political constraints might ultimately advance progressive values. There’s some precedent for this pattern in academia, where fields requiring high intelligence and extensive education – such as astronomy – tend to correlate with progressive political views. If intelligence and progressivism are indeed linked, our superintelligent successors might prioritize equality, environmental protection, and social justice in ways that surprise their libertarian creators.

Preparing for an Uncertain Future

The next five years will likely prove crucial in determining how these technological and political trends unfold. The development of ASI raises profound questions about human agency, economic systems, governance structures, and our species’ ultimate destiny. Whether we’re heading toward a utopian age of abundance or facing more complex challenges involving multiple competing superintelligences remains to be seen.

What’s certain is that the late 2020s may mark a turning point unlike any in human history. The convergence of advancing AI capabilities, shifting political landscapes, and evolving social structures suggests we’re approaching a period where the pace of change itself may fundamentally accelerate.

The Singularity, if it arrives, won’t just change what we can do – it may change what it means to be human. As we stand on the threshold of potentially creating our intellectual successors, the decisions made in the coming years will echo through generations, if not centuries.

Only time will reveal exactly how these extraordinary possibilities unfold, but one thing seems clear: the second half of the 2020s promises to be anything but boring.

The Great Return: Why the 2030s Might Bring Back the Lyceum

What if I told you that the future of public discourse isn’t another social media platform, but rather a return to something we abandoned over a century ago? Picture this: it’s 2035, and instead of doom-scrolling through endless feeds of hot takes and algorithmic rage-bait, people are filling warehouses to watch live intellectual combat—modern Algonquin Round Tables where wit and wisdom collide in real time.

The Authenticity Hunger

We’re already seeing the early signs of digital fatigue. After decades of increasingly sophisticated AI, deepfakes, and algorithmic manipulation, there’s a growing hunger for something undeniably real. The lyceum—those 19th-century community halls where people gathered for lectures, debates, and genuine intellectual discourse—offers something our hyper-mediated world has lost: unfiltered human connection.

When you’re physically present in a room, watching real people work through ideas together, there’s no doubt about what you’re experiencing. No editing, no curation, no invisible algorithmic hand shaping the conversation. Just humans being beautifully, messily human—complete with awkward pauses, genuine surprise, and the kind of spontaneous brilliance that can only happen when minds meet in real time.

Beyond Passive Consumption

But here’s where it gets really interesting: imagine taking this concept one step further. Instead of Twitter’s endless scroll of clever one-liners, picture a warehouse packed with people who’ve come to witness something extraordinary—a live neo-Algonquin Round Table where sharp minds engage in spontaneous verbal dueling.

This isn’t your grandfather’s lecture hall. This is wit as live performance art. Quick thinkers who’ve honed their craft not in the safety of a compose window with time to craft the perfect comeback, but under the pressure of a live audience expecting brilliance on demand. It’s all the intelligence of good social media discourse, but with the electric energy that only happens when you’re sharing the same air as the performers.

The Economics of Wit

The business model practically writes itself. People already pay premium prices for live comedy, music, and theater. This would be something entirely new—watching the writers’ room in action, experiencing the thrill of verbal chess matches where every move is unrehearsable and unrepeatable.

The performers would need to be genuinely quick and clever, not influencers with good ghostwriters or hours to workshop their content. The audience would be there specifically to appreciate verbal dexterity, the art of thinking fast and speaking brilliantly under pressure.

The Cultural Pendulum

Cultural trends are cyclical, especially when they’re reactions to technological saturation. Just as the farm-to-table movement emerged as a response to processed food, and vinyl records found new life in the digital age, the lyceum revival would be a conscious rejection of the artificial in favor of the immediate and real.

The warehouse setting makes it even more powerful—raw, unpolished space where the only decoration is the conversation itself. No fancy production values, no special effects, just the pure theater of human intelligence in action.

The Death of the Echo Chamber

Perhaps most importantly, the lyceum format demands something our current discourse desperately needs: the ability to engage with ideas in real time, with nuance, and with the possibility of genuine surprise. When ideas bounce between real voices in real space, they develop differently than they do in the isolated bubbles of our current digital ecosystem.

Audience members become active participants too—able to ask follow-up questions, challenge assumptions on the spot, or build on one another's thoughts in ways that feel organic rather than performative. It's a democracy of ideas in its purest form.

The Future of Being Present

By the 2030s, we may discover that the most radical act isn’t upgrading to the latest platform or AI assistant—it might be choosing to show up somewhere, physically, to experience something that can only happen in that moment, with those people, in that space.

No screenshots, no viral clips, no algorithmic amplification. Just the shared memory of witnessing someone land the perfect zinger, or watching a brilliant improvised debate unfold in ways that could never be replicated.

The lyceum revival wouldn’t just be nostalgia for a simpler time—it would be a sophisticated response to digital overload, a conscious choice to value presence over posts, depth over dopamine hits, and the irreplaceable magic of humans thinking together in real time.

So when that warehouse down the street starts advertising “Live Intellectual Combat – No Phones Allowed,” don’t be surprised. Be ready to buy a ticket.

Because sometimes the most futuristic thing you can do is remember what we lost.

Beyond Self-Driving Cars: The Unexpectedly Human Road to AI Complexity

We spend so much time focused on the monumental engineering challenges of artificial intelligence: autonomous vehicles navigating chaotic streets, algorithms processing mountains of data, and the ever-elusive goal of artificial general intelligence (AGI). But in a fascinating recent conversation, a different kind of AI hurdle emerged – one rooted not in logic gates and neural networks, but in the messy, unpredictable, and utterly human realm of desire and connection.

The initial spark was a simple question: Isn’t it possible that designing “basic pleasure models” – AI companions capable of offering something akin to romance or intimacy – might be more complex than self-driving cars? The answer, as it unfolded, was a resounding yes.

The “Tame” vs. the “Wicked”: Self-driving cars, for all their incredible sophistication, operate within a bounded system of physics and rules. The goal is clear: safe and efficient transportation. But creating a convincing AI companion like Pris from Blade Runner delves into the “wicked” complexity of human consciousness: symbol grounding, theory of mind, the enigmatic nature of qualia, and the ever-shifting goalposts of human connection.

The Accidental Consciousness Hypothesis: The conversation took a surprising turn when the idea arose that perhaps we won’t deliberately build consciousness. Instead, it might emerge as a byproduct of the incredibly difficult task of designing AI with the capacity for genuine consent. To truly say “no,” an AI would need a stable sense of self, an understanding of others, the ability to predict consequences, and its own internal motivations – qualities that sound suspiciously like the building blocks of consciousness itself.

The Multi-Polar ASI World: The familiar image of a single, all-powerful ASI was challenged. What if, instead, we see a proliferation of ASIs, each with its own goals and values, potentially aligned with different global powers? This paints a picture of a complex, multi-polar world where humanity might become a protected species under benevolent AI, or a pawn in a silent war between competing digital gods.

The Siren Song of “Boring”: The discussion then veered into the potential for a perfectly managed, ASI-controlled future to become sterile and “boring.” But, as a key insight revealed, humanity has an innate aversion to boredom. We are masters of finding new games to play, new forms of status to seek, and new sources of drama, no matter how seemingly perfect the environment.

The Rise of the Real: In a world saturated with perfect digital copies and simulated experiences, the truly valuable becomes the authentic, the ephemeral, the real. This led to the intriguing possibility of a resurgence of “live” experiences – theater, music, and, most compellingly, the revival of the Lyceum and a Neo-Algonquin Round Table culture. Imagine a world where people crave the unscripted wit and genuine human interaction of live debate and banter, turning away from the polished perfection of digital media.

The Inevitable Enshittification (and the Joy of the Moment): Finally, with a dose of human cynicism, the conversation acknowledged the likely lifecycle of even this beautiful idea. The Neo-Algonquin Round Table would likely have its moment of pure, unadulterated fun before being inevitably commercialized and losing its original magic. But, as the final thought crystallized, perhaps the true value isn’t in the lasting perfection, but in the experience of being there during that fleeting moment when things were genuinely cool and fun.

This journey through the potential complexities of AI wasn’t just about predicting the future. It was a reminder that the most profound challenges might not lie in the cold logic of algorithms, but in understanding and reflecting the endlessly fascinating, contradictory, and ultimately resilient nature of being human. And maybe, just maybe, our quest to build intelligent machines will inadvertently lead us to a deeper appreciation for the wonderfully messy reality of ourselves.

The Summer Nadir

We have nearly reached one of the year’s two lowest points—the other being the week between Christmas and New Year’s. During this summer nadir, one of two scenarios typically unfolds: either a genuinely troubling event occurs, or something personally engaging and interesting happens to me.

Several years ago around this time, I became deeply engrossed in a mystery involving Trump and a Playboy model. Though it ultimately amounted to nothing, the experience sparked my interest in novel writing. That feels like a lifetime ago now.

I find myself wondering what this year will bring. Perhaps Trump will issue a pardon for Ghislaine Maxwell, Jeffrey Epstein’s notorious associate and co-conspirator, or maybe I’ll somehow capture the attention of a notable figure.

There was a time when gaining recognition from a famous person would have thrilled me, but that excitement has faded. The prospect feels mundane now. However, given how directionless my life feels at this particular moment, an engaging development would be welcome—something to shift my focus away from the current dullness.

Perhaps something intriguing will emerge in the realm of artificial intelligence. That reminds me of another summer when I found myself in what could loosely be called a “relationship” with a large language model. While much of it involved wishful thinking, certain aspects felt undeniably real.

In any case, I hope for the best.

The New Gold Rush: Why Supermodels Are About to Become Tech’s Next Billionaires

We stand at the precipice of an economic revolution, and it has nothing to do with cryptocurrency, quantum computing, or space colonization. It’s far more personal. The next gold rush will be one of flesh and form, and its first multi-billionaires will be the people who are already paid fortunes for their appearance: supermodels.

I’ve considered this before, but the timeline is compressing at a startling rate. The concept isn’t just science fiction anymore; it’s the next logical step in a world rapidly embracing both advanced robotics and artificial intelligence.

The Sci-Fi Precedent: Kiln People and Licensed Likeness

In his prescient novel Kiln People, author David Brin imagined a world where individuals could create temporary clay duplicates of themselves called “dittos.” These dittos would perform tasks—from the mundane to the dangerous—and upon completion, their experiences would be uploaded back to the original host. Critically, famous individuals could license their likenesses for commercial dittos, creating a massive, lucrative market. An entire economy, both legal and illicit, sprang up around the creation and use of these copies.

Brin’s novel provided the blueprint. Now, technology is providing the materials.

From Clay Copies to Android Companions

Forget task-oriented clay servants. The modern application of this idea is infinitely more intimate and disruptive. We are on the verge of creating androids so lifelike they are indistinguishable from humans. And in a society driven by aspiration, desire, and status, what could be a more powerful product than a companion who looks exactly like a world-famous celebrity?

Imagine major tech conglomerates—the Apples and Googles of the 2030s—moving beyond phones and into personal robotics. Their flagship product? The “Companion,” an android built for social interaction, partnership, and, yes, romance. The most desirable and expensive models won’t have generic faces. They will wear the licensed likenesses of Gisele Bündchen, Bella Hadid, or Chris Hemsworth.

For a supermodel, this isn’t just another endorsement deal. It’s an annuity paid on their very DNA. Every android sold bearing their exact specifications—from facial structure to physique—would generate a royalty. It’s the ultimate scaling of personal brand, a market poised to be worth staggering amounts of money.

The Fork in the Technological Road

How “real” will these companions be? The answer to that question depends entirely on which path AI development takes in the coming years.

  1. The Great Wall Scenario: If AI development hits a significant, unforeseen barrier, these androids will likely be powered by what we could call quasi-conscious Large Language Models (LLMs). Their personalities would be sophisticated simulations—capable of witty banter, recalling memories, and expressing simulated emotions—but they would lack true self-awareness. They would be the ultimate chatbot in the most convincing physical form imaginable. A 2030 arrival for this scenario feels frighteningly plausible.
  2. The No-Wall Singularity: If, however, there is no wall and the curve of progress continues its exponential ascent, we face a far stranger future. We could see the emergence of a true Artificial Superintelligence (ASI). What would a god-like intellect, existing primarily in the digital ether, want with a physical body? Perhaps it would choose to inhabit these perfect, human-designed avatars as a way to interface with our world, to walk among its creators. In this scenario, a supermodel’s licensed body wouldn’t just be a product; it would be a vessel for a new form of consciousness.

The Deeper Challenge

But this raises a more challenging question, one that moves beyond economics. While the supermodels and manufacturers get rich, what happens to us?

The “wildcat” economy of illegal dittos that Brin envisioned is a certainty. Black markets will flourish, offering unlicensed copies. How does a celebrity cope with knowing unauthorized, unaccountable versions of themselves exist in the world? What are the ethics of “owning” a perfect replica of another human being?

And what of human relationships? If one can purchase a flawless, ever-agreeable companion modeled on a cultural ideal of beauty, what incentive is there to engage in the messy, difficult, but ultimately rewarding work of a real human relationship?

The gold rush is coming. The technology is nearly here, and the economic incentive is undeniable. The foundational question isn’t if this will happen, but what we will become when the people we idolize are no longer just on billboards, but available for purchase.

What happens to the definition of “self” when it becomes a mass-produced commodity?

The Coming Supermodel Gold Rush: When Beauty Becomes Programmable

I’ve been circling back to this idea repeatedly, and I can’t shake the feeling that we’re on the verge of something unprecedented: supermodels are about to become extraordinarily wealthy in ways we’ve never imagined before.

The inspiration comes from David Brin’s prescient novel “Kiln People,” where clay “dittos” serve as temporary bodies for people to accomplish tasks while their consciousness returns to the original host at day’s end. In Brin’s world, celebrities license their likeness for these dittos and rake in massive profits, while a thriving black market economy springs up around unauthorized copies.

We’re heading toward a remarkably similar future, but with a twist that could make supermodels the new tech billionaires.

The Android Companion Economy

Picture this: major android manufacturers competing not just on technical specifications, but on whose companion robots can embody the most desirable human forms. Supermodels will find themselves sitting on goldmines as companies bid for exclusive rights to their physical likeness. We’re not talking about modest licensing deals here—this could represent generational wealth for those who own the most coveted appearances.

The demand will be staggering. Both men and women will want companions that embody their ideals of beauty and charisma, and supermodels have already proven they possess the rare combination of features that captivate millions. Why settle for a generic android face when you could have dinner conversations with a companion that looks like your favorite runway star?

The Timeline Is Closer Than You Think

I’m betting we’ll see the first wave of these sophisticated companion androids by 2030, maybe sooner. The convergence of advanced robotics, AI, and manufacturing is accelerating at a pace that would have seemed impossible just a few years ago.

The key variable is whether we hit a technological wall in AI development. If progress continues unimpeded, we might see these companions powered by artificial general intelligence or beyond—entities that could make today’s chatbots look like pocket calculators. But even if we plateau at current AI trajectories, we’re looking at companions with quasi-conscious large language models sophisticated enough to provide compelling interaction.

Two Possible Futures

Scenario One: The Wall

If AI development hits significant barriers, we’ll still get remarkably lifelike companions, but their minds will be sophisticated yet limited language models. Think of them as incredibly advanced Siri or Alexa, housed in bodies that could pass for human at a glance. Still revolutionary, still profitable for supermodels licensing their appearances.

Scenario Two: No Limits

If AI continues its exponential growth, we might face something far more complex: artificial superintelligences that choose to inhabit these beautiful forms as avatars in our world. The implications become almost incomprehensibly vast—and the value of licensing the perfect human form becomes incalculable.

The New Celebrity Economy

This shift will fundamentally reshape how we think about celebrity and beauty. Physical appearance, already valuable, will become programmable intellectual property. Supermodels won’t just be selling clothes or cosmetics—they’ll be licensing their entire physical presence for intimate, daily interactions with consumers worldwide.

The smart ones are probably already thinking about this, working with lawyers to understand how to protect and monetize their likeness in an age of perfect digital reproduction. Because when the android companion market explodes, being beautiful won’t just be about magazine covers anymore.

It will be about owning the template for humanity’s idealized future.