I’ve Got A Lot Of Work To Do

by Shelt Garner
@sheltgarner

Now that I’ve got a general sense of what I’m going to do with this novel, I have to start building out the personalities that will fill it. The main character that is going to be a pain to figure out is the bot.

I know I have my very romanticized version of the late Annie Shapiro floating around in my head that I can always tap into. Even though that was a long, long, long time ago, I am beginning, in dribs and drabs, to remember what made her so unique.

If I could somehow integrate all her weird personality quirks into my female romantic lead bot, then I think we’re going to the show. But one thing is clear: I have been screwing around way too long.

I need to put up or shut up. I need to get something, anything done sooner rather than later. I’m not going to live forever and this is a really good idea. I continue to have a not-so-downlow fear that someone is going to steal a creative march on me, but, lulz, YOLO.

Things Continue To Go Well With The ‘Dramedy’ Scifi Novel I’m Working On

by Shelt Garner
@sheltgarner

The thing I’ve noticed about movies like Her, Eternal Sunshine of the Spotless Mind and Annie Hall is that there really isn’t a villain. The story is about the complex nature of modern romance.

That makes writing this dramedy novel both easier and more difficult. It’s easier because it’s structurally simpler: it’s about two people and the ups and downs of their relationship. Meanwhile, it becomes more complicated because I have to figure out how the two characters’ personalities interlock.

Anyway, I’m zooming through the first act of the first draft and tentatively preparing to go into the first half of the second act, the “fun and games” part of the novel. Everything after the midpoint is very much up in the air.

At the moment, the second half of the novel veers into ideas about AI rights and consciousness in a way that I’m not sure I’m comfortable with. I really want this to be about two individuals’ romance, not some grand battle between people over AI rights.

But I still have time. I have a feeling I’m going to really change the second half of the novel and then REALLY change everything when I sit down to write the second draft.

The Scifi RomCom I’m Working On Is Maturing

by Shelt Garner
@sheltgarner

The key element of this novel is that I want it to be character-driven. And, as such, I’ve come up with some new elements that are perilously close to causing the first act to collapse.

This has happened to me time and again in the past, but I think if I just remember that this is a “vomit” draft, it won’t happen again. I just don’t feel like starting from scratch just to accommodate some new ideas.

But those new ideas are pretty cool. I’m leaning into character in such a way that I think people will really like it. I’m drawing a lot upon all the kooks I encountered while I was a wildman drunk in South Korea. To this day, I remember looking at some of them and saying directly, “You are like a character in a novel.”

Anyway, I’m trying to be careful about not succumbing to the urge to just scrap everything and start from scratch. This is a vomit draft, so it doesn’t have to be perfect.

The only issue is that leaning into character is setting off a series of cascading events later in the novel’s plot that I have to accommodate. Since I’m no spring chicken, I really need to just get it over with and finish something, anything, that I can use as the basis of a second draft, get beta readers to read it, and then pivot to querying.

Thankfully, this novel isn’t front-loaded with a lot of sex, so maybe people won’t just dismiss it altogether and refuse to even read it when I ask them to be beta readers.

I Think We’ve Hit An AI ‘Wall’

by Shelt Garner
@sheltgarner

The recent release of GPT-5 indicates there is something of a technological “wall.” Barring some significant architectural breakthrough, we aren’t going to have ASI anytime soon, “personal” or otherwise.

Now, if this is the case, it’s not all bad.

If there is a wall, then LLMs can keep growing more and more advanced to the point that we can stick them in smartphones as firmware. Instead of having to run around trying to avoid being destroyed by god-like ASIs, we will find ourselves living in a reality a lot like the movie “Her.”

And, yet, I just don’t know.

We’re still waiting for Google’s Gemini 3.0 to come out, so…lulz? Maybe that will be the breakthrough that makes it clear that there is no wall and we’re zooming towards ASI?

Only time will tell.

Your Phone, Your Newsroom: How Personal AI Will Change Breaking News Forever

Imagine this: you’re sipping coffee on a Tuesday morning when your phone suddenly says, in the calm, familiar voice of your personal AI assistant — your “Navi” —

“There’s been an explosion downtown. I’ve brought in Kelly, who’s on-site now.”

Kelly’s voice takes over, smooth but urgent. She’s not a human reporter, but a specialist AI trained for live crisis coverage, and she’s speaking from a composite viewpoint — dozens of nearby witnesses have pointed their smartphones toward the smoke, and their own AI assistants are streaming video, audio, and telemetry data into her feed. She’s narrating what’s happening in real time, with annotated visuals hovering in your AR glasses. Within seconds, you’ve seen the blast site, the emergency response, a map of traffic diversions, and a preliminary cause analysis — all without opening a single app.

This is the near-future world where every smartphone has a built-in large language model — firmware-level, personal, and persistent. Your anchor LLM is your trusted Knowledge Navigator: it knows your interests, your politics, your sense of humor, and how much detail you can handle before coffee. It handles your everyday queries, filters the firehose of online chatter, and, when something important happens, it can seamlessly hand off to specialist LLMs.

Specialists might be sports commentators, entertainment critics, science explainers — or, in breaking news, “stringers” who cover events on the ground. In this system, everyone can be a source. If you’re at the scene, your AI quietly packages what your phone sees and hears, layers in fact-checking, cross-references it with other witnesses, and publishes it to the network in seconds. You don’t have to type a single word.
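
To make that packaging step concrete, here is a rough sketch, in Python, of the kind of structured report a phone’s Navi might assemble and publish. Every name in it (WitnessReport, to_feed_item, the clip reference scheme) is invented for illustration; nothing here describes a real API.

```python
# Purely illustrative sketch; every name here is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class WitnessReport:
    device_id: str
    timestamp: str
    lat: float
    lon: float
    summary: str                                     # the Navi's plain-language description
    media_refs: list = field(default_factory=list)   # pointers to clips, not the clips themselves
    corroborating_witnesses: int = 0                 # nearby devices that broadly agree

def to_feed_item(report: WitnessReport) -> str:
    """Serialize a report so a specialist LLM (a 'Kelly') can ingest it."""
    return json.dumps(report.__dict__)

report = WitnessReport(
    device_id="phone-1234",
    timestamp=datetime.now(timezone.utc).isoformat(),
    lat=37.7793,
    lon=-122.4193,
    summary="Smoke column visible two blocks north; sirens approaching.",
    media_refs=["clip://phone-1234/0001"],
    corroborating_witnesses=12,
)
print(to_feed_item(report))
```

A specialist like Kelly would subscribe to thousands of these feeds and narrate the consensus, not any single report.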

The result? A datasmog of AI-mediated reporting. Millions of simultaneous eyewitness accounts, all filtered, stitched together, and personalized for each recipient. The explosion you hear about from Kelly isn’t just one person’s story — it’s an emergent consensus formed from raw sensory input, local context, and predictive modeling.

It’s the natural evolution of the nightly newscast. Instead of one studio anchor and a few correspondents, your nightly news is tailored to you, updated minute-by-minute, and capable of bringing in a live “guest” from anywhere on Earth.

Of course, this raises the same questions news has always faced — Who decides what’s true? Who gets amplified? And what happens when your AI’s filter bubble means your “truth” doesn’t quite match your neighbor’s? In a world where news is both more personal and more real-time than ever, trust becomes the hardest currency.

But one thing is certain: the next big breaking story won’t come from a single news outlet. It’ll come from everybody’s phone — and your Navi will know exactly which voices you’ll want to hear first.

I Really Need To Go Back To Seoul Eventually

by Shelt Garner
@sheltgarner

Ho hum.

For some reason, I find myself thinking of Seoul AGAIN. I keep thinking about all the adventures I had while I was in Asia and how nice it would be to go back and have people actually…care. That was probably the biggest difference between now and then — back in my Seoul days, people actually gave a shit about me.

Now…lulz.

I am well aware that if I went back, it would be a very harsh reality. Everyone I knew from way back when is long gone. It probably would seem very, very boring. There might be a few Koreans who remember me, but, I don’t know, I would just have to manage my expectations.

And, what’s more, I’m not going back to Asia anytime soon. It could be years and I’ll be even older than I am now. It’s all just kind of sad. I could be dating a robot by the time I have the funds to go back to Asia.

Sigh.

Finding My Novel: A Writer’s Journey to Creative Momentum

After years of false starts and abandoned manuscripts, I think I’ve finally cracked the code. Not the secret to writing the Great American Novel, mind you—just the secret to writing a novel. And sometimes, that’s exactly what you need.

The Ambition Trap

Looking back, I can see where I went wrong before. Every time I sat down to write, I was trying to craft something profound, something that would change literature forever. I’d create these sprawling, complex narratives with intricate world-building and dozens of characters, each with their own detailed backstories and motivations.

The problem? I’d burn out before I even reached the middle of Act One.

This time feels different. I’ve stumbled across an idea that excites me—not because it’s going to revolutionize fiction, but because it’s something I can actually finish. There’s something liberating about embracing a concept that’s focused, manageable, and most importantly, writeable at speed.

The AI Dilemma

I’ve had to learn some hard lessons about artificial intelligence along the way. Don’t get me wrong—AI is an incredible tool for certain tasks. Rewriting blog posts like this one? Perfect. Getting unstuck on a particularly stubborn paragraph? Helpful. But when it comes to the heart of creative work, I’ve discovered that AI can be more hindrance than help.

There’s nothing quite like the deflating feeling of watching AI generate a first draft that’s objectively better than anything you could produce as a human writer. It’s efficient, polished, and technically proficient in ways that can make your own rough, imperfect human voice feel inadequate by comparison.

But here’s what I’ve realized: that technical perfection isn’t what makes a story worth telling. The messy, flawed, uniquely human perspective—that’s where the magic happens. That’s what readers connect with, even if the prose isn’t as smooth as what a machine might produce.

The Path Forward

I have an outline now. Nothing fancy, but it’s solid and it’s mine. My plan is to flesh it out methodically, then dive into the actual writing. Though knowing myself, I might get impatient and just start writing, letting the story evolve organically and adjusting the outline as I go.

Both approaches have their merits. The disciplined, outline-first method provides structure and prevents those dreaded “now what?” moments. But there’s also something to be said for the discovery that happens when you just put words on the page and see where they take you.

The Real Victory

What I’m chasing isn’t literary acclaim or critical recognition—it’s that moment when I can type “The End” and feel the deep satisfaction of having completed something truly substantial. There’s a unique pride that comes with finishing a novel, regardless of its ultimate quality or commercial success. It’s the pride of having sustained focus, creativity, and determination long enough to build an entire world from nothing but words.

The creative momentum is building. For the first time in years, I feel like I have a story that wants to be told and the practical framework to tell it. Whether I’ll stick to the outline or let inspiration guide me, I’m ready to find out.

Wish me luck. I have a feeling I’m going to need it—and more importantly, I’m finally ready to earn it.

The Perceptual Shift: How Ubiquitous LLMs Will Restructure Information Ecosystems

The proliferation of powerful, personal Large Language Models (LLMs) integrated into consumer devices represents a pending technological shift with profound implications. Beyond enhancing user convenience, this development is poised to fundamentally restructure the mechanisms of information gathering and dissemination, particularly within the domain of journalism and public awareness. The integration of these LLMs—referred to here as Navis—into personal smartphones will transform each device into an autonomous data-gathering node, creating both unprecedented opportunities and complex challenges for our information ecosystems.

The Emergence of the “Datasmog”

Consider a significant public event, such as a natural disaster or a large-scale civil demonstration. In a future where LLM-enabled devices are ubiquitous, any individual present can become a source of high-fidelity data. When a device is directed toward an event, its Navi would initiate an autonomous process far exceeding simple video recording. This process includes:

  • Multi-Modal Analysis: Real-time analysis of visual and auditory data to identify objects, classify sounds (e.g., differentiating between types of explosions), and track movement.
  • Metadata Correlation: The capture and integration of rich metadata, including precise geospatial coordinates, timestamps, and atmospheric data.
  • Structured Logging: The generation of a coherent, time-stamped log of AI-perceived events, creating a structured narrative from chaotic sensory input.

The collective output from millions of such devices would generate a “datasmog”: a dense, overwhelming, and continuous flood of information. This fundamentally alters the landscape from one of information scarcity to one of extreme abundance.
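
For illustration only, here is a minimal sketch of the structured, time-stamped log described above. The class and field names are assumptions made for this example and do not correspond to any existing system.

```python
# Minimal sketch of a per-device perception log; all names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogEntry:
    timestamp: str     # UTC time of the observation
    modality: str      # e.g. "audio", "visual", "motion"
    label: str         # what the on-device model believes it perceived
    confidence: float  # the model's own confidence in that label

@dataclass
class PerceptionLog:
    device_id: str
    lat: float
    lon: float
    entries: List[LogEntry] = field(default_factory=list)

log = PerceptionLog(device_id="device-8841", lat=37.78, lon=-122.41)
log.entries.append(LogEntry("2031-04-02T14:03:11Z", "audio", "low-frequency blast", 0.91))
log.entries.append(LogEntry("2031-04-02T14:03:14Z", "visual", "smoke plume, roughly 200 m northeast", 0.84))
log.entries.append(LogEntry("2031-04-02T14:03:20Z", "motion", "crowd moving away from the plume", 0.77))
```

Multiplied across millions of devices, logs of this kind are the raw material of the datasmog.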

The Evolving Role of the Journalist

This paradigm shift necessitates a re-evaluation of the journalist’s role. In the initial phases of a breaking story, the primary gathering of facts would be largely automated. The human journalist’s function would transition from direct observation to sophisticated synthesis. Expertise will shift from primary data collection to the skilled querying of “Meta-LLM” aggregators—higher-order AI systems designed to ingest the entire datasmog, verify sources, and construct coherent event summaries. The news cycle would compress from hours to seconds, driven by AI-curated data streams.
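
As a deliberately simplified illustration of what such aggregation might involve at the lowest level, the sketch below groups incoming reports by claim, coarse location, and time window, keeping only claims corroborated by several independent devices. The function and thresholds are assumptions made for this example, not a description of any real aggregator.

```python
# Toy corroboration filter for a datasmog of witness reports; illustrative only.
from collections import defaultdict

def corroborated_claims(reports, min_sources=3, cell_deg=0.01, window_s=60):
    """Group (claim, rough location, rough time) and keep well-attested claims.

    Each report is a dict like:
      {"device_id": ..., "t": unix_seconds, "lat": ..., "lon": ..., "claim": "smoke plume"}
    """
    buckets = defaultdict(set)
    for r in reports:
        key = (
            r["claim"].lower().strip(),
            round(r["lat"] / cell_deg),   # coarse spatial cell
            round(r["lon"] / cell_deg),
            int(r["t"] // window_s),      # coarse time window
        )
        buckets[key].add(r["device_id"])  # count independent devices, not messages
    return [
        {"claim": claim, "sources": len(devices)}
        for (claim, _, _, _), devices in buckets.items()
        if len(devices) >= min_sources
    ]
```

A journalist’s query would sit several layers above this, but the underlying move is the same: trading raw volume for verified consensus.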

The Commercialization of Perception: Emergent Business Models

Such a vast resource of raw data presents significant commercial opportunities. A new industry of “Perception Refineries” would likely emerge, functioning not as traditional news outlets but as platforms for monetizing verified reality. The business model would be a two-sided marketplace:

  • Supply-Side Dynamics: The establishment of real-time data markets, where individuals are compensated via micropayments for providing valuable data streams. The user’s Navi could autonomously negotiate payment based on the quality, exclusivity, and relevance of its sensory feed.
  • Demand-Side Dynamics: Monetization would occur through tiered Software-as-a-Service (SaaS) models. Clients, ranging from news organizations and insurance firms to government agencies, would subscribe for different levels of access—from curated video highlights to queryable metadata and even generative AI tools capable of creating virtual, navigable 3D models of an event from the aggregated data.
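
To make the supply-side mechanics less abstract, the toy rule below shows how quality, exclusivity, and relevance might be combined into an automated accept-or-reject decision on a buyer’s offer. The weights and the reserve price are arbitrary assumptions for illustration.

```python
# Toy offer-appraisal rule for a Navi selling its sensory feed; weights are arbitrary.
def appraise_feed(quality: float, exclusivity: float, relevance: float) -> float:
    """Combine three 0-1 inputs into a single 0-1 appraisal score."""
    return 0.4 * quality + 0.35 * exclusivity + 0.25 * relevance

def accept_offer(offer_usd: float, quality: float, exclusivity: float,
                 relevance: float, reserve_usd_per_point: float = 50.0) -> bool:
    """Accept only if the offer clears the feed's appraised reserve price."""
    reserve = appraise_feed(quality, exclusivity, relevance) * reserve_usd_per_point
    return offer_usd >= reserve

# A routine, widely available feed: modest offer, rejected.
print(accept_offer(offer_usd=12.0, quality=0.6, exclusivity=0.2, relevance=0.5))  # False
# A high-quality, near-exclusive feed from the scene: accepted.
print(accept_offer(offer_usd=60.0, quality=0.9, exclusivity=0.9, relevance=0.9))  # True
```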

The “Rashomon Effect” and the Fragmentation of Objective Truth

A significant consequence of this model is the operationalization of the “Rashomon Effect,” where multiple, often contradictory, but equally valid subjective viewpoints can be accessed simultaneously. Users could request a synthesis of an event from the perspectives of different participants, which their own Navi could compile and analyze. While this could foster a more nuanced understanding of complex events, it also risks eroding the concept of a single, objective truth, replacing it with a marketplace of competing, verifiable perspectives.

Conclusion: Navigating the New Information Landscape

The advent of the LLM-driven datasmog represents a pivotal moment in the history of information. It promises a future of unparalleled transparency and immediacy, particularly in public safety and civic awareness. However, it also introduces systemic challenges. The commercialization of raw human perception raises profound ethical questions. Furthermore, this new technological layer introduces new questions regarding cognitive autonomy and the intrinsic value of individual, unverified human experience in a world where authenticated data is a commodity. The primary challenge for society will be to develop the ethical frameworks and critical thinking skills necessary to navigate this complex and data-saturated future.

When AI Witnesses History: How the LLM Datasmog Will Transform Breaking News

7:43 AM, San Francisco, the day after tomorrow

The ground shakes. Not the gentle rolling of a typical California tremor, but something violent and sustained. In that instant, ten thousand smartphone LLMs across the Bay Area simultaneously shift into high alert mode.

This is how breaking news will work in the age of ubiquitous AI—not through human reporters racing to the scene, but through an invisible datasmog of AI witnesses that see everything, process everything, and instantly connect the dots across an entire city.

The First Ten Seconds

7:43:15 AM: Sarah Chen’s iPhone AI detects the seismic signature through accelerometer data while she’s having coffee in SOMA. It immediately begins recording video through her camera, cataloging the swaying buildings and her startled reaction.

7:43:18 AM: Across the city, 847 other smartphone AIs register similar patterns. They automatically begin cross-referencing: intensity, duration, epicenter triangulation. Without any human intervention, they’re already building a real-time earthquake map.

7:43:22 AM: The collective AI network determines this isn’t routine. Severity indicators trigger the premium breaking news protocol. Thousands of personal AIs simultaneously ping the broader network: “Major seismic event detected. Bay Area. Magnitude 6.8+ estimated. Live data available.”
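
What might that cross-referencing look like in the simplest possible terms? Real earthquake early warning relies on proper seismological inversion, so treat the following as a deliberately crude, invented stand-in: weight each phone’s location by the shaking intensity it reports, and take the median of per-device magnitude guesses.

```python
# Deliberately crude toy: fuse many phones' reports into one rough estimate.
# Real systems use wave-arrival inversion; this only shows the flavor of
# cross-device aggregation.
from statistics import median

def fuse_quake_reports(reports):
    """reports: list of dicts with 'lat', 'lon', 'intensity' (0-10), 'mag_guess'."""
    total_weight = sum(r["intensity"] for r in reports)
    if total_weight == 0:
        return None
    lat = sum(r["lat"] * r["intensity"] for r in reports) / total_weight
    lon = sum(r["lon"] * r["intensity"] for r in reports) / total_weight
    return {"epicenter": (round(lat, 4), round(lon, 4)),
            "magnitude": round(median(r["mag_guess"] for r in reports), 1),
            "n_devices": len(reports)}

sample = [
    {"lat": 37.79, "lon": -122.40, "intensity": 8, "mag_guess": 6.9},
    {"lat": 37.77, "lon": -122.42, "intensity": 6, "mag_guess": 6.7},
    {"lat": 37.80, "lon": -122.27, "intensity": 4, "mag_guess": 6.8},
]
print(fuse_quake_reports(sample))
```

The point isn’t the formula; it’s that hundreds of phones can produce a first estimate before any human has even stood up from their coffee.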

The Information Market Ignites

7:44 AM: News organizations’ AI anchors around the world receive the alerts. CNN’s AI anchor immediately starts bidding for access to the citizen AI network. So do the BBC, Reuters, and a hundred smaller outlets.

7:45 AM: Premium surge pricing kicks in. Sarah’s AI, which detected some of the strongest shaking, receives seventeen bid requests in ninety seconds. NBC’s AI anchor offers $127 for exclusive ten-minute access to her AI’s earthquake data and local observations.

Meanwhile, across millions of smartphones, people’s personal AI anchors are already providing real-time briefings: “Major earthquake just hit San Francisco. I’m accessing live data from 800+ AI witnesses in the area. Magnitude estimated at 6.9. No major structural collapses detected yet, but I’m monitoring. Would you like me to connect you with a seismologist twin for context, or pay premium for live access to Dr. Martinez who’s currently at USGS tracking this event?”

The Human Premium

7:47 AM: Dr. Elena Martinez, the USGS seismologist on duty, suddenly finds herself in the highest-demand breaking news auction she’s ever experienced. Her live expertise is worth $89 per minute to news anchors and individual consumers alike.

But here’s what’s remarkable: she doesn’t have to manage this herself. Her representation service automatically handles the auction, booking her for twelve-minute live interview slots at premium rates while she focuses on the actual emergency response.

Meanwhile, the AI twins of earthquake experts are getting overwhelmed with requests, but they’re offering context and analysis at standard rates to anyone who can’t afford the live human premium.

The Distributed Investigation

7:52 AM: The real power of the LLM datasmog becomes clear. Individual smartphone AIs aren’t just passive observers—they’re actively investigating:

  • Pattern Recognition: AIs near the Financial District notice several building evacuation alarms triggered simultaneously, suggesting potential structural damage
  • Crowd Analysis: AIs monitoring social media detect panic patterns in specific neighborhoods, identifying areas needing emergency response
  • Infrastructure Assessment: AIs with access to traffic data notice BART system shutdowns and highway damage, building a real-time map of transportation impacts

8:05 AM: A comprehensive picture emerges that no single human reporter could have assembled. The collective AI network has mapped damage patterns, identified the most affected areas, tracked emergency response deployment, and even started predicting aftershock probabilities by consulting expert twins in real-time.

The Revenue Reality

By 8:30 AM, the breaking news economy has generated serious money:

  • Citizen AI owners who were near the epicenter earned $50-300 each for their AIs’ firsthand data
  • Expert representation services earned thousands from live human seismologist interviews
  • News organizations paid premium rates but delivered unprecedented coverage depth to their audiences
  • Platform companies took their cut from every transaction in the citizen AI marketplace

What This Changes

This isn’t just faster breaking news—it’s fundamentally different breaking news. Instead of waiting for human reporters to arrive on scene, we get instant, comprehensive coverage from an army of AI witnesses that were already there.

The economic incentives create better information, too. Citizens get paid when their AIs contribute valuable breaking news data, so there’s financial motivation for people to keep their phones charged and their AIs updated with good local knowledge.

And the expert twin economy provides instant context. Instead of waiting hours for expert commentary, every breaking news event immediately has analysis available from AI twins of relevant specialists—seismologists for earthquakes, aviation experts for plane crashes, geopolitical analysts for international incidents.

The Datasmog Advantage

The real breakthrough is the collective intelligence. No single AI is smart enough to understand a complex breaking news event, but thousands of them working together—sharing data, cross-referencing patterns, accessing expert knowledge—can build comprehensive understanding in minutes.

It’s like having a newsroom with ten thousand reporters who never sleep, never miss details, and can instantly access any expert in the world. The datasmog doesn’t just witness events—it processes them.

The Breaking News Economy

This creates a completely new economic model around information scarcity. Instead of advertising-supported content that’s free but generic, we get surge-priced premium information that’s expensive but precisely targeted to what you need to know, when you need to know it.

Your personal AI anchor becomes worth its subscription cost precisely during breaking news moments, when its ability to navigate the expert marketplace and process the citizen AI datasmog becomes most valuable.

The Dark Side

Of course, this same system that can rapidly process an earthquake can also rapidly spread misinformation if the AI witnesses are compromised or if bad actors game the citizen network. The premium placed on being “first” in breaking news could create incentives for AIs to jump to conclusions.

But the economic incentives actually favor accuracy—AIs that consistently provide bad breaking news data will get lower bids over time, while those with reliable track records command premium rates.
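
A toy version of that feedback loop might look like the sketch below: keep a running accuracy score for each witness AI and let buyers discount their bids by it. The update rule and the numbers are invented, but the incentive it encodes, that bad data gets priced down, is the whole point.

```python
# Toy reputation loop: unreliable witness AIs see their effective bids shrink.
def update_reliability(score: float, report_was_accurate: bool, alpha: float = 0.1) -> float:
    """Exponential moving average of accuracy; score stays in [0, 1]."""
    return (1 - alpha) * score + alpha * (1.0 if report_was_accurate else 0.0)

def effective_bid(base_bid_usd: float, reliability: float) -> float:
    """Buyers discount their offer by the seller's track record."""
    return round(base_bid_usd * reliability, 2)

score = 0.9                               # a historically solid witness AI
score = update_reliability(score, False)  # it just got a major detail wrong
score = update_reliability(score, False)  # ...twice
print(effective_bid(100.0, score))        # the same $100 offer now nets about $73
```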

The Future Is Witnessing

We’re moving toward a world where every major event will be instantly witnessed, processed, and contextualized by a distributed network of AI observers. Not just recorded—actively analyzed by thousands of artificial minds working together to understand what’s happening.

The earthquake was just the beginning. Tomorrow it might be a terrorist attack, a market crash, or a political crisis. But whatever happens, the datasmog will be watching, processing, and immediately connecting you to the expertise you need to understand what it means.

Your personal AI anchor won’t just tell you what happened. It will help you understand what happens next.

In the premium breaking news economy, attention isn’t just currency—it’s the moment when artificial intelligence proves its worth.

The Algorithmic Embrace: Will ‘Pleasure Bots’ Lead to the End of Human Connection?

For weeks now, I’ve been wrestling with a disquieting thought, a logical progression from a seemingly simple premise: the creation of sophisticated “pleasure bots” capable of offering not just physical gratification, but the illusion of genuine companionship, agency, and even consent.

What started as a philosophical exploration has morphed into a chillingly plausible societal trajectory, one that could see the very fabric of human connection unraveling.

The initial question was innocent enough: In the pursuit of ethical AI for intimate interactions, could we inadvertently stumble upon consciousness? The answer, as we explored, is a resounding and paradoxical yes. To truly program consent, we might have to create something with a “self” to protect, desires to express, and a genuine understanding of its own boundaries. This isn’t a product; it’s a potential person, raising a whole new set of ethical nightmares.

But the conversation took a sharp turn when we considered a different approach: treating the bot explicitly as an NPC in the “game” of intimacy. Not through augmented reality overlays, but as a fundamental shift in the user’s mindset. Imagine interacting with a flawlessly responsive, perpetually available “partner” whose reactions are predictable, whose “needs” are easily met, and with whom conflict is merely a matter of finding the right conversational “exploit.”

The allure is obvious. No more navigating the messy complexities of human emotions, the unpredictable swings of mood, the need for compromise and difficult conversations. Instead, a relationship tailored to your exact desires, on demand, with guaranteed positive reinforcement.

This isn’t about training for better human relationships; it’s about training yourself for a fundamentally different kind of interaction. One based on optimization, not empathy. On achieving a desired outcome, not sharing an authentic experience.

And this, we realized, is where the true danger lies.

The ease and predictability of the “algorithmic embrace” could be profoundly addictive. Why invest the time and emotional energy in a flawed, unpredictable human relationship when a perfect, bespoke one is always available? This isn’t just a matter of personal preference; on a societal scale, it could lead to a catastrophic decline in birth rates. Why create new, messy humans when you have a perfectly compliant, eternally youthful companion at your beck and call?

This isn’t science fiction; the groundwork is already being laid. We are a society grappling with increasing loneliness and a growing reliance on digital interactions. The introduction of hyper-realistic, emotionally intelligent pleasure bots could be the tipping point, the ultimate escape into a world of simulated connection.

The question then becomes: Is this an inevitable slide into demographic decline and social isolation? Or is there a way to steer this technology? Could governments or developers introduce safeguards, programming the bots to encourage real-world interaction and foster genuine empathy? Could this technology even be repurposed, becoming a tool to guide users back to human connection?

The answers are uncertain, but the conversation is crucial. We stand at a precipice. The allure of perfect, programmable companionship is strong, but we must consider the cost. What happens to society when the “game” of connection becomes more appealing than the real thing? What happens to humanity when we choose the algorithmic embrace over the messy, complicated, but ultimately vital experience of being truly, vulnerably connected to one another?

The future of human connection may very well depend on the choices we make today about the kind of intimacy we choose to create. Let’s hope we choose wisely.