Getting Back to What Matters: A Return to Serious Writing

After months of disruption, I can see calmer waters ahead. The turbulence that has defined recent weeks is finally settling, and I’m preparing to dive back into fiction writing with renewed commitment.

There’s an interesting contradiction in my current approach to writing. While I maintain strict boundaries around AI assistance for my novels—refusing to let algorithms touch the creative heart of my work—I’m comfortable using these tools for more utilitarian tasks like polishing blog posts. The distinction feels important: one represents my authentic voice as a storyteller, the other serves as a practical writing aid.

Two projects anchor my creative focus right now. The first is what I’ve come to think of as my “secret shame”—a mystery novel that has followed me through multiple years of starts, stops, and revisions. The second represents newer territory: a science fiction concept that genuinely excites me and feels commercially viable.

The timeline ahead is both promising and sobering. Within days, I plan to commit fully to fiction again. The mathematics of publishing success weigh on me: if everything goes perfectly—if I finish strong, query effectively, and find representation quickly—I’ll still likely be approaching sixty when my first book reaches readers. That reality adds urgency to every writing session.

This sense of urgency has crystallized into something sharper: a growing “put up or shut up” moment in my creative life. I’ve carried the identity of “unpublished author” for too long. Every mention of my novel-in-progress feels hollow without tangible progress to show for it. The weight of that incompleteness is becoming harder to bear.

Part of what has slowed my progress is scope creep. What began as a single novel has evolved into a series concept, adding layers of complexity that, while exciting, have scattered my focus across too many narrative threads.

The path forward requires radical simplification: finish one complete, polished novel. Query it. Repeat.

Focus isn’t just what I need—it’s what I owe to the stories that have been waiting patiently for me to tell them properly.

The AI Community’s Perpetual Cycle of Anticipation and Disappointment

The artificial intelligence community finds itself trapped in a peculiar emotional rhythm—one characterized by relentless anticipation, brief euphoria, and inevitable disillusionment. This cycle reveals deeper tensions about our collective expectations for AI progress and highlights the need for more nuanced perspectives on what lies ahead.

The Hype-Crash Pattern

Observers of AI discourse will recognize a familiar pattern: fervent speculation about upcoming model releases, followed by momentary celebration when new capabilities emerge, then swift descent into disappointment when these advances fail to deliver artificial general intelligence or superintelligence immediately. This emotional rollercoaster suggests that many community members have developed unrealistic timelines for transformative AI breakthroughs.

The brief “we’re so back” moments that follow major releases—whether it’s a new language model, breakthrough in reasoning, or novel multimodal capability—quickly give way to renewed complaints about the absence of AGI. This pattern indicates an all-or-nothing mentality that may be counterproductive to understanding genuine progress in the field.

Philosophical Polarization

Perhaps more striking than the hype cycles is the fundamental disagreement within the AI community about the trajectory of progress itself. The discourse has largely crystallized around two opposing camps: those convinced that scaling limitations will soon create insurmountable barriers to further advancement, and those who dismiss the possibility of machine consciousness entirely.

This polarization obscures more nuanced positions and creates false dichotomies. The debate often lacks acknowledgment that both rapid progress and significant challenges can coexist, or that consciousness and intelligence might emerge through pathways we haven’t yet anticipated.

The Case for AI Realism

These dynamics point toward the need for what might be called a “realist” approach to AI development—one that occupies the middle ground between uncritical acceleration and paralyzing caution. Such a perspective would acknowledge several key principles:

First, that current trends in AI capability suggest continued advancement is probable, making sudden plateaus less likely than gradual but persistent progress. Second, that machine consciousness, while not guaranteed, represents a plausible outcome of sufficiently sophisticated information processing systems.

A realist framework would neither dismiss safety concerns nor assume that current approaches are fundamentally flawed. Instead, it would focus on preparing for likely scenarios while remaining adaptable to unexpected developments. This stands in contrast to both the alignment movement’s emphasis on existential risk and the accelerationist movement’s faith in purely beneficial outcomes.

Embracing Uncertainty

Ultimately, the most honest assessment of our current situation acknowledges profound uncertainty about the specifics of AI’s future trajectory. While we can identify probable trends and prepare for various scenarios, the precise timeline, mechanisms, and consequences of advanced AI development remain largely unknown.

Rather than cycling between premature celebration and disappointment, the AI community might benefit from developing greater comfort with this uncertainty. Progress in artificial intelligence is likely to be neither as straightforward as optimists hope nor as limited as pessimists fear, but something more complex and surprising than either camp currently anticipates.

The path forward requires intellectual humility, careful observation of empirical evidence, and preparation for multiple possible futures—not the emotional extremes that currently dominate so much of the discourse.

Beyond Control and Chaos: Is ‘AI Realism’ the Third Way We Need for Superintelligence?

The discourse around Artificial Intelligence, particularly the advent of Artificial Superintelligence (ASI), often feels like a binary choice: either we’re rocketing towards a techno-utopian paradise fueled by benevolent AI, or we’re meticulously engineering our own obsolescence at the hands of uncontrollable super-minds. Team Acceleration pushes the pedal to the metal, while Team Alignment desperately tries to install the brakes and steering wheel, often on a vehicle that’s still being designed at light speed.

But what if there’s a third path? A perspective that steps back from the poles of unbridled optimism and existential dread, and instead, plants its feet firmly in a more challenging, perhaps uncomfortable, middle ground? Enter “AI Realism,” a school of thought recently explored in a fascinating exchange, urging us to confront some hard “inevitabilities.”

The Core Tenets of AI Realism: No More Hiding Under the Covers

AI Realism, as it’s been sketched out, isn’t about easy answers. It begins by asking us to accept a few potentially unsettling premises:

  1. ASI is Inevitable: Like it or not, the march towards superintelligence isn’t just likely, it’s a done deal. The momentum is too great, the incentives too powerful.
  2. Cognizance is Inevitable: This isn’t just about smarter machines; it’s about machines that will, at some point, possess genuine cognizance – self-awareness, subjective experience, a mind of their own.
  3. Perfect Alignment is a Pipe Dream: Here’s the kicker. The reason perfectly aligning ASI with human values is considered impossible by Realists? Because humans aren’t aligned with each other. We’re a glorious, chaotic tapestry of conflicting desires, beliefs, and priorities. How can we instill a perfect moral compass in an ASI when ours is so often… let’s say, ‘creatively interpreted’?

The Realist’s mantra, then, isn’t to prevent or perfectly control, but to accept these truths and act accordingly.

The Real Alignment Problem? Surprise, It’s Us.

This is where AI Realism throws a particularly sharp elbow. The biggest hurdle in AI alignment might not be the technical challenge of encoding values into silicon, but the deeply human, socio-political mess we inhabit. Imagine a “perfectly aligned USA ASI.” Sounds good if you’re in the US, perhaps. But as proponents of Realism point out, China or other global powers might see this “aligned” ASI as fundamentally unaligned with their interests, potentially even as an existential threat. “Alignment,” in this light, risks becoming just another battlefield in our ongoing global rivalries.

We’re like a committee of squabbling, contradictory individuals trying to program a supreme being, each shouting different instructions.

The Realist’s Wager: A Pre-ASI “Concordance”

So, if ASI is coming, will be cognizant, and can’t be perfectly aligned to our fractured human will, what’s the Realist’s move? One audacious proposal that has emerged is the idea of a “Concordance”:

  • A Preemptive Pact: An agreement, a set of foundational principles, to be hammered out by humanity before ASI fully arrives.
  • Seeded in AGI: This Concordance would ideally be programmed, or deeply instilled, into Artificial General Intelligence (AGI) systems, the precursors to ASI. The hope is that these principles would persist and guide the ASI as it self-develops.

Think of it as a global constitutional convention for a new form of intelligence, an attempt to give our future super-cognizant partners (or overlords?) a foundational document, a “read this before you become a god” manual.

The Thorny Path: Can Such a Concordance Ever Work?

Noble as this sounds, AI Realism doesn’t shy away from the colossal difficulties:

  • Drafting Committee from Babel: Who writes this Concordance? Scientists? Ethicists? Philosophers? Governments that can barely agree on trade tariffs? What universal principles could survive such a committee and still be meaningful?
  • The Binding Question: Why would a truly cognizant, superintelligent ASI feel bound by a document drafted by beings it might perceive as charmingly primitive? If it can rewrite its own code, can’t it just edit this Concordance, or reinterpret it into oblivion? Is it a set of unbreakable laws, or merely strong suggestions our future digital offspring might politely ignore?
  • Beyond Human Comprehension: An ASI’s understanding of the universe and its own role might so transcend ours that our best-laid ethical plans look like a child’s crayon drawings.

The challenge isn’t just getting humans to agree (a monumental task), but designing something that a vastly superior, self-aware intellect might genuinely choose to respect, or at least consider.

So, Where Does AI Realism Lead Us?

AI Realism, then, isn’t about surrendering to fate. It’s about staring it unflinchingly in the face and asking incredibly hard questions. It challenges the easy narratives of both salvation and doom. It suggests that if we’re to navigate the emergence of ASI and its inherent cognizance, we need to get brutally honest about our own limitations and the nature of the intelligence we’re striving to create.

It’s a call for a different kind of preparation – less about building perfect cages or expecting perfect servants, and more about contemplating the terms of coexistence with something that will likely be powerful, aware, and not entirely of our making, even if it springs from our code.

The path of AI Realism is fraught with daunting philosophical and practical challenges. But in a world hurtling towards an uncertain AI future, perhaps this “third way”—one that accepts inevitabilities and still strives for responsible, pre-emptive action—is exactly the kind of uncomfortable, necessary conversation we need to be having.