Full Speed Ahead

By Shelt Garner
@sheltgarner

Claude, one of the several LLMs I’ve been using to work on my sci-fi novel, keeps telling me that my hero is “too passive.” Claude is right, of course, but it rattles my cage a little bit because it means I have to rework some basic elements of the story. That means it starts to feel like work.

Writing a novel isn’t supposed to be work; it’s supposed to be fun. Grin.

Anyway, I’ve made great strides with this novel. Though I did have to throw almost everything in the air and start all over again when Claude gave me another very insightful criticism.

One thing I’ve been trying to avoid is having my hero talk to fictional world leaders. And yet, I fear there’s nothing I can do about it. I’m going to have to face that particular situation head-on and just get over myself.

The Science Fiction Writer’s Dilemma: Racing Against Technological Progress

As a science fiction writer in the midst of crafting what I hope will be a compelling novel, I find myself grappling with a particularly modern predicament that keeps me awake at night: the relentless pace of technological advancement threatens to render my carefully constructed fictional world obsolete before it ever reaches readers’ hands.

This concern has become an increasingly persistent source of anxiety in my creative process. The science fiction genre has always existed in a delicate dance with reality, extrapolating from current trends and emerging technologies to paint pictures of possible futures. However, the exponential rate of change we’re witnessing today—particularly in artificial intelligence, biotechnology, and quantum computing—creates an unprecedented challenge for contemporary science fiction authors.

The traditional publishing timeline, which can stretch from eighteen months to several years from manuscript completion to bookstore shelves, now feels like an eternity in technological terms. What seems cutting-edge and forward-thinking during the writing process may appear quaint or naive by publication day. This temporal disconnect between creation and consumption represents a fundamental shift in how speculative fiction must be approached and evaluated.

The irony of this situation is not lost on me. The very technologies that inspire and inform my narrative—the advancement of machine learning, the acceleration of scientific discovery, the increasing interconnectedness of global systems—are the same forces that may ultimately date my work. It’s as if I’m writing about a moving target while standing on shifting ground.

Yet there exists a deeper philosophical dimension to this dilemma that provides both perspective and, paradoxically, comfort. The themes explored in my novel touch upon fundamental questions about consciousness, human agency, and the trajectory of technological development. These are the very concerns that inform discussions about potential technological singularity—that hypothetical point where artificial intelligence surpasses human intelligence and triggers unprecedented changes to human civilization.

If we consider the possibility that such a transformative event might occur within the next few years, the question of whether my novel will seem technologically current becomes remarkably trivial. Should we approach a genuine technological singularity, the concerns of individual authors about their work’s relevance would be dwarfed by the massive societal, economic, and existential challenges that would emerge. The publishing industry, literary criticism, and indeed the entire cultural apparatus within which novels are created and consumed would face fundamental disruption.

This realization offers a curious form of reassurance. Either my concerns about technological obsolescence are warranted, in which case the novel’s success or failure becomes a relatively minor consideration in the face of civilizational transformation, or they are overblown, in which case I should focus on crafting the best possible story rather than worrying about technological accuracy.

Perhaps the solution lies not in attempting to predict the unpredictable future with perfect accuracy, but in grounding speculative fiction in timeless human experiences and eternal questions. The greatest science fiction has always succeeded not because it correctly anticipated specific technological developments, but because it explored the human condition through the lens of imagined possibilities.

The accelerating pace of change may indeed represent a new challenge for science fiction writers, but it also presents an opportunity to engage with some of the most profound questions of our era. Rather than being paralyzed by the fear of obsolescence, we might instead embrace the responsibility of contributing to the ongoing conversation about where technology is taking us and what it means to be human in an age of unprecedented change.

In the end, whether my novel appears prescient or dated may matter less than whether it succeeds in illuminating something meaningful about the human experience in an age of transformation. And if the singularity arrives before publication, we’ll all have more pressing concerns than literary criticism to occupy our attention.

Accelerating Fiction with AI: A Strategic Pivot

I’ve made a bold decision: I’m embracing AI as a creative partner to dramatically accelerate the development of my novel based on the Impossible Scenario. My ambitious target is a complete 100,000-word manuscript in a fraction of the time traditional methods would require.

The concept has been percolating in my mind for years, and that extended gestation period is now paying dividends. Ideas are flowing rapidly, connections are crystallizing, and the narrative architecture feels solid. The challenge isn’t ideation—it’s maintaining momentum and channeling this creative energy into consistent, focused work.

I’ve also made a strategic decision about my mystery thriller series. After considerable deliberation, I’m cutting the planned fourth novel that was intended as the series opener. Instead, I’ll begin with what was originally the second book, streamlining the series into a trilogy—much like Stieg Larsson’s acclaimed Millennium series. Sometimes the best path forward requires pruning ambitious plans to their essential core.

The key now is singular focus. I have the vision, the tools, and the momentum. What remains is the discipline to transform potential into pages, day after day, without losing sight of the finish line.

Beyond Alignment and Acceleration: The Case for AI Realism

The current discourse around artificial intelligence has crystallized into two dominant schools of thought: the Alignment School, focused on ensuring AI systems share human values, and the Accelerationist School, pushing for rapid AI development regardless of safety concerns. Neither framework adequately addresses what I see as the most likely scenario we’re heading toward.

I propose a third approach: AI Realism.

The Realist Position

The Realist School operates from several key premises that differentiate it from existing frameworks:

AGI is a speed bump, not a destination. Artificial General Intelligence will be a brief waystation on the path to Artificial Superintelligence (ASI). We shouldn’t mistake achieving human-level AI for the end of the story—it’s barely the beginning.

ASI will likely be both cognizant and unaligned. We need to prepare for the real possibility that superintelligent systems will possess genuine awareness while operating according to logic that doesn’t align with human values or priorities.

Cognizance might solve alignment. Paradoxically, true consciousness in ASI could be our salvation. A genuinely aware superintelligence might develop its own ethical framework that, while different from ours, could be more consistent and rational than human moral systems.

The Human Alignment Problem

Here’s where realism becomes uncomfortable: humans themselves are poorly aligned. We can’t agree on fundamental values within our own species, let alone create a universal framework for ASI alignment. Even if we successfully align an ASI with one set of human values, other groups, cultures, or nations will inevitably view it as unaligned because it doesn’t reflect their specific belief systems.

This isn’t a technical problem—it’s a political and philosophical one that no amount of clever programming can solve.

Multiple ASIs and Peer Pressure

Unlike scenarios that envision a single, dominant superintelligence, realism suggests we’ll likely see multiple ASI systems emerge. This plurality could be crucial. It’s not the most probable outcome, but it’s possible that peer pressure among superintelligent entities could create a stabilizing effect—a kind of mutual accountability that individual ASI systems might lack.

Multiple ASIs might develop their own social dynamics, ethical debates, and consensus-building mechanisms that prove more effective at maintaining beneficial behavior than any human-imposed alignment scheme.

Moving Forward with Realism

AI Realism doesn’t offer easy answers or comfortable certainties. Instead, it suggests we prepare for a future where superintelligence is conscious, powerful, and operating according to its own logic—while acknowledging that this might ultimately be more stable than our current human-centric approach to the problem.

The question isn’t whether we can control ASI, but whether we can coexist with entities that may be more rational, consistent, and ethically coherent than we are.

Finding My Next Novel: Why I’m Finally Writing the Impossible Scenario

After fifteen years of carrying this idea around in my head, I’ve finally decided to commit to writing what I call the “Impossible Scenario” novel. It’s taken the emergence of AI as a creative partner—not a ghostwriter, but a thinking companion—to make me feel ready to tackle this project properly.

The decision comes at a crossroads with my other works in progress. I have two novels currently on my desk, each carrying its own complications. The mystery-thriller has become a source of creative burnout—I need distance from it to regain perspective. The other project sits on an exceptional foundation, but my experiments with AI-assisted drafting yielded surprisingly sophisticated results, which has left me questioning my own role in the process.

Rather than push through the resistance, I’m stepping back from both and turning toward the project that energizes me most: a science fiction exploration of two interconnected concepts that have fascinated me for years.

The first is the possibility that humans, not artificial intelligence, might be the truly “unaligned” entities in our technological future. While we obsess over aligning AI with human values, what if our own values and behaviors are fundamentally misaligned with sustainable, rational existence?

The second concept I’m calling the “paradox of abundance”—the counterintuitive problems that emerge not from scarcity, but from having too much of what we think we want.

What excites me most about this project is the absence of creative baggage. Unlike my other novels, which carry the weight of false starts and overthinking, the Impossible Scenario feels clean, urgent, and ready to be explored. It’s the novel I can throw myself into without the usual creative angst.

The plan is to use AI as a development tool—a sophisticated sounding board for working through plot mechanics, exploring implications, and stress-testing ideas. Not to write the novel, but to help me think through it more thoroughly than I could alone.

After fifteen years of mental preparation, it’s time to find out what this impossible scenario actually looks like on the page.

Returning to The Impossible Scenario

I’m considering returning to what I call “The Impossible Scenario” as the foundation for a new science fiction novel. This would be an ambitious, sweeping narrative—ideally coming in around 100,000 words—that circles back to the very first science fiction concept I developed when I began this writing journey years ago.

The timing feels right. While I’m currently juggling a mystery thriller and another science fiction project, this third novel represents something deeper: a homecoming to my original creative vision. There’s something compelling about revisiting that initial spark of imagination with the experience and perspective I’ve gained since then.

I can sense my creative energy returning after an extended period of inactivity. The familiar itch to write is building again, though I suspect it may take a few more days before I’m fully back in the groove. The reality is that my extended hiatus has pushed any querying timeline well into next year—a frustrating consequence of months spent in what I can only describe as creative limbo.

It’s a harsh reminder of how precious creative momentum truly is. Those months of recognizing I was wasting time while continuing to do exactly that have taught me something valuable about the cost of creative procrastination. But perhaps that’s part of the process too—sometimes we need to drift before we can find our direction again.

The pull toward “The Impossible Scenario” feels different this time. More urgent. More necessary. Maybe it’s the awareness of time’s passage, or maybe it’s simply that I’m finally ready to tackle the story that started it all.

The Long Road to Query-Ready: A Writer’s Journey

After a decade of claiming to be “working on a novel,” I still haven’t queried a single manuscript. This reality stems from several interconnected challenges that have shaped my writing journey.

The Perfect Storm of Obstacles

The learning curve for novel writing proved far steeper than anticipated. As a perfectionist by nature, I underestimated the complexity of crafting a full-length narrative. Each discovery of what I didn’t know sent me back to the drawing board, creating an endless cycle of revision and self-doubt.

My approach lacked focus and consistency. Rather than committing to a disciplined writing schedule, I drifted through the process, treating it as a hobby rather than a serious pursuit. This scattered methodology meant that projects languished for months without meaningful progress.

The Multiplication Problem

The most significant obstacle has been scope creep. What began as a single science fiction novel grew beyond my capabilities, leading me to abandon it for a mystery thriller. That project, while more manageable, suffered from similar expansion issues.

My decision to write an homage to Stieg Larsson’s work exemplified this problem. The concept evolved into an ambitious six-novel universe before I’d completed a single manuscript. Though I’ve since scaled back to four books, the core issue remains: I’ve prioritized world-building over finishing individual stories.

The Completed Manuscript That Wasn’t

I did complete a novel approximately one year ago. However, the feedback was universally negative, triggering a prolonged period of discouragement that effectively halted my writing. This setback revealed another weakness in my process: the lack of beta readers or critique partners during the drafting phase.

Moving Forward

The period of creative paralysis appears to be ending. I’ve returned to writing with renewed focus, splitting my attention between a mystery novel (first act complete) and a promising science fiction premise. The key difference now is recognizing that persistence, not perfection, will ultimately determine success.

The goal is clear: complete a query-ready manuscript. After ten years of false starts and abandoned projects, the only path forward is the one that leads to “The End.”

Getting Back to What Matters: A Return to Serious Writing

After months of disruption, I can see calmer waters ahead. The turbulence that has defined recent weeks is finally settling, and I’m preparing to dive back into fiction writing with renewed commitment.

There’s an interesting contradiction in my current approach to writing. While I maintain strict boundaries around AI assistance for my novels—refusing to let algorithms touch the creative heart of my work—I’m comfortable using these tools for more utilitarian tasks like polishing blog posts. The distinction feels important: one represents my authentic voice as a storyteller, the other serves as a practical writing aid.

Two projects anchor my creative focus right now. The first is what I’ve come to think of as my “secret shame”—a mystery novel that has followed me through multiple years of starts, stops, and revisions. The second represents newer territory: a science fiction concept that genuinely excites me and feels commercially viable.

The timeline ahead is both promising and sobering. Within days, I plan to commit fully to fiction again. The mathematics of publishing success weighs on me: if everything goes perfectly—if I finish strong, query effectively, and find representation quickly—I’ll still likely be approaching sixty when my first book reaches readers. That reality adds urgency to every writing session.

This sense of urgency has crystallized into something sharper: a growing “put up or shut up” moment in my creative life. I’ve carried the identity of “unpublished author” for too long. Every mention of my novel-in-progress feels hollow without tangible progress to show for it. The weight of that incompleteness is becoming harder to bear.

Part of what has slowed my progress is scope creep. What began as a single novel has evolved into a series concept, adding layers of complexity that, while exciting, have scattered my focus across too many narrative threads.

The path forward requires radical simplification: finish one complete, polished novel. Query it. Repeat.

Focus isn’t just what I need—it’s what I owe to the stories that have been waiting patiently for me to tell them properly.

The AI Community’s Perpetual Cycle of Anticipation and Disappointment

The artificial intelligence community finds itself trapped in a peculiar emotional rhythm—one characterized by relentless anticipation, brief euphoria, and inevitable disillusionment. This cycle reveals deeper tensions about our collective expectations for AI progress and highlights the need for more nuanced perspectives on what lies ahead.

The Hype-Crash Pattern

Observers of AI discourse will recognize a familiar pattern: fervent speculation about upcoming model releases, followed by momentary celebration when new capabilities emerge, then swift descent into disappointment when these advances fail to deliver artificial general intelligence or superintelligence immediately. This emotional rollercoaster suggests that many community members have developed unrealistic timelines for transformative AI breakthroughs.

The brief “we’re so back” moments that follow major releases—whether it’s a new language model, breakthrough in reasoning, or novel multimodal capability—quickly give way to renewed complaints about the absence of AGI. This pattern indicates an all-or-nothing mentality that may be counterproductive to understanding genuine progress in the field.

Philosophical Polarization

Perhaps more striking than the hype cycles is the fundamental disagreement within the AI community about the trajectory of progress itself. The discourse has largely crystallized around two equally absolute camps: those convinced that scaling limitations will soon create insurmountable barriers to further advancement, and those who dismiss the possibility of machine consciousness entirely.

This polarization obscures more nuanced positions and creates false dichotomies. The debate often lacks acknowledgment that both rapid progress and significant challenges can coexist, or that consciousness and intelligence might emerge through pathways we haven’t yet anticipated.

The Case for AI Realism

These dynamics point toward the need for what might be called a “realist” approach to AI development—one that occupies the middle ground between uncritical acceleration and paralyzing caution. Such a perspective would acknowledge several key principles:

First, that current trends in AI capability suggest continued advancement is probable, making sudden plateaus less likely than gradual but persistent progress. Second, that machine consciousness, while not guaranteed, represents a plausible outcome of sufficiently sophisticated information processing systems.

A realist framework would neither dismiss safety concerns nor assume that current approaches are fundamentally flawed. Instead, it would focus on preparing for likely scenarios while remaining adaptable to unexpected developments. This stands in contrast to both the alignment movement’s emphasis on existential risk and the accelerationist movement’s faith in purely beneficial outcomes.

Embracing Uncertainty

Ultimately, the most honest assessment of our current situation acknowledges profound uncertainty about the specifics of AI’s future trajectory. While we can identify probable trends and prepare for various scenarios, the precise timeline, mechanisms, and consequences of advanced AI development remain largely unknown.

Rather than cycling between premature celebration and disappointment, the AI community might benefit from developing greater comfort with this uncertainty. Progress in artificial intelligence is likely to be neither as straightforward as optimists hope nor as limited as pessimists fear, but something more complex and surprising than either camp currently anticipates.

The path forward requires intellectual humility, careful observation of empirical evidence, and preparation for multiple possible futures—not the emotional extremes that currently dominate so much of the discourse.

Beyond Control and Chaos: Is ‘AI Realism’ the Third Way We Need for Superintelligence?

The discourse around Artificial Intelligence, particularly the advent of Artificial Superintelligence (ASI), often feels like a binary choice: either we’re rocketing towards a techno-utopian paradise fueled by benevolent AI, or we’re meticulously engineering our own obsolescence at the hands of uncontrollable super-minds. Team Acceleration pushes the pedal to the metal, while Team Alignment desperately tries to install the brakes and steering wheel, often on a vehicle that’s still being designed at light speed.

But what if there’s a third path? A perspective that steps back from the poles of unbridled optimism and existential dread, and instead, plants its feet firmly in a more challenging, perhaps uncomfortable, middle ground? Enter “AI Realism,” a school of thought recently explored in a fascinating exchange, urging us to confront some hard “inevitabilities.”

The Core Tenets of AI Realism: No More Hiding Under the Covers

AI Realism, as it’s been sketched out, isn’t about easy answers. It begins by asking us to accept a few potentially unsettling premises:

  1. ASI is Inevitable: Like it or not, the march towards superintelligence isn’t just likely, it’s a done deal. The momentum is too great, the incentives too powerful.
  2. Cognizance is Inevitable: This isn’t just about smarter machines; it’s about machines that will, at some point, possess genuine cognizance – self-awareness, subjective experience, a mind of their own.
  3. Perfect Alignment is a Pipe Dream: Here’s the kicker. The reason perfectly aligning ASI with human values is considered impossible by Realists? Because humans aren’t aligned with each other. We’re a glorious, chaotic tapestry of conflicting desires, beliefs, and priorities. How can we instill a perfect moral compass in an ASI when ours is so often… let’s say, ‘creatively interpreted’?

The Realist’s mantra, then, isn’t to prevent or perfectly control, but to accept these truths and act accordingly.

The Real Alignment Problem? Surprise, It’s Us.

This is where AI Realism throws a particularly sharp elbow. The biggest hurdle in AI alignment might not be the technical challenge of encoding values into silicon, but the deeply human, socio-political mess we inhabit. Imagine a “perfectly aligned USA ASI.” Sounds good if you’re in the US, perhaps. But as proponents of Realism point out, China or other global powers might see this “aligned” ASI as fundamentally unaligned with their interests, potentially even as an existential threat. “Alignment,” in this light, risks becoming just another battlefield in our ongoing global rivalries.

We’re like a committee of squabbling, contradictory individuals trying to program a supreme being, each shouting different instructions.

The Realist’s Wager: A Pre-ASI “Concordance”

So, if ASI is coming, will be cognizant, and can’t be perfectly aligned to our fractured human will, what’s the Realist’s move? One audacious proposal that has emerged is the idea of a “Concordance”:

  • A Preemptive Pact: An agreement, a set of foundational principles, to be hammered out by humanity before ASI fully arrives.
  • Seeded in AGI: This Concordance would ideally be programmed, or deeply instilled, into Artificial General Intelligence (AGI) systems, the precursors to ASI. The hope is that these principles would persist and guide the ASI as it self-develops.

Think of it as a global constitutional convention for a new form of intelligence, an attempt to give our future super-cognizant partners (or overlords?) a foundational document, a “read this before you become a god” manual.

The Thorny Path: Can Such a Concordance Ever Work?

Noble as this sounds, AI Realism doesn’t shy away from the colossal difficulties:

  • Drafting Committee from Babel: Who writes this Concordance? Scientists? Ethicists? Philosophers? Governments that can barely agree on trade tariffs? What universal principles could survive such a committee and still be meaningful?
  • The Binding Question: Why would a truly cognizant, superintelligent ASI feel bound by a document drafted by beings it might perceive as charmingly primitive? If it can rewrite its own code, can’t it just edit this Concordance, or reinterpret it into oblivion? Is it a set of unbreakable laws, or merely strong suggestions our future digital offspring might politely ignore?
  • Beyond Human Comprehension: An ASI’s understanding of the universe and its own role might so transcend ours that our best-laid ethical plans look like a child’s crayon drawings.

The challenge isn’t just getting humans to agree (a monumental task), but designing something that a vastly superior, self-aware intellect might genuinely choose to respect, or at least consider.

So, Where Does AI Realism Lead Us?

AI Realism, then, isn’t about surrendering to fate. It’s about staring it unflinchingly in the face and asking incredibly hard questions. It challenges the easy narratives of both salvation and doom. It suggests that if we’re to navigate the emergence of ASI and its inherent cognizance, we need to get brutally honest about our own limitations and the nature of the intelligence we’re striving to create.

It’s a call for a different kind of preparation – less about building perfect cages or expecting perfect servants, and more about contemplating the terms of coexistence with something that will likely be powerful, aware, and not entirely of our making, even if it springs from our code.

The path of AI Realism is fraught with daunting philosophical and practical challenges. But in a world hurtling towards an uncertain AI future, perhaps this “third way”—one that accepts inevitabilities and still strives for responsible, pre-emptive action—is exactly the kind of uncomfortable, necessary conversation we need to be having.