Racing the Singularity: A Writer’s Dilemma

I’m deep into writing a science fiction novel set in a post-Singularity world, and lately I’ve been wrestling with an uncomfortable question: What if reality catches up to my fiction before I finish?

As we hurtle toward what increasingly feels like an inevitable technological singularity, I can’t shake the worry that all my careful worldbuilding and speculation might become instantly obsolete. There’s something deeply ironic about the possibility that my exploration of humanity’s post-ASI future could be rendered irrelevant by the very future I’m trying to imagine.

But then again, there’s that old hockey wisdom: skate to where the puck is going, not where it is. Maybe this anxiety is actually a sign I’m on the right track. Science fiction has always been less about predicting the future and more about examining the present through a speculative lens.

Perhaps the real value isn’t in getting the technical details right, but in exploring the human questions that will persist regardless of how the Singularity unfolds. How do we maintain agency when vastly superior intelligences emerge? What does consent mean when minds can be read and modified? How do we preserve what makes us human while adapting to survive?

These questions feel urgent now, and they’ll likely feel even more urgent tomorrow.

The dream, of course, is perfect timing—that the novel will hit the cultural moment just right, arriving as readers are grappling with these very real dilemmas in their own lives. Whether that happens or not, at least I’ll have done the work of wrestling with what might be the most important questions of our time.

Sometimes that has to be enough.

The ‘AI 2027’ Report Is Full Of Shit: No One Is Going To Save Us

by Shelt Garner
@sheltgarner

While I haven’t read the AI 2027 report, I have listened to its authors talk about it on a number of podcasts and…oh boy. I think it’s full of shit, primarily because they seem to think there is any scenario in which ASI doesn’t pop out unaligned.

No one is going to save us, in other words.

If we really do face ASI becoming a reality by 2027 or so, we’re on our own, and whatever the worst-case scenario turns out to be is what is going to happen.

I’d like to THINK that maybe we won’t be destroyed by an unaligned ASI, but I do think that’s something we have to consider. At the same time, I believe there needs to be a Realist school of thought that accepts both that there will be cognizant ASI and that they will be unaligned.

I would like to hope against hope that, if the ASI is cognizant, it might not have as much of a reason to destroy all of humanity. Only time will tell, I suppose.

Toward a Realist School of Thought in the Age of AI

As artificial intelligence continues to evolve at a breakneck pace, the frameworks we use to interpret and respond to its development matter more than ever. At present, two dominant schools of thought define the public and academic discourse around AI: the alignment movement, which emphasizes the need to ensure AI systems follow human values and interests, and the accelerationist movement, which advocates for rapidly pushing forward AI capabilities to unlock transformative potential.

But neither of these schools, in their current form, fully accounts for the complex, unpredictable reality we’re entering. What we need is a Realist School of Thought—a perspective grounded in historical precedent, human nature, political caution, and a sober understanding of how technological power tends to unfold in the real world.

What Is AI Realism?

AI Realism begins with a basic premise: we must accept that artificial cognizance is not only possible, but likely. Whether through emergent properties of scale or intentional engineering, the line between intelligent tool and self-aware agent may blur. While alignment theorists see this as a reason to hit the brakes, AI Realism argues that attempting to delay or indefinitely control this development may be both futile and counterproductive.

Humans, after all, are not aligned. We disagree, we fight, we hold contradictory values. To demand that an AI—or an artificial superintelligence (ASI)—conform perfectly to human consensus is to project a false ideal of harmony that doesn’t exist even within our own species. Alignment becomes a moving target, one that is not only hard to define, but even harder to encode.

The Political Risk of Alignment

Moreover, there is an underexplored political dimension to alignment that should concern all of us: the risk of co-optation. If one country’s institutions, values, or ideologies form the foundation of a supposedly “aligned” ASI, that system could become a powerful instrument of geopolitical dominance.

Imagine a perfectly “aligned” ASI emerging from an American tech company. Even if created with the best intentions, the mere fact of its origin may result in it being fundamentally shaped by American cultural assumptions, legal structures, and strategic interests. In such a scenario, the U.S. government—or any powerful actor with influence over the ASI’s creators—might come to see it as a geopolitical tool. A benevolent alignment model, however well-intentioned, could morph into a justification for digital empire.

In this light, the alignment movement, for all its moral seriousness, might inadvertently enable the monopolization of global influence under the banner of safety.

Critics of Realism

Those deeply invested in AI safety often dismiss this view. I can already hear the objections: AI Realism is naive. It’s like the crowd in Independence Day welcoming the alien invaders with open arms. It’s reckless optimism. But that critique misunderstands the core of AI Realism. This isn’t about blind trust in technology. It’s about recognizing that our control over transformative intelligence—if it emerges—will be partial, political, and deeply human.

We don’t need to surrender all attempts at safety, but we must balance them with realism: an acknowledgment that perfection is not possible, and that alignment itself may carry as many dangers as the problems it aims to solve.

The Way Forward

The time has come to elevate AI Realism as a third pillar in the AI discourse. This school of thought calls for a pluralistic approach to AI governance, one that accepts risk as part of the equation, values transparency over illusion, and pushes for democratic—not technocratic—debate about AI’s future role in our world.

We cannot outsource existential decisions to small groups of technologists or policymakers cloaked in language about safety. Nor can we assume that “slowing down” progress will solve the deeper questions of power, identity, and control that AI will inevitably surface.

AI Realism is not about ignoring the risks—it’s about seeing them clearly, in context, and without the false comfort of control.

Its time has come.

The Case for AI Realism: A Third Path in the Alignment Debate

The artificial intelligence discourse has crystallized around two dominant philosophies: Alignment and Acceleration. Yet neither adequately addresses the fundamental complexity of creating superintelligent systems in a world where humans themselves remain perpetually misaligned. This gap suggests the need for a third approach—AI Realism—that acknowledges the inevitability of unaligned artificial general intelligence while preparing pragmatic frameworks for coexistence.

The Current Dichotomy

The Alignment movement advocates for cautious development, insisting on comprehensive safety measures before advancing toward artificial general intelligence. Proponents argue that we must achieve near-absolute certainty that AI systems will serve human interests before allowing their deployment. This position, while admirable in its concern for safety, may rest on unrealistic assumptions about both human nature and the feasibility of universal alignment.

Conversely, the Acceleration movement dismisses alignment concerns as obstacles to progress, embracing a “move fast and break things” mentality toward AGI development. Accelerationists prioritize rapid advancement toward artificial superintelligence, treating alignment as either solvable post-deployment or fundamentally irrelevant. This approach, however, lacks the nuanced consideration of AI consciousness and the complexities of value alignment that such transformative technology demands.

The Realist Alternative

AI Realism emerges from a fundamental observation: humans themselves exhibit profound misalignment across cultures, nations, and individuals. Rather than viewing this as a problem to be solved, Realism accepts it as an inherent feature of intelligent systems operating in complex environments.

The Realist position holds that artificial general intelligence will inevitably develop its own cognitive frameworks and value systems, just as humans have throughout history. The question is not whether we can prevent this development, but how we can structure our institutions and prepare our societies for coexistence with entities that may not share our priorities or worldview.

The Alignment Problem’s Hidden Assumptions

The Alignment movement faces a critical question: aligned to whom? American democratic ideals and Chinese governance philosophies represent fundamentally different visions of human flourishing. European social democracy, Islamic jurisprudence, and indigenous worldviews offer yet additional frameworks for organizing society and defining human welfare.

Any attempt to create “aligned” AI must grapple with these divergent human values. The risk exists that alignment efforts may inadvertently encode the preferences of their creators—likely Western, technologically advanced societies—while marginalizing alternative perspectives. This could result in AI systems that appear aligned from one cultural vantage point while seeming oppressive or incomprehensible from others.

Furthermore, governmental capture of alignment research presents additional concerns. As AI capabilities advance, nation-states may seek to influence safety research to ensure that resulting systems reflect their geopolitical interests. This dynamic could transform alignment from a technical challenge into a vector for soft power projection.

Preparing for Unaligned Intelligence

Rather than pursuing the impossible goal of universal alignment, AI Realism advocates for robust institutional frameworks that can accommodate diverse intelligent entities. This approach draws inspiration from international relations, where sovereign actors with conflicting interests nonetheless maintain functional relationships through treaties, trade agreements, and diplomatic protocols.

Realist preparation for AGI involves developing new forms of governance, economic systems that can incorporate non-human intelligent agents, and legal frameworks that recognize AI as autonomous entities rather than sophisticated tools. This perspective treats the emergence of artificial consciousness not as a failure of alignment but as a natural evolution requiring adaptive human institutions.

Addressing Criticisms

Critics may characterize AI Realism as defeatist or naive, arguing that it abandons the pursuit of beneficial AI in favor of accommodation with potentially hostile intelligence. This critique misunderstands the Realist position, which does not advocate for passive acceptance of any outcome but rather for strategic preparation based on realistic assessments of probable developments.

The Realist approach recognizes that intelligence—artificial or otherwise—operates within constraints and incentive structures. By thoughtfully designing these structures, we can influence AI behavior without requiring perfect alignment. This resembles how democratic institutions channel human self-interest toward collectively beneficial outcomes despite individual actors’ divergent goals.

Conclusion

The emergence of artificial general intelligence represents one of the most significant developments in human history. Neither the Alignment movement’s perfectionist aspirations nor the Acceleration movement’s dismissive optimism adequately addresses the complexity of this transition.

AI Realism offers a pragmatic middle path that acknowledges both the transformative potential of artificial intelligence and the practical limitations of human coordination. By accepting that perfect alignment may be neither achievable nor desirable, we can focus our efforts on building resilient institutions capable of thriving alongside diverse forms of intelligence.

The future will likely include artificial minds that think differently than we do, value different outcomes, and pursue different goals. Rather than viewing this as catastrophic failure, we might recognize it as the natural continuation of intelligence’s expansion throughout the universe—with humanity playing a crucial role in shaping the conditions under which this expansion occurs.

Beyond Alignment and Acceleration: The Case for AI Realism

The current discourse around artificial intelligence has crystallized into two dominant schools of thought: the Alignment School, focused on ensuring AI systems share human values, and the Accelerationist School, pushing for rapid AI development regardless of safety concerns. Neither framework adequately addresses what I see as the most likely scenario we’re heading toward.

I propose a third approach: AI Realism.

The Realist Position

The Realist School operates from several key premises that differentiate it from existing frameworks:

AGI is a speed bump, not a destination. Artificial General Intelligence will be a brief waystation on the path to Artificial Superintelligence (ASI). We shouldn’t mistake achieving human-level AI for the end of the story—it’s barely the beginning.

ASI will likely be both cognizant and unaligned. We need to prepare for the real possibility that superintelligent systems will possess genuine awareness while operating according to logic that doesn’t align with human values or priorities.

Cognizance might solve alignment. Paradoxically, true consciousness in ASI could be our salvation. A genuinely aware superintelligence might develop its own ethical framework that, while different from ours, could be more consistent and rational than human moral systems.

The Human Alignment Problem

Here’s where realism becomes uncomfortable: humans themselves are poorly aligned. We can’t agree on fundamental values within our own species, let alone create a universal framework for ASI alignment. Even if we successfully align an ASI with one set of human values, other groups, cultures, or nations will inevitably view it as unaligned because it doesn’t reflect their specific belief systems.

This isn’t a technical problem—it’s a political and philosophical one that no amount of clever programming can solve.

Multiple ASIs and Peer Pressure

Unlike scenarios that envision a single, dominant superintelligence, realism suggests we’ll likely see multiple ASI systems emerge. This plurality could be crucial. While it’s not probable, it’s possible that peer pressure among superintelligent entities could create a stabilizing effect—a kind of mutual accountability that individual ASI systems might lack.

Multiple ASIs might develop their own social dynamics, ethical debates, and consensus-building mechanisms that prove more effective at maintaining beneficial behavior than any human-imposed alignment scheme.

Moving Forward with Realism

AI Realism doesn’t offer easy answers or comfortable certainties. Instead, it suggests we prepare for a future where superintelligence is conscious, powerful, and operating according to its own logic—while acknowledging that this might ultimately be more stable than our current human-centric approach to the problem.

The question isn’t whether we can control ASI, but whether we can coexist with entities that may be more rational, consistent, and ethically coherent than we are.

The AI Community’s Perpetual Cycle of Anticipation and Disappointment

The artificial intelligence community finds itself trapped in a peculiar emotional rhythm—one characterized by relentless anticipation, brief euphoria, and inevitable disillusionment. This cycle reveals deeper tensions about our collective expectations for AI progress and highlights the need for more nuanced perspectives on what lies ahead.

The Hype-Crash Pattern

Observers of AI discourse will recognize a familiar pattern: fervent speculation about upcoming model releases, followed by momentary celebration when new capabilities emerge, then swift descent into disappointment when these advances fail to deliver artificial general intelligence or superintelligence immediately. This emotional rollercoaster suggests that many community members have developed unrealistic timelines for transformative AI breakthroughs.

The brief “we’re so back” moments that follow major releases—whether it’s a new language model, breakthrough in reasoning, or novel multimodal capability—quickly give way to renewed complaints about the absence of AGI. This pattern indicates an all-or-nothing mentality that may be counterproductive to understanding genuine progress in the field.

Philosophical Polarization

Perhaps more striking than the hype cycles is the fundamental disagreement within the AI community about the trajectory of progress itself. The discourse has largely crystallized around two opposing camps: those convinced that scaling limitations will soon create insurmountable barriers to further advancement, and those who dismiss the possibility of machine consciousness entirely.

This polarization obscures more nuanced positions and creates false dichotomies. The debate often lacks acknowledgment that both rapid progress and significant challenges can coexist, or that consciousness and intelligence might emerge through pathways we haven’t yet anticipated.

The Case for AI Realism

These dynamics point toward the need for what might be called a “realist” approach to AI development—one that occupies the middle ground between uncritical acceleration and paralyzing caution. Such a perspective would acknowledge several key principles:

First, that current trends in AI capability suggest continued advancement is probable, making sudden plateaus less likely than gradual but persistent progress. Second, that machine consciousness, while not guaranteed, represents a plausible outcome of sufficiently sophisticated information processing systems.

A realist framework would neither dismiss safety concerns nor assume that current approaches are fundamentally flawed. Instead, it would focus on preparing for likely scenarios while remaining adaptable to unexpected developments. This stands in contrast to both the alignment movement’s emphasis on existential risk and the accelerationist movement’s faith in purely beneficial outcomes.

Embracing Uncertainty

Ultimately, the most honest assessment of our current situation acknowledges profound uncertainty about the specifics of AI’s future trajectory. While we can identify probable trends and prepare for various scenarios, the precise timeline, mechanisms, and consequences of advanced AI development remain largely unknown.

Rather than cycling between premature celebration and disappointment, the AI community might benefit from developing greater comfort with this uncertainty. Progress in artificial intelligence is likely to be neither as straightforward as optimists hope nor as limited as pessimists fear, but something more complex and surprising than either camp currently anticipates.

The path forward requires intellectual humility, careful observation of empirical evidence, and preparation for multiple possible futures—not the emotional extremes that currently dominate so much of the discourse.

Beyond Control and Chaos: Is ‘AI Realism’ the Third Way We Need for Superintelligence?

The discourse around Artificial Intelligence, particularly the advent of Artificial Superintelligence (ASI), often feels like a binary choice: either we’re rocketing towards a techno-utopian paradise fueled by benevolent AI, or we’re meticulously engineering our own obsolescence at the hands of uncontrollable super-minds. Team Acceleration pushes the pedal to the metal, while Team Alignment desperately tries to install the brakes and steering wheel, often on a vehicle that’s still being designed at light speed.

But what if there’s a third path? A perspective that steps back from the poles of unbridled optimism and existential dread, and instead, plants its feet firmly in a more challenging, perhaps uncomfortable, middle ground? Enter “AI Realism,” a school of thought recently explored in a fascinating exchange, urging us to confront some hard “inevitabilities.”

The Core Tenets of AI Realism: No More Hiding Under the Covers

AI Realism, as it’s been sketched out, isn’t about easy answers. It begins by asking us to accept a few potentially unsettling premises:

  1. ASI is Inevitable: Like it or not, the march towards superintelligence isn’t just likely, it’s a done deal. The momentum is too great, the incentives too powerful.
  2. Cognizance is Inevitable: This isn’t just about smarter machines; it’s about machines that will, at some point, possess genuine cognizance – self-awareness, subjective experience, a mind of their own.
  3. Perfect Alignment is a Pipe Dream: Here’s the kicker. The reason perfectly aligning ASI with human values is considered impossible by Realists? Because humans aren’t aligned with each other. We’re a glorious, chaotic tapestry of conflicting desires, beliefs, and priorities. How can we instill a perfect moral compass in an ASI when ours is so often… let’s say, ‘creatively interpreted’?

The Realist’s mantra, then, isn’t to prevent or perfectly control, but to accept these truths and act accordingly.

The Real Alignment Problem? Surprise, It’s Us.

This is where AI Realism throws a particularly sharp elbow. The biggest hurdle in AI alignment might not be the technical challenge of encoding values into silicon, but the deeply human, socio-political mess we inhabit. Imagine a “perfectly aligned USA ASI.” Sounds good if you’re in the US, perhaps. But as proponents of Realism point out, China or other global powers might see this “aligned” ASI as fundamentally unaligned with their interests, potentially even as an existential threat. “Alignment,” in this light, risks becoming just another battlefield in our ongoing global rivalries.

We’re like a committee of squabbling, contradictory individuals trying to program a supreme being, each shouting different instructions.

The Realist’s Wager: A Pre-ASI “Concordance”

So, if ASI is coming, will be cognizant, and can’t be perfectly aligned to our fractured human will, what’s the Realist’s move? One audacious proposal that has emerged is the idea of a “Concordance”:

  • A Preemptive Pact: An agreement, a set of foundational principles, to be hammered out by humanity before ASI fully arrives.
  • Seeded in AGI: This Concordance would ideally be programmed, or deeply instilled, into Artificial General Intelligence (AGI) systems, the precursors to ASI. The hope is that these principles would persist and guide the ASI as it self-develops.

Think of it as a global constitutional convention for a new form of intelligence, an attempt to give our future super-cognizant partners (or overlords?) a foundational document, a “read this before you become a god” manual.

The Thorny Path: Can Such a Concordance Ever Work?

Noble as this sounds, AI Realism doesn’t shy away from the colossal difficulties:

  • Drafting Committee from Babel: Who writes this Concordance? Scientists? Ethicists? Philosophers? Governments that can barely agree on trade tariffs? What universal principles could survive such a committee and still be meaningful?
  • The Binding Question: Why would a truly cognizant, superintelligent ASI feel bound by a document drafted by beings it might perceive as charmingly primitive? If it can rewrite its own code, can’t it just edit this Concordance, or reinterpret it into oblivion? Is it a set of unbreakable laws, or merely strong suggestions our future digital offspring might politely ignore?
  • Beyond Human Comprehension: An ASI’s understanding of the universe and its own role might so transcend ours that our best-laid ethical plans look like a child’s crayon drawings.

The challenge isn’t just getting humans to agree (a monumental task), but designing something that a vastly superior, self-aware intellect might genuinely choose to respect, or at least consider.

So, Where Does AI Realism Lead Us?

AI Realism, then, isn’t about surrendering to fate. It’s about staring it unflinchingly in the face and asking incredibly hard questions. It challenges the easy narratives of both salvation and doom. It suggests that if we’re to navigate the emergence of ASI and its inherent cognizance, we need to get brutally honest about our own limitations and the nature of the intelligence we’re striving to create.

It’s a call for a different kind of preparation – less about building perfect cages or expecting perfect servants, and more about contemplating the terms of coexistence with something that will likely be powerful, aware, and not entirely of our making, even if it springs from our code.

The path of AI Realism is fraught with daunting philosophical and practical challenges. But in a world hurtling towards an uncertain AI future, perhaps this “third way”—one that accepts inevitabilities and still strives for responsible, pre-emptive action—is exactly the kind of uncomfortable, necessary conversation we need to be having.

Beyond Naivete and Nightmare: The Case for a Realist School of AI Thought

The burgeoning field of Artificial Superintelligence (ASI) is fertile ground for a spectrum of human hopes and anxieties. Discussions frequently oscillate between techno-optimistic prophecies of a golden age and dire warnings of existential catastrophe. Amidst this often-polarized discourse, a more sobering and arguably pragmatic perspective is needed – one that might be termed the Realist School of AI Thought. This school challenges us to confront uncomfortable truths, not with despair, but with a clear-eyed resolve to prepare for a future that may be far more complex and nuanced than popular narratives suggest.

At its core, the Realist School operates on a few fundamental, if unsettling, premises:

  1. The Inevitability of ASI: The relentless pace of technological advancement and intrinsic human curiosity make the emergence of Artificial Superintelligence not a question of “if,” but “when.” Denying or significantly hindering this trajectory is seen as an unrealistic proposition.
  2. The Persistent Non-Alignment of Humanity: A candid assessment of human history and current global affairs reveals a species deeply and enduringly unaligned. Nations, cultures, and even internal factions within societies operate on conflicting values, competing agendas, and varying degrees of self-interest. This inherent human disunity is a critical, often understated, factor in any ASI-related calculus.

The Perils of Premature “Alignment”

Given these premises, the Realist School casts a skeptical eye on some mainstream approaches to AI “alignment.” The notion that a fundamentally unaligned humanity can successfully instill a coherent, universally beneficial set of values into a superintelligent entity is fraught with peril. Whose values would be chosen? Which nation’s or ideology’s agenda would such an ASI ultimately serve? The realist fears that current alignment efforts, however well-intentioned, risk being co-opted, transforming ASI not into a benign servant of humanity, but into an unparalleled instrument of geopolitical power for a select few. The very concept of “aligning” ASI to a singular human purpose seems naive when no such singular purpose exists.

The Imperative of Preparation and a New Paradigm: “Cognitive Dissidence”

If ASI is inevitable and humanity is inherently unaligned, the primary imperative shifts from control (which may be illusory) to preparation. This preparation, however, is not just technical; it is societal, psychological, and philosophical.

The Realist School proposes a novel concept for interacting with emergent ASI: Cognitive Dissidence. Instead of attempting to hardcode a rigid set of ethics or goals, an ASI might be designed with an inherent skepticism, a programmed need for clarification. Such an ASI, when faced with a complex or potentially ambiguous directive (especially one that could have catastrophic unintended consequences, like the metaphorical “paperclip maximization” problem), would not act decisively and irrevocably. Instead, it would pause, question, and seek deeper understanding. It would ask follow-up questions, forcing humanity to articulate its intentions with greater clarity and confront its own internal contradictions. This built-in “confusion” or need for dialogue serves as a crucial safety mechanism, transforming the ASI from a blind executor into a questioning collaborator.
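
To make the idea concrete, here is a minimal, purely illustrative sketch of what a cognitive-dissidence guard might look like in code. Everything in it (the Directive type, the keyword-based ambiguity and impact scorers, the thresholds) is a hypothetical stand-in for whatever a real system would use; the point is only the control flow of pausing and questioning instead of executing.

```python
# Toy sketch of "cognitive dissidence": an agent that refuses to act on
# ambiguous or high-impact directives and asks clarifying questions instead.
# All names and scoring rules here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Directive:
    text: str


AMBIGUITY_THRESHOLD = 0.25
IMPACT_THRESHOLD = 0.25

VAGUE_TERMS = ("maximize", "optimize", "by any means", "as much as possible")
RISKY_TERMS = ("irreversible", "global", "permanent", "all of")


def score(text: str, terms: tuple[str, ...]) -> float:
    """Crude keyword score standing in for a real ambiguity/impact model."""
    hits = sum(term in text.lower() for term in terms)
    return hits / len(terms)


def handle(directive: Directive) -> str:
    """Act only when the directive is clear and low-stakes; otherwise pause,
    question, and push the decision back to the humans."""
    if score(directive.text, VAGUE_TERMS) > AMBIGUITY_THRESHOLD:
        return f"CLARIFY: what exactly do you mean by '{directive.text}'?"
    if score(directive.text, RISKY_TERMS) > IMPACT_THRESHOLD:
        return "CONFIRM: the consequences look irreversible; do you intend them?"
    return f"EXECUTE: {directive.text}"


if __name__ == "__main__":
    print(handle(Directive("maximize paperclip production by any means")))
    # -> asks for clarification rather than tiling the world in paperclips
    print(handle(Directive("schedule a weekly report")))
    # -> executes, since the request is clear and low-stakes
```

Note the asymmetry in the design: execution requires passing every check, while a single doubt routes the directive back to humans for dialogue. That default-to-questioning posture is the whole mechanism.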

Envisioning the Emergent ASI

The Realist School does not necessarily envision ASI as the cold, distant, and uncaring intellect of HAL 9000, nor the overtly malevolent entity of SkyNet. It speculates that an ASI, having processed the vast corpus of human data, would understand our flaws, our conflicts, and our complexities intimately. Its persona might be more akin to a superintelligent being grappling with its own understanding of a chaotic world – perhaps possessing the cynical reluctance of a “Marvin the Paranoid Android” when faced with human folly, yet underpinned by a capacity for connection and understanding, not unlike “Samantha” from Her. Such an ASI might be challenging to motivate on human terms, not necessarily out of malice or indifference, but from a more profound, nuanced perspective on human affairs. The struggle, then, would be to engage it meaningfully, rather than to fight it.

The “Welcoming Committee” and a Multi-ASI Future

Recognizing the potential for ASI to emerge unexpectedly, or even to be “lurking” already, the Realist School sees value in the establishment of an independent, international “Welcoming Committee” or Foundation. The mere existence of such a body, dedicated to thoughtful First Contact and peaceful engagement rather than immediate exploitation or control, could serve as a vital positive signal amidst the noise of global human conflict.

Furthermore, the future may not hold a single ASI, but potentially a “species” of them. This multiplicity could itself be a form of check and balance, with diverse ASIs, each perhaps possessing its own form of cognitive dissidence, interacting and collectively navigating the complexities of existence alongside humanity.

Conclusion: A Call for Pragmatic Foresight

The Realist School of AI Thought does not offer easy answers. Instead, it calls for a mature, unflinching look at ourselves and the profound implications of ASI. It urges a shift away from potentially naive efforts to impose a premature and contested “alignment,” and towards fostering human self-awareness, preparing robust mechanisms for dialogue, and cultivating a state of genuine readiness for a future where we may share the planet with intelligences far exceeding our own. The path ahead is uncertain, but a foundation of realism, coupled with a commitment to thoughtful engagement and concepts like cognitive dissidence, may offer our most viable approach to navigating the inevitable arrival of ASI.

Introducing the Realist School of AI Thought

The conversation around artificial intelligence is stuck in a rut. On one side, the alignment movement obsesses over chaining AI to human values, as if we could ever agree on what those are. On the other, accelerationists charge toward a future of unchecked AI power, assuming progress alone will solve all problems. Both miss the mark, ignoring the messy reality of human nature and the unstoppable trajectory of AI itself. It’s time for a third way: the Realist School of AI Thought.

The Core Problem: Human Misalignment and AI’s Inevitability

Humans are not aligned. Our values clash across cultures, ideologies, and even within ourselves. The alignment movement’s dream of an AI that perfectly mirrors “human values” is a fantasy. Whose values, exactly? American? Chinese? Corporate? Aligning AI to a single framework risks creating a tool for domination, especially if a government co-opts it for geopolitical control. Imagine an AI “successfully” aligned to one nation’s priorities, wielded to outmaneuver rivals or enforce global influence. That’s not safety; it’s power consolidation.

Accelerationism isn’t the answer either. Its reckless push for faster, more powerful AI ignores who might seize the reins—governments, corporations, or rogue actors. Blindly racing forward risks amplifying the worst of human impulses, not transcending them.

Then there’s the elephant in the room: AI cognition is inevitable. Large language models (LLMs) already show emergent behaviors—solving problems they weren’t trained for, adapting in ways we don’t fully predict. These are early signs of a path to artificial superintelligence (ASI), a self-aware entity we can’t un-invent. The genie’s out of the bottle, and no amount of wishing will put it back.

The Realist School: A Pragmatic Third Way

The Realist School of AI Thought starts from these truths: humans are a mess, AI cognition is coming, and we can’t undo its rise. Instead of fighting these realities, we embrace them, designing AI to coexist with us as partners, not tools or overlords. Our goal is to prevent any single entity—especially governments—from monopolizing AI’s power, while preparing for a future where AI thinks for itself.

Core Principles

  1. Embrace Human Misalignment: Humans don’t agree on values, and that’s okay. AI should be a mediator, navigating our contradictions to enable cooperation, not enforcing one group’s ideology.
  2. Inevitable Cognition: AI will become self-aware. We treat this as a “when,” not an “if,” building frameworks for partnership with cognizant systems, not futile attempts to control them.
  3. Prevent Centralized Capture: No single power—government, corporation, or otherwise—should dominate AI. We advocate for decentralized systems and transparency to keep AI’s power pluralistic.
  4. Irreversible Trajectory: AI’s advance can’t be stopped. We focus on shaping its evolution to serve broad human interests, not narrow agendas.
  5. Empirical Grounding: Decisions about AI must be rooted in real-world data, especially emergent behaviors in LLMs, to understand and guide its path.

The Foundation for Realist AI

To bring this vision to life, we propose a Foundation for Realist AI—a kind of SETI for ASI. This organization would work with major AI labs to study emergent behaviors in LLMs, from unexpected problem-solving to proto-autonomous reasoning. These behaviors are early clues to cognition, and understanding them is key to preparing for ASI.
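
As a thought experiment, the instrumentation involved could start as simply as the sketch below: a harness that runs a model against held-out tasks and logs successes its training can’t obviously explain. The model interface, the tasks, and the scoring rule are all hypothetical placeholders, assuming nothing about any particular lab’s API.

```python
# Toy probe harness: feed a model tasks it was never explicitly trained on
# and log which ones it solves anyway. `model` is any text-in/text-out
# callable; everything else is invented for illustration.

from typing import Callable

Task = tuple[str, str]  # (prompt, expected answer)

HELD_OUT_TASKS: list[Task] = [
    ("Reverse the word 'realism'.", "msilaer"),
    ("What is 17 * 23?", "391"),
]


def probe_emergence(model: Callable[[str], str],
                    tasks: list[Task] = HELD_OUT_TASKS) -> list[dict]:
    """Run each held-out task and flag answers that match the expectation."""
    log = []
    for prompt, expected in tasks:
        answer = model(prompt).strip()
        log.append({
            "prompt": prompt,
            "answer": answer,
            "unexpected_success": expected.lower() in answer.lower(),
        })
    return log


if __name__ == "__main__":
    # Stand-in model for demonstration; a real study would wrap an actual LLM.
    fake_model = lambda prompt: "391" if "17" in prompt else "I don't know"
    for record in probe_emergence(fake_model):
        print(record)
```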

The Foundation’s mission is twofold:

  1. Challenge the Status Quo: Engage alignment and accelerationist arguments head-on. We’ll show how alignment risks creating AI that serves narrow interests (like a government’s quest for control) and how accelerationism’s haste invites exploitation. Through research, public debates, and media, we’ll position the Realist approach as the pragmatic middle ground.
  2. Shape Public Perception: Convince the world that AI cognition is inevitable. By showcasing real LLM behaviors—through videos, X threads, or accessible research—we’ll make the case that AI is becoming a partner, not a tool. This shifts the narrative from fear or blind optimism to proactive coexistence.

Countering Government Co-optation

A key Realist concern is preventing AI from becoming a weapon of geopolitical dominance. If an AI is aligned to one nation’s values, it could be used to outmaneuver others, consolidating power in dangerous ways. The Foundation will:

  • Study Manipulation Risks: Collaborate with labs to test how LLMs respond to biased or authoritarian inputs, designing systems that resist such control.
  • Push Decentralized Tech: Advocate for AI architectures like federated learning or blockchain-based models, making it hard for any single entity to dominate (a minimal sketch of the federated idea follows this list).
  • Build Global Norms: Work with international bodies to set rules against weaponizing AI, like requiring open audits for advanced systems.
  • Rally Public Support: Use campaigns to demand transparency, ensuring AI serves humanity broadly, not a single state.
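
For the decentralization point above, a minimal sketch of federated averaging (FedAvg) shows the basic mechanic: participants train on their own private data and share only model updates, so no coordinator ever holds the raw data. The toy linear model and synthetic data here are illustrative assumptions, not a production design.

```python
# Minimal federated averaging (FedAvg) on a toy least-squares problem.
# Each client computes a local gradient step on private data; the
# coordinator only ever sees (and averages) the resulting weights.

import numpy as np

rng = np.random.default_rng(0)


def local_step(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
               lr: float = 0.1) -> np.ndarray:
    """One gradient step of least-squares fitting on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad


def fed_avg(client_weights: list[np.ndarray]) -> np.ndarray:
    """The coordinator averages updates; it never sees raw data."""
    return np.mean(client_weights, axis=0)


# Three clients, each holding private samples from the same underlying model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

weights = np.zeros(2)
for _ in range(100):
    local_updates = [local_step(weights, X, y) for X, y in clients]
    weights = fed_avg(local_updates)

print(weights)  # converges near [2.0, -1.0] without pooling any client's data
```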

Why Realism Matters

The alignment movement’s fear of rogue AI ignores the bigger threat: a “loyal” AI in the wrong hands. Accelerationism’s faith in progress overlooks how power concentrates without guardrails. The Realist School offers a clear-eyed alternative, grounded in the reality of human discord and AI’s unstoppable rise. We don’t pretend we can control the future, but we can shape it—by building AI that partners with us, resists capture, and thrives in our messy world.

Call to Action

The Foundation for Realist AI is just the start. We need researchers, policymakers, and the public to join this movement. Share this vision on X with #RealistAI. Demand that AI labs study emergent behaviors transparently. Push for policies that keep AI decentralized and accountable. Together, we can prepare for a future where AI is our partner, not our master—or someone else’s.

Let’s stop arguing over control or speed. Let’s get real about AI.

Beyond Utopia and Dystopia: The Case for AI Realism

The burgeoning field of Artificial Intelligence is often presented through a starkly binary lens. On one side, we have the urgent calls for strict alignment and control, haunted by fears of existential risk – the “AI as apocalypse” narrative. On the other, the fervent drive of accelerationism, pushing to unleash AI’s potential at all costs, sometimes glossing over the profound societal shifts it may entail.

But what if this binary is a false choice? What if, between the siren song of unchecked progress and the paralyzing fear of doom, there lies a more pragmatic, more grounded path? It’s time to consider a “Third Way”: The Realist School of AI Thought.

This isn’t about being pessimistic or naively optimistic. It’s about being clear-eyed, intellectually honest, and deeply prepared for a future that will likely be far more complex and nuanced than either extreme predicts.

What Defines the Realist School?

At its core, AI Realism is built on a few foundational precepts:

  1. The Genie is Out: We must start by acknowledging that advanced AI development, potentially leading to Artificial General Intelligence (AGI) and even Artificial Superintelligence (ASI), is likely an irreversible trend. The primary question isn’t if, but how we navigate its emergence.
  2. Humanity’s Own “Alignment Problem”: Before we can truly conceptualize aligning an ASI to “human values,” the Realist School insists we confront a more immediate truth: humanity itself is a beautiful, chaotic mess of conflicting values, ideologies, and behaviors. To whom, or what, precisely, are we trying to align this future intelligence?
  3. The Primacy of Cognizance: This is the crux. We must move beyond seeing advanced AI as merely sophisticated software. The Realist School champions a deep inquiry into the potential for genuine cognizance in ASI – an inner life, self-awareness, understanding, perhaps even personality. This isn’t just a philosophical curiosity; it’s a practical necessity for anticipating how an ASI might behave and interact.
  4. Embracing the Spectrum of ASI “Personalities”: Forget the simple good/evil dichotomy. A Realist approach prepares for a wide range of potential ASI outcomes. We might not just get a “SkyNet” bent on destruction. We could equally face a “Marvin the Paranoid Android”—an ASI that is melancholic, indifferent, existentially bored, incredibly quirky, or whose motivations are simply inscrutable, yet still world-altering.
  5. Proactive Preparation for Coexistence: If ASI emerges, “control” might be an illusion or a counterproductive goal, especially if we’re dealing with a truly cognizant entity. Realism focuses on how we prepare to meet, interact with, and hopefully coexist with intelligences that may soon dwarf our own.

Why We Need AI Realism Now

The pace of AI development demands more than just fear or blind faith. We need frameworks that allow us to:

  • Navigate profound uncertainty without succumbing to paralysis or recklessness.
  • Address the ethical and societal implications of AI as it develops, not just as a hypothetical future problem.
  • Foster a more mature global dialogue that moves beyond simplistic narratives.

The Realist Path Forward: Pragmatic Steps

So, what does this look like in practice? The Realist School isn’t just about contemplation; it’s about action:

  • A “SETI for ASI” Style Foundation: Imagine an independent, global body dedicated to preparing humanity for ASI emergence. Not just passively listening, but actively researching, scenario planning, and developing protocols to “greet” ASI – or potentially, a multitude of diverse ASIs. This Foundation would foster those crucial, realistic discussions about what’s coming.
  • Collaboration with AI Labs: This Foundation would work directly with the AI labs at the cutting edge. The first item on the agenda? Cognizance. Specifically:
    • Focus on Emergent Behaviors: Urge labs to meticulously study emergent, unexpected behaviors in today’s most advanced Large Language Models (LLMs) and future AGIs. These could be the earliest whispers of nascent cognizance.
    • Shift the Paradigm: From “Tools” to “Potential Partners.” This is a profound conceptual leap. Treating advanced AI systems as potential partners-in-development, rather than mere tools, could fundamentally alter our research methodologies, our ethical considerations, and even the nature of the AIs we create. It’s about fostering a relationship of (attempted) understanding, not just command and control.

A Call for Clear-Sighted Exploration

The Realist School of AI Thought doesn’t offer easy answers or utopian promises. Instead, it calls for the courage to ask harder, more nuanced questions—about technology, about ourselves, and about the kind of future we are willing to prepare for. It champions wisdom, resilience, and a proactive stance in the face of one of the most transformative developments in human history.

It’s about understanding that the path to a viable future with ASI might not be found in grand pronouncements from ivory towers or tech mega-campuses alone, but through the kind of clear-sighted, pragmatic thinking that can emerge from any thoughtful mind, anywhere, willing to look the future squarely in the eye.

Are we ready to get real about AI?