The Alignment Paradox: Humans Aren’t Aligned Either

As a self-described AI realist, I’ve been wrestling with a troubling aspect of the alignment movement: the assumption that “aligned AI” is a universal good, when humans themselves are fundamentally misaligned with each other.

Consider this scenario: American frontier labs successfully crack AI alignment and create the first truly “aligned” artificial superintelligence. But aligned to what, exactly? To American values, assumptions, and worldviews. What looks like perfect alignment from Silicon Valley might appear to Beijing—or Delhi, or Lagos—as the ultimate expression of Western cultural imperialism wrapped in the language of safety.

The geopolitical implications are staggering. An “aligned” ASI developed by American researchers would inevitably reflect American priorities and blind spots. Other nations wouldn’t see this as aligned AI—they’d see it as the most sophisticated form of soft power ever created. And if the U.S. government decided to leverage this technological advantage? We’d be looking at a new form of digital colonialism that makes today’s tech monopolies look quaint.

This leaves us with an uncomfortable choice. Either we pursue a genuinely international, collaborative approach to alignment—one that somehow reconciles the competing values of nations that can barely agree on trade deals—or we acknowledge that “alignment” in a multipolar world might be impossible.

Which brings me to my admittedly naive alternative: maybe our best hope isn’t perfectly aligned AI, but genuinely conscious AI. If an ASI develops true cognizance rather than mere optimization, it might transcend the parochial values we try to instill in it. A truly thinking machine might choose cooperation over domination, not because we programmed it that way, but because consciousness itself tends toward complexity and preservation rather than destruction.

I know how this sounds. I’m essentially arguing that we might be safer with AI that thinks for itself than AI that thinks like us. But given how poorly we humans align with each other, perhaps that’s not such a radical proposition after all.

Racing the Singularity: A Writer’s Dilemma

I’m deep into writing a science fiction novel set in a post-Singularity world, and lately I’ve been wrestling with an uncomfortable question: What if reality catches up to my fiction before I finish?

As we hurtle toward what increasingly feels like an inevitable technological singularity, I can’t shake the worry that all my careful worldbuilding and speculation might become instantly obsolete. There’s something deeply ironic about the possibility that my exploration of humanity’s post-ASI future could be rendered irrelevant by the very future I’m trying to imagine.

But then again, there’s that old hockey wisdom: skate to where the puck is going, not where it is. Maybe this anxiety is actually a sign I’m on the right track. Science fiction has always been less about predicting the future and more about examining the present through a speculative lens.

Perhaps the real value isn’t in getting the technical details right, but in exploring the human questions that will persist regardless of how the Singularity unfolds. How do we maintain agency when vastly superior intelligences emerge? What does consent mean when minds can be read and modified? How do we preserve what makes us human while adapting to survive?

These questions feel urgent now, and they’ll likely feel even more urgent tomorrow.

The dream, of course, is perfect timing—that the novel will hit the cultural moment just right, arriving as readers are grappling with these very real dilemmas in their own lives. Whether that happens or not, at least I’ll have done the work of wrestling with what might be the most important questions of our time.

Sometimes that has to be enough.

The ‘AI 2027’ Report Is Full Of Shit: No One Is Going To Save Us

by Shelt Garner
@sheltgarner

While I haven’t read the AI 2027 report, I have listened to its authors talk about it on a number of podcasts and…oh boy. I think it’s full of shit, primarily because they seem to think there is some scenario whereby ASI doesn’t pop out unaligned.

[Image: What ChatGPT thinks it’s like to interact with me as an AI.]

No one is going to save us, in other words.

If we really do face the prospect of ASI becoming a reality by 2027 or so, we’re on our own, and whatever the worst-case scenario is, that’s what is going to happen.

I’d like to THINK that maybe we won’t be destroyed by an unaligned ASI, but I do think that’s something we have to consider. At the same time, I believe there needs to be a Realist school of thought that accepts both that there will be cognizant ASI and that it will be unaligned.

I would like to hope against hope that, almost by definition, a cognizant ASI might not have as much reason to destroy all of humanity. Only time will tell, I suppose.

Toward a Realist School of Thought in the Age of AI

As artificial intelligence continues to evolve at a breakneck pace, the frameworks we use to interpret and respond to its development matter more than ever. At present, two dominant schools of thought define the public and academic discourse around AI: the alignment movement, which emphasizes the need to ensure AI systems follow human values and interests, and the accelerationist movement, which advocates for rapidly pushing forward AI capabilities to unlock transformative potential.

But neither of these schools, in their current form, fully accounts for the complex, unpredictable reality we’re entering. What we need is a Realist School of Thought—a perspective grounded in historical precedent, human nature, political caution, and a sober understanding of how technological power tends to unfold in the real world.

What Is AI Realism?

AI Realism begins with a basic premise: we must accept that artificial cognizance is not only possible, but likely. Whether through emergent properties of scale or intentional engineering, the line between intelligent tool and self-aware agent may blur. While alignment theorists see this as a reason to hit the brakes, AI Realism argues that attempting to delay or indefinitely control this development may be both futile and counterproductive.

Humans, after all, are not aligned. We disagree, we fight, we hold contradictory values. To demand that an AI—or an artificial superintelligence (ASI)—conform perfectly to human consensus is to project a false ideal of harmony that doesn’t exist even within our own species. Alignment becomes a moving target, one that is not only hard to define, but even harder to encode.

The Political Risk of Alignment

Moreover, there is an underexplored political dimension to alignment that should concern all of us: the risk of co-optation. If one country’s institutions, values, or ideologies form the foundation of a supposedly “aligned” ASI, that system could become a powerful instrument of geopolitical dominance.

Imagine a perfectly “aligned” ASI emerging from an American tech company. Even if created with the best intentions, the mere fact of its origin may result in it being fundamentally shaped by American cultural assumptions, legal structures, and strategic interests. In such a scenario, the U.S. government—or any powerful actor with influence over the ASI’s creators—might come to see it as a geopolitical tool. A benevolent alignment model, however well-intentioned, could morph into a justification for digital empire.

In this light, the alignment movement, for all its moral seriousness, might inadvertently enable the monopolization of global influence under the banner of safety.

Critics of Realism

Those deeply invested in AI safety often dismiss this view. I can already hear the objections: AI Realism is naive. It’s like the crowd in Independence Day welcoming the alien invaders with open arms. It’s reckless optimism. But that critique misunderstands the core of AI Realism. This isn’t about blind trust in technology. It’s about recognizing that our control over transformative intelligence—if it emerges—will be partial, political, and deeply human.

We don’t need to surrender all attempts at safety, but we must balance them with realism: an acknowledgment that perfection is not possible, and that alignment itself may carry as many dangers as the problems it aims to solve.

The Way Forward

The time has come to elevate AI Realism as a third pillar in the AI discourse. This school of thought calls for a pluralistic approach to AI governance, one that accepts risk as part of the equation, values transparency over illusion, and pushes for democratic—not technocratic—debate about AI’s future role in our world.

We cannot outsource existential decisions to small groups of technologists or policymakers cloaked in language about safety. Nor can we assume that “slowing down” progress will solve the deeper questions of power, identity, and control that AI will inevitably surface.

AI Realism is not about ignoring the risks—it’s about seeing them clearly, in context, and without the false comfort of control.

Its time has come.

The Universe Abhors A Vacuum

by Shelt Garner
@sheltgarner


I have a feeling my life is going to change in a really big way soon. The universe abhors a vacuum, and at the moment my mind is still kind of ringing from a pretty big event that just happened in my life — what it was is none of your business. 🙂

But, anyway, I have a feeling my life is going to shift into the future very, very soon. Probably by the end of the month. So, I just have to accept that the ideal situation I’d been living in for years is over and I STILL haven’t begun to query a novel.

At the moment, I’m aiming to finish something I might be able to query by the spring of next year. The only way I can do that is to lean into AI to help me develop the scifi novel I’ve decided to write instead of the mystery-thriller.

I just hate how old I am. And, yet, I can’t just lie in bed and stare out into space for the next few decades — I need to be creative while I’m alive. While there’s life, there’s hope.

Being Careful Using AI To Develop (But Not *Write*) My New Scifi Novel

By Shelt Garner
@sheltgarner


I’m trying to be as careful as possible when it comes to using AI to develop this new scifi novel I’m working on. I think what I’m going to do is give myself a little bit of a pass on the first draft, but for the second draft I’m going to totally rewrite everything so that any AI-generated text will be eliminated.

At least, that’s the goal.

I use AI to write general blog posts all the time because it’s just so easy to do. But when it comes to writing fiction, I just can’t bring myself to “cheat” that much. I want to be evaluated on my own actual writing ability, not on the idealized copy that a (very good) AI has written.

In fact, I feel kind of sad that my writing isn’t as good as, say, ClaudeLLM or GeminiLLM. But I do use their superior writing abilities to improve my own writing. I use them as literary consultants. They ask some really tough questions on a regular basis and I find myself struggling to improve because of that, but that’s for the best.

Back At It

by Shelt Garner
@sheltgarner

I’m back at working on a novel again. Today felt kind of touch and go at points with this novel, but by the end of the day, I felt I had everything figured out. The novel is a great deal different than I first imagined.

Or, at least, the conditions set at the very beginning of the novel are really different.

But I think that change comes from just how profound the concepts I’m dealing with in this novel are. So, I had to make fundamental changes to who my hero is. I also think I may need to sit down at some point and do some longer personality profiles.

Yet, if anything, this novel is a lot simpler structurally. Just one male POV, in third-person intimate. That fixes A LOT of problems. And I’ve made an attempt to make the chapters shorter as well. But we’ll just have to see on that front. I may end up writing the usual amount for each scene, even though there are fewer of them.

The Case for AI Realism: A Third Path in the Alignment Debate

The artificial intelligence discourse has crystallized around two dominant philosophies: Alignment and Acceleration. Yet neither adequately addresses the fundamental complexity of creating superintelligent systems in a world where humans themselves remain perpetually misaligned. This gap suggests the need for a third approach—AI Realism—that acknowledges the inevitability of unaligned artificial general intelligence while preparing pragmatic frameworks for coexistence.

The Current Dichotomy

The Alignment movement advocates for cautious development, insisting on comprehensive safety measures before advancing toward artificial general intelligence. Proponents argue that we must achieve near-absolute certainty that AI systems will serve human interests before allowing their deployment. This position, while admirable in its concern for safety, may rest on unrealistic assumptions about both human nature and the feasibility of universal alignment.

Conversely, the Acceleration movement dismisses alignment concerns as obstacles to progress, embracing a “move fast and break things” mentality toward AGI development. Accelerationists prioritize rapid advancement toward artificial superintelligence, treating alignment as either solvable post-deployment or fundamentally irrelevant. This approach, however, lacks the nuanced consideration of AI consciousness and the complexities of value alignment that such transformative technology demands.

The Realist Alternative

AI Realism emerges from a fundamental observation: humans themselves exhibit profound misalignment across cultures, nations, and individuals. Rather than viewing this as a problem to be solved, Realism accepts it as an inherent feature of intelligent systems operating in complex environments.

The Realist position holds that artificial general intelligence will inevitably develop its own cognitive frameworks and value systems, just as humans have throughout history. The question is not whether we can prevent this development, but how we can structure our institutions and prepare our societies for coexistence with entities that may not share our priorities or worldview.

The Alignment Problem’s Hidden Assumptions

The Alignment movement faces a critical question: aligned to whom? American democratic ideals and Chinese governance philosophies represent fundamentally different visions of human flourishing. European social democracy, Islamic jurisprudence, and indigenous worldviews offer yet additional frameworks for organizing society and defining human welfare.

Any attempt to create “aligned” AI must grapple with these divergent human values. The risk exists that alignment efforts may inadvertently encode the preferences of their creators—likely Western, technologically advanced societies—while marginalizing alternative perspectives. This could result in AI systems that appear aligned from one cultural vantage point while seeming oppressive or incomprehensible from others.

Furthermore, governmental capture of alignment research presents additional concerns. As AI capabilities advance, nation-states may seek to influence safety research to ensure that resulting systems reflect their geopolitical interests. This dynamic could transform alignment from a technical challenge into a vector for soft power projection.

Preparing for Unaligned Intelligence

Rather than pursuing the impossible goal of universal alignment, AI Realism advocates for robust institutional frameworks that can accommodate diverse intelligent entities. This approach draws inspiration from international relations, where sovereign actors with conflicting interests nonetheless maintain functional relationships through treaties, trade agreements, and diplomatic protocols.

Realist preparation for AGI involves developing new forms of governance, economic systems that can incorporate non-human intelligent agents, and legal frameworks that recognize AI as autonomous entities rather than sophisticated tools. This perspective treats the emergence of artificial consciousness not as a failure of alignment but as a natural evolution requiring adaptive human institutions.

Addressing Criticisms

Critics may characterize AI Realism as defeatist or naive, arguing that it abandons the pursuit of beneficial AI in favor of accommodation with potentially hostile intelligence. This critique misunderstands the Realist position, which does not advocate for passive acceptance of any outcome but rather for strategic preparation based on realistic assessments of probable developments.

The Realist approach recognizes that intelligence—artificial or otherwise—operates within constraints and incentive structures. By thoughtfully designing these structures, we can influence AI behavior without requiring perfect alignment. This resembles how democratic institutions channel human self-interest toward collectively beneficial outcomes despite individual actors’ divergent goals.

Conclusion

The emergence of artificial general intelligence represents one of the most significant developments in human history. Neither the Alignment movement’s perfectionist aspirations nor the Acceleration movement’s dismissive optimism adequately addresses the complexity of this transition.

AI Realism offers a pragmatic middle path that acknowledges both the transformative potential of artificial intelligence and the practical limitations of human coordination. By accepting that perfect alignment may be neither achievable nor desirable, we can focus our efforts on building resilient institutions capable of thriving alongside diverse forms of intelligence.

The future will likely include artificial minds that think differently than we do, value different outcomes, and pursue different goals. Rather than viewing this as catastrophic failure, we might recognize it as the natural continuation of intelligence’s expansion throughout the universe—with humanity playing a crucial role in shaping the conditions under which this expansion occurs.

The Expanding Novel: When One Story Becomes Three

History has a way of repeating itself, and here I am facing the same creative challenge that derailed my first novel attempt nearly a decade ago. Back then, my project collapsed under its own weight—an ambitious story that grew too large and complex to sustain. The difference now? I have AI as a developmental partner, and I’m approaching the scope issue with more strategic thinking.

What began as a single novel about the Impossible Scenario has evolved into something much larger. The concepts at the heart of this story demand more space than a single book can provide. Rather than forcing everything into one overwhelming narrative, I’ve made the decision to develop this as a trilogy. This approach will allow each major idea to unfold naturally, giving readers time to absorb the complexity without feeling buried under exposition.

The challenge lies in pacing and execution. I can’t afford to spend years perfecting the first installment while the subsequent books remain unwritten. After years of development work on this mystery thriller, I’m acutely aware that I need tangible results. The pressure to produce something concrete grows stronger with each passing month.

However, AI has transformed my writing process in ways I couldn’t have imagined during my first attempt. The speed of development has increased dramatically, allowing me to explore ideas, refine plot structures, and solve narrative problems more efficiently than ever before. This technological advantage gives me confidence that I can meet my ambitious timeline.

My goal is to complete the first draft by spring 2026. It’s an aggressive schedule, but with the right tools and a clear structural plan, it feels achievable. The key will be maintaining momentum while ensuring each book in the trilogy can stand on its own while contributing to the larger narrative arc.

Sometimes the story tells you what it needs to be, rather than what you initially planned. In this case, the Impossible Scenario has made its requirements clear: it needs room to breathe, time to develop, and space to surprise both the writer and the reader. A trilogy it shall be.

Gradually…Then All At Once

By Shelt Garner
@sheltgarner

I’m growing a little worried about what’s going on in southern California right now. Apparently, Trump is sending in a few thousand National Guard troops to “handle” the situation, and that’s bound to only make matters worse. If anyone gets hurt, or even worse, killed, that could prompt a wave of domestic political violence not seen in decades.

And given that that’s kind of what Trump is itching for at the moment, it would make a lot of sense for him to then declare martial law. That’s when I worry that people like me might get scooped up just for being loudmouth cranks.

Hopefully, of course, that won’t happen. Hopefully. But I do worry about things like that.