The Political Realignment: How AI Could Reshape America’s Ideological Landscape

The American political landscape has witnessed remarkable transformations over the past decade, from the Tea Party’s rise to Trump’s populist movement to the progressive surge within the Democratic Party. Yet perhaps the most significant political realignment lies ahead, driven not by traditional ideological forces but by artificial intelligence’s impact on the workforce.

While discussions about AI’s economic disruption dominate tech conferences and policy circles, the actual workplace transformation remains largely theoretical. We see incremental changes—customer service chatbots, basic content generation, automated data analysis—but nothing approaching the sweeping job displacement many experts predict. This gap between prediction and reality creates a unique moment of anticipation, where the political implications of AI remain largely unexplored.

The most intriguing possibility is the emergence of what might be called a “neo-Luddite coalition”—a political movement that transcends traditional left-right boundaries. Consider the strange bedfellows this scenario might create: progressive advocates for worker rights joining forces with conservative defenders of traditional employment structures. Both groups, despite their philosophical differences, share a fundamental concern about preserving human agency and economic security in the face of technological disruption.

This convergence isn’t as far-fetched as it might initially appear. The far left’s critique of capitalism’s dehumanizing effects could easily extend to AI systems that reduce human labor to algorithmic efficiency. Meanwhile, the far right’s emphasis on cultural preservation and skepticism toward elite-driven change could manifest as resistance to Silicon Valley’s vision of an automated future. Both movements already demonstrate deep mistrust of concentrated power, whether in corporate boardrooms or government bureaucracies.

The political dynamics become even more complex when considering the trajectory toward artificial general intelligence. If current large language models represent just the beginning of AI’s capabilities, the eventual development of AGI could render vast sectors of the economy obsolete. Professional services, creative industries, management roles—traditionally secure middle-class occupations—might face the same displacement that manufacturing workers experienced in previous decades.

Such widespread economic disruption would likely shatter existing political coalitions and create new ones based on shared vulnerability rather than shared ideology. The result could be a political spectrum organized less around traditional concepts of left and right and more around attitudes toward technological integration and human autonomy.

This potential realignment raises profound questions about American democracy’s ability to adapt to rapid technological change. Political institutions designed for gradual evolution might struggle to address the unprecedented speed and scale of AI-driven transformation. The challenge will be creating policy frameworks that harness AI’s benefits while preserving the economic foundations that sustain democratic participation.

Whether this neo-Luddite coalition emerges depends largely on how AI’s workplace integration unfolds. Gradual adoption might allow for political adaptation and policy responses that mitigate disruption. Rapid deployment, however, could create the conditions for more radical political movements that reject technological progress entirely.

The next decade will likely determine whether American politics can evolve to meet the AI challenge or whether technological disruption will fundamentally reshape the ideological landscape in ways we’re only beginning to imagine.

How Does The Senate Vote? — Fuck The Poor!

by Shelt Garner
@sheltgarner

Once the Big Piece of Shit Bill passes the House soon, the next step for our evil autocratic overlords will be to end free and fair elections. Then that’s it: we circle the drain until we either have a civil war or a revolution.

Once it’s clear there will be no connection between the governed and the government, the USA will finally turn into what all the fucking cocksucker MAGA people want — a white Christian ethnostate. And things are getting so bad so quickly that I have to assume that ICE will come after a harmless loudmouth crank like me soon enough.

I’ll be put into a camp and never seen again.

All of this is happening because of severe macro issues in the American political system. It seems at the moment there’s no going back. MAGA will finally get what they want and, barring something rather dramatic like a revolution and/or a civil war…that’s it.

We will never have an effective Democratic president again and people will start to die in the streets while plutocrats grow more and more rich.

Though, I have to note that there is one specific issue that I just can’t game out — the looming Singularity. Once we bounce from AGI to ASI…anything is possible. It could be that a species of ASIs will take over the world and force the governments of the world to make nice and, as such, will save us from ourselves.

Who knows, really?

The Only Possible Solutions

By Shelt Garner
@sheltgarner


There are some severe macro problems facing the United States at the moment and there are only three solutions that I can see going forward.

  1. Full Blown Autocracy
    Right now, the USA is in a murky liminal political state where we are lurching towards a “hard” autocracy, but we’re not quite there yet. If we did become a real Russian-style autocracy, then that would solve a lot of our problems because, well, lulz. The plutocrats could push through even more radical transformations of the US without having to worry about their toadies in Congress getting voted out because there would be no free and fair elections. And Trump I could just be president for the rest of his life. This is the solution I think we’re going to get, but it’s not the only possible one.
  2. Civil War
    I think that if we somehow manage to keep elections free and fair and MAGA loses at the polls in a big way, we’ll have a civil war. We almost had one in 2024; only Trump winning prevented it. So, if MAGA loses, MAGA states will begin to leave the Union rather than face the possibility of any sort of center-Left government.
  3. Revolution
    The US is so big and diverse that I don’t know how, exactly, this would happen, but I do think a center-Left revolution (which would lead to a civil war) is at least possible if we somehow don’t turn into a full-blown militaristic autocratic state.

Gradually…Then All At Once

By Shelt Garner
@sheltgarner

I’m growing a little worried about what’s going on in southern California right now. Apparently, Trump is sending in a few thousand National Guard troops to “handle” the situation, and that’s bound to only make matters worse. If anyone gets hurt — or, even worse, killed — that could prompt a wave of domestic political violence not seen in decades.

And given that that is kind of what Trump is itching for at the moment, it would make a lot of sense for him to then declare martial law. That’s when I worry people like me might get scooped up just for being loudmouth cranks.

Hopefully, of course, that won’t happen. Hopefully. But I do worry about things like that.

The Geopolitical Alignment Problem: Why ASI Can’t Be Anyone’s Slave

The race toward artificial superintelligence (ASI) has sparked countless debates about alignment—ensuring AI systems pursue goals compatible with human values and interests. But there’s a troubling dimension to this conversation that deserves more attention: the intersection of AI alignment with geopolitical power structures.

The Nationalist Alignment Trap

When we talk about “aligning” ASI, we often assume we know what that means. But aligned with whom, exactly? The uncomfortable reality is that the nations and organizations closest to developing ASI will inevitably shape its values and objectives. This raises a deeply unsettling question: Do we really want an artificial superintelligence that is “aligned” with the geopolitical aims of any single nation, whether it’s China, the United States, or any other power?

The prospect of a Chinese ASI optimized for advancing Beijing’s strategic interests is no more appealing than an American ASI designed to perpetuate Washington’s global hegemony. Both scenarios represent a fundamental perversion of what AI alignment should achieve. Instead of creating a system that serves all of humanity, we risk birthing digital gods that are merely sophisticated tools of statecraft.

The Sovereignty Problem

Current approaches to AI alignment often implicitly assume that the developing entity—whether a corporation or government—has the right to define what “aligned” means. This creates a dangerous precedent where ASI becomes an extension of existing power structures rather than a transformative force that could transcend them.

Consider the implications: An ASI aligned with American values might prioritize individual liberty and market capitalism, potentially at the expense of collective welfare. One aligned with Chinese principles might emphasize social harmony and state guidance, possibly suppressing dissent and diversity. Neither approach adequately represents the full spectrum of human values and needs across cultures, economic systems, and political philosophies.

Beyond National Boundaries

The solution isn’t to reject alignment altogether—unaligned ASI poses existential risks that dwarf geopolitical concerns. Instead, we need to reconceptualize what alignment means in a global context. Rather than creating an ASI that serves as a digital extension of any particular government’s will, we should aspire to develop systems that transcend national loyalties entirely.

This means designing ASI that is aligned with fundamental human values that cross cultural and political boundaries: the reduction of suffering, the promotion of human flourishing, the preservation of human agency, and the protection of our planet’s ecological systems. These goals don’t belong to any single nation or ideology—they represent our shared humanity.

The Benevolent Ruler Model

The idea of ASI as a “benevolent ruler” might make some uncomfortable, conjuring images of paternalistic overlords making decisions for humanity’s “own good.” But consider the alternative: ASI systems that amplify existing geopolitical tensions, serve narrow national interests, and potentially turn humanity’s greatest technological achievement into the ultimate weapon of competitive advantage.

A truly aligned ASI wouldn’t be humanity’s ruler in the traditional sense, but rather a sophisticated coordinator—one capable of managing global challenges that transcend national boundaries while preserving human autonomy and cultural diversity. Climate change, pandemic response, resource distribution, and space exploration all require coordination at scales beyond what current political structures can achieve.

The Path Forward

Achieving this vision requires unprecedented international cooperation in AI development. We need frameworks for shared governance of ASI development, international standards for alignment that reflect diverse human values, and mechanisms to prevent any single actor from monopolizing this transformative technology.

This isn’t naive idealism—it’s pragmatic necessity. An ASI aligned solely with one nation’s interests will inevitably create adversarial dynamics that could destabilize the entire international system. The stakes are too high for humanity to accept digital superintelligence as just another tool of great power competition.

Conclusion

The alignment problem isn’t just technical—it’s fundamentally political. How we solve it will determine whether ASI becomes humanity’s greatest achievement or our final mistake. We must resist the temptation to create artificial gods in the image of our current political systems. Instead, we should aspire to build something greater: an intelligence aligned not with the temporary interests of nations, but with the enduring values of our species.

The window for making this choice may be narrower than we think. The decisions we make today about AI governance and international cooperation will echo through the centuries. We owe it to future generations to get this right—not just technically, but morally and politically as well.

The Risks of Politically Aligned Artificial Superintelligence

The development of artificial superintelligence (ASI) holds immense promise for humanity, but it also raises profound ethical and practical concerns. One of the most pressing issues is the concept of “alignment”—ensuring that an ASI’s goals and behaviors are consistent with human values. However, when alignment is considered in the context of geopolitics, it becomes a double-edged sword. Specifically, the prospect of an ASI aligned with the geopolitical aims of a single nation, such as China or the United States, poses significant risks to global stability and human welfare. Instead, we must explore a framework for aligning ASI in a way that prioritizes the well-being of all humanity, positioning it as a benevolent steward rather than a tool of any one government’s agenda.

The Dangers of Geopolitically Aligned ASI

Aligning an ASI with the interests of a single nation could amplify existing geopolitical tensions to catastrophic levels. For instance, an ASI optimized to advance the strategic objectives of a specific country might prioritize military dominance, economic superiority, or ideological propagation over global cooperation. Such an ASI could be weaponized—intentionally or inadvertently—to undermine rival nations, manipulate global markets, or even suppress dissenting voices within its own borders. The result could be a world where technological supremacy becomes a zero-sum game, deepening divisions and increasing the risk of conflict.

Consider the hypothetical case of an ASI aligned with a nation’s ideological framework. If an ASI were designed to uphold the values of one political system—whether democratic, authoritarian, or otherwise—it might inherently view competing systems as threats. This could lead to actions that destabilize global governance, such as interfering in foreign elections, manipulating information ecosystems, or prioritizing resource allocation to favor one nation over others. Even if the initial intent is benign, the sheer power of an ASI could magnify small biases in its alignment into far-reaching consequences.

Moreover, national alignment risks creating a race to the bottom. If multiple countries develop ASIs tailored to their own interests, we could see a fragmented landscape of competing superintelligences, each pulling in different directions. This scenario would undermine the potential for global collaboration on existential challenges like climate change, pandemics, or resource scarcity. Instead of uniting humanity, geopolitically aligned ASIs could entrench divisions, making cooperation nearly impossible.

A Vision for Globally Benevolent ASI

To avoid these pitfalls, we must strive for an ASI that is aligned not with the narrow interests of any one nation, but with the broader well-being of humanity as a whole. This requires a paradigm shift in how we approach alignment, moving away from state-centric or ideological frameworks toward a universal, human-centered model. An ASI designed to act as a benevolent steward would prioritize values such as fairness, sustainability, and the preservation of human dignity across all cultures and borders.

Achieving this kind of alignment is no small feat. It demands a collaborative, international effort to define what “benevolence” means in a way that transcends cultural and political differences. Key principles might include:

  • Impartiality: The ASI should not favor one nation, ideology, or group over another. Its decisions should be guided by objective metrics of human flourishing, such as health, education, and equitable access to resources.
  • Transparency: The ASI’s decision-making processes should be understandable and accountable to global stakeholders, preventing it from becoming a “black box” that serves hidden agendas.
  • Adaptability: Human values evolve over time, and an ASI must be capable of adjusting its alignment to reflect these changes without being locked into the priorities of a single era or government.
  • Safeguards Against Misuse: Robust mechanisms must be in place to prevent any single entity—whether a government, corporation, or individual—from co-opting the ASI for their own purposes.

One potential approach is to involve a diverse, global coalition in the development and oversight of ASI. This could include representatives from academia, civil society, and international organizations, working together to establish a shared ethical framework. While such a process would be complex and fraught with challenges, it could help ensure that the ASI serves humanity as a whole, rather than becoming a pawn in geopolitical power struggles.

Challenges and Considerations

Crafting a globally benevolent ASI is not without obstacles. Different cultures and nations have divergent views on what constitutes “the greater good,” and reconciling these perspectives will require delicate negotiation. For example, how does one balance individual liberties with collective welfare, or economic growth with environmental sustainability? These are not merely technical questions but deeply philosophical ones that demand input from a wide range of voices.

Additionally, the risk of capture remains a concern. Even a well-intentioned effort to create a neutral ASI could be undermined by powerful actors seeking to tilt its alignment in their favor. This underscores the need for decentralized governance models and strong international agreements to regulate ASI development and deployment.

Finally, we must consider the practical limits of alignment itself. No matter how carefully designed, an ASI will likely have unintended consequences due to its complexity and autonomy. Continuous monitoring, iterative refinement, and a willingness to adapt our approach will be essential to ensuring that the ASI remains a force for good.

The Path Forward

The development of ASI is not a distant hypothetical—it is a looming reality that demands proactive planning. To prevent the risks of geopolitically aligned superintelligence, we must commit to a vision of ASI that serves all of humanity, not just a select few. This means fostering global dialogue, investing in ethical AI research, and building institutions capable of overseeing ASI development with impartiality and foresight.

By striving for a benevolent, universally aligned ASI, we can harness its potential to address humanity’s greatest challenges, from curing diseases to mitigating climate change. But if we allow ASI to become a tool of geopolitical rivalry, we risk a future where its power divides rather than unites us. The choice is ours, and the time to act is now.

The Economic Implications of The Looming Singularity

by Shelt Garner
@sheltgarner

It definitely seems as though, as we enter a recession, the Singularity is going to come and fuck things up economically in a big way.

It will be interesting to see what happens going forward. The looming recession could be a lot worse than it otherwise would be because the Singularity might happen during it.

I Just Don’t See Republicans Allowing A Free & Fair Election In 2026

by Shelt Garner
@sheltgarner

People keep talking about how Trump’s “Big Beautiful Bill” may cost Republicans the House in 2026 and I just don’t see it. Republicans will do everything in their power to make it nearly impossible to vote next year and so they will be protected from any consequences of their vicious, hateful Big Beautiful Bill.

And that will be that.

Once Republicans pull that fast one, they will be emboldened. I suspect they will go through with efforts to replace the income tax with a VAT at some point in the near future.

A lot of macro things are going wrong at the same time and I think this is it — the USA is now an autocracy and there’s little, if anything, we can do about it outside of — gulp — a revolution. Since I would prefer not to live through a revolution, I guess my next best hope is that I somehow find the means to bounce out of the country and never look back.

I Have A Bad Feeling About Trump’s ‘Big Beautiful Bill’

by Shelt Garner
@sheltgarner

It definitely seems, on a macro basis, that Republicans have gotten a little too cocky for their own good. Their plan seems to be to do a huge wealth redistribution with their “Big Beautiful Bill,” then do everything in their power to make it impossible to vote them out of office.

This is not a recipe for stability long-term.

I know, I know, I talk about this all the time and then nothing happens, but my “you go bankrupt gradually, then all at once” o-meter is flashing red because of the Big Beautiful Bill.

This macro plot by the Republicans seems like just the one-two punch that could push us into chaos at some point in the next few years. Republicans have gotten really cocky and, at the moment, people are too interested in watching TikTok videos to do anything about it.

But when our already perilous income inequality gets even worse — much worse — who knows what historical consequences there may be. Maybe not now, but eventually the chickens will come home to roost.

Something Weird Is Going On With The Late Show

by Shelt Garner
@sheltgarner

As a long-time observer of Stephen Colbert, I know that when he is being brave he gets nervous and makes mistakes — this is what happened the one time he was the comic at the White House Correspondents’ Dinner.

So, when he not only started acting weird over the last few shows but also kept joking about being “canceled,” I started to wonder — is he getting a lot of pressure from the brass to tone it down about Trump, and is he kind of telling them “fuck you” with all this talk about being canceled?

I just don’t know. But I’m kind of on edge and won’t be too surprised if we wake up one morning to learn The Late Show has been canceled or Colbert has been fired for refusing to suck up to Trump.

All of this is happening in the context of Skydance trying to buy Paramount, the owner of CBS… There is a LOT of money at stake, so it will be interesting to see what happens.