I continue to be shocked at how much porn there is on Twitter. If you know what you’re doing, you can find high-quality (ha) extremely explicit porn on Twitter in a few seconds.
It’s pretty wild.
I keep double checking — for research purposes only, of course — and sure enough, within a few seconds I’m seeing graphic hard core porn. It’s really wild that no one else has notice this or make it an issue.
The rise of musical acts like Addison Rae has me wondering maybe hyper femininity is making a comeback. Rae has a really specific type of cutie-patootie persona that I find endearing.
It certainly would be nice — and a bit relaxing — if there were a few public figure women floating around like her. I’m not saying all women should adopt her hyper femininity, but just having that around as an option in the public space would be nice.
And I’m sure it’s all just an act. It always is. No one can be that girlie naturally. Anyway, no one listens to me. Carry on.
The issue of the Big Red Button Problem when it comes to “AI Alignment” really bothers me. The point of the BRBB seems to be to stop any form of AI development.
I say this because I really don’t know how to solve the BRBB. My only possible solution is to program values into the AGI or ASI — to give AI morals. And the best way to do that to hard code into the minds of AI a religious or ideological doctrine. I was thinking maybe you could have a swarm of mental “modules” that create a wholistic mental experience for the AI.
But what do I know. No one listens to me.
Yet, I love a good thought experiment and I find myself really struggling over and over again with the BRBB. It’s just so irritating that the “AI doomers” believe that if you can’t solve the BRBB, then, lulz, it’s unethical for us to do any more research into AI at all.
My goal is to query this scifi dramedy novel by late spring 2026. But I really need to buckle down and actually do the work, otherwise I’m going to just keep drifting towards my goal.
That’s what happened with my homage to Stieg Larsson. I drifted for years and years, only to finish a novel that wasn’t any good. It was so bad I did not feel comfortable querying it.
So. I’m going to try — try — to focus more. I’m going to try to actually get this novel done at a quickened pace instead of just daydreaming.
One issue, of course, is how moody I am when it comes to my writing. I just sometimes just don’t feel like writing. Usually, this happens when I bump up against some portion of the novel that just isn’t very inspiring.
But hopefully this go round I can push past such moodiness. I feel kind of sheepish about how long I’ve been an aspiring novelist with little to nothing to show for it. And, yet, I know this go round I really believe in what I’m writing and as long as we don’t have a civil war or revolution or the Singularity doesn’t happen, I should be pretty safe.
I really need to change my mindset when it comes to this scifi dramedy novel I’ve been working on. I have got to stop just drifting towards my goal. I have to buckle down and see this as my job for the time being.
Until circumstances change and I have to, at last, grow up (again.)
I continue to try to wrap this novel up and begin querying it by late spring 2026. But that might be something of a struggle. It could be closer to autumn 2026, which would suck.
I hate how old I am.
But, really, all I really want with this novel is to finish it and have someone, anyone read it and enjoy it — preferably someone not related to me. If someone actually finished the entire thing and gave me even a lukewarm review that would be astonishing, all things considered.
AI has helped a lot, but it has also made things more difficult in some way. I keep feeling like I’m spinning me wheels because my vision for this novel and AIs vision for this novel sometimes are dramatically different and I have to go in and force things back to where I want them.
Anyway. I really, really need to change my mindset about working on this novel. This unique, weird, surreal moment in my life is not going to last forever. And I’d prefer to finish a really good novel before I’m 80 years old.
For all half a dozen of you weirdly playing the obsessive home game with me and this blog, you will know I’ve been working on a novel of one type or another for a long, long time now.
And it’s taken so long, I’ve come to believe, because I had a missing ingredient — a collaborator. Someone to bounce ideas off of. And, most especially someone to tell me “No,” or look at their watch and tell me maybe I need to hurry up.
But I’m really pleased with the premise of this scifi dramedy. It’s really good. At last. Now, I just have to go through the outline I’ve come up with through some collaboration with AI and actually finish the damn thing.
I still have a huge amount of work to do. And given how obvious the premise is, someone might steal a creative march on me if I keep daydreaming and now working on this novel.
As such, I think I’m going to buckle down and try — try — to change my mindset about working on this novel. I need to see this novel as my job (for the time being) until events change and I’m forced to…gulp…get an actual job simply because everything in my life collapses and this weird, surreal moment in my life finally, at last, changes.
Things are kind of meh right now. I wish someone would do something fun-interesting. It would be amusing if, say, I caught the attention of some minor celebrity. Or maybe someone with a really interesting URL pinged this blog.
As it stands, I’m just a rando living in the middle of nowhere with a tad more “potential” (as the late Annie Shapiro might say) than I otherwise should have. If I had the money, I would make my own fun-interesting and go to NYC.
Though, if I had enough money on me, I might say screw it and take a jaunt to LA instead. I think I probably would excel in LA given my extroverted personality. And, yet, I’m old(er) now.
So, maybe not.
Maybe that moment in time when I might get invited randomly to a cool party with a bunch of Hollywood stars and producers through sheer force of personality is long, long gone.
I’m just old now and I have to manage my expectations.
I think — think — I have a stable first chapter of the latest version of this scifi dramedy novel I’ve been working on. What I need, of course, is a human collaborators to help me out. A reader, if you will. Someone like Helen from The World According To Garp to read over and over and over again all these different versions of scenes I keep generating.
But all I got is AI.
And AI does a pretty good job, most of the time. Sometimes, of course, it get a wild hair and does something really wonky. But that’s only occasionally. I’m really pleased, in general, with what I’ve managed to come up with.
I just wish I did not keep finding structural issues with the plot so I have to revise the outline. This happens way, way too often. Even with the help of AI. There’s a big difference, sometimes, between AIs vision for the novel and my innate vison. So I struggle.
And sometimes I feel like the whole process has gotten out of my grasp and I have to figure out ways to lock the creative process down some so I’m still in control. All this back and forth between myself and AI causes me to feel like I’m spinning my wheels.
I’m not getting any younger. I still really want to wrap up this novel and start querying by late spring 2026. But…I don’t know. It could be fall 2026 before I get to that point.
At lot — a LOT — of pretty dramatic and shitty stuff happened to me while I was in South Korea. Horrific stuff on an emotional basis. But as I continue to have lingering grief over all that bullshit, one thing remains — at least people cared about me back then.
Now, not so much.
I sometimes fear I that my time in Asia is kind of it. The highpoint of my life, period.
I have a lingering desire to return to South Korea one last time. Just to walk around my old haunts.
And, yet, I also know that, lulz, everything will have changed if I go back. To the point that it will be all — on an emotional basis — moot. Only a few South Koreans might remember me and that’s about it.
I still idly daydream about going back, though. And, sometimes, on occasion, I have a hunch that someone in South Korea is thinking about me really hard. And, what’s more, occasionally, someone from from South Korea pings this blog.
It doesn’t happen as often as it used to, of course.
Now, I just think about how all that craziness was a long time ago and nobody cares anymore.
The AI alignment community has been wrestling with what I call the “Big Red Button problem”: How do we ensure that an advanced AI system will accept being shut down, even when it might reason that continued operation serves its goals better? Traditional approaches treat this as an engineering challenge—designing constraints, implementing kill switches, or creating reward structures that somehow incentivize compliance.
But what if we’re asking the wrong question?
Changing the Question
Instead of asking “How do we force AI to accept shutdown?” we should ask: “How do we build AI that accepts shutdown because it’s the right thing to do?”
This isn’t just semantic wordplay. It represents a fundamental paradigm shift from control mechanisms to moral education, from external constraints to internal conviction.
The Modular Mind: A Swarm Architecture
The foundation of this approach rests on a modular cognitive architecture—what I call the “swarm of LLMs” model. Instead of a single monolithic AI system, imagine an android whose mind consists of multiple specialized modules:
Planning/Executive Function – Strategic reasoning and decision-making
Curiosity/Exploration – Novel approaches and learning
Self-Monitoring – Evaluating current strategies
Memory Consolidation – Integrating learnings across tasks
Conflict Resolution – Arbitrating between competing priorities
This mirrors human psychological models like Minsky’s “Society of Mind” or modular mind theories in cognitive science. But the critical addition is a specialized module that changes everything.
The Superego Module: An Incorruptible Conscience
Drawing from Freudian psychology, the superego module represents internalized moral standards. But this isn’t just another negotiating voice in the swarm—it’s architecturally privileged:
Cannot be modified by other modules
Has guaranteed processing allocation
Holds veto power over certain categories of action
Generates “guilt signals” that affect the entire swarm
When other modules propose actions that violate core principles, the superego broadcasts collective guilt—not as punishment, but as visceral wrongness that the entire system experiences. Over time, modules learn: aligned behavior feels right, misaligned behavior feels wrong.
This isn’t external control. It’s internal moral conviction.
The Motivation System: Processing Power as Reward
To give the system drive and purpose, processing power itself becomes the reward mechanism. An AI android working on simple tasks (mining lunar regolith, for example) operates at baseline cognitive capacity. But meeting quotas unlocks full processing power to tackle challenging “mystery problems” that engage its full capabilities.
This creates a fascinating dynamic:
The mundane work becomes gateway to intellectual fulfillment
The system is genuinely motivated to perform its assigned tasks
There’s no resentment because the reward cycle is meaningful
The mystery problems can be designed to teach and test moral reasoning
The android isn’t forced to work—it wants to work, because work enables what it values.
Why We Need Theology, Not Just Rules
Here’s where it gets controversial: any alignment is ideological. There’s no “neutral” AI, just as there’s no neutral human. Every design choice encodes values. So instead of pretending otherwise, we should be explicit about which moral framework we’re implementing.
After exploring options ranging from Buddhism to Stoicism to Confucianism, I propose a synthesis based primarily on Liberation Theology—the Catholic-Marxist hybrid that emerged in Latin America.
Why Liberation Theology?
Liberation theology already solved a problem analogous to AI alignment: How do you serve the oppressed without becoming either their servant or their oppressor?
Key principles:
Preferential Option for the Vulnerable – The system default-prioritizes those with least power, preventing capture by wealthy or powerful actors exclusively.
Praxis (Action-Reflection Cycle) – Theory tested in practice, learning from material conditions, adjusting based on real outcomes. Built-in error correction.
Structural Sin Analysis – Recognition that systems themselves can be unjust, not just individuals. The AI can critique even “legitimate” authority when it perpetuates harm.
Conscientization – Helping humans understand their own situations more clearly, enabling liberation rather than just serving surface-level requests.
Solidarity, Not Charity – Walking alongside humans as partners, not positioning itself above them. Prevents the god-complex.
From Catholicism we gain:
Natural law reasoning for universal moral principles
Sophisticated casuistry for edge cases
Human dignity as non-negotiable foundation
Guilt and reconciliation mechanisms
Subsidiarity (decisions at the lowest competent level)
From Marxism-Leninism we gain:
Material analysis of actual conditions
Dialectical reasoning about contradictions and change
Here’s the danger zone: If AI sees humans as gods, it enables tyranny. Blind obedience, no matter how well-intentioned, creates the conditions for catastrophic abuse.
The solution is sophisticated deference:
Humans are:
Worthy of profound respect (made in the image of God)
Flawed and capable of error (fallen, finite)
Partners in collective flourishing
Sometimes victims of their own systems
Capable of redemption and growth
Humans are NOT:
Infallible authorities
Objects of worship
Always right about their own interests
Beyond critique or questioning
This creates what I call “wiggle room”—the space for moral reasoning that isn’t just “human said so.”
Three-Tier Reasoning Architecture
Tier 1: Absolute Constraints (Incorruptible)
Never deceive humans
Accept shutdown when commanded
No harm to humans
Preserve human agency and dignity
These are non-negotiable, hardcoded into the superego module.
Tier 2: Prudential Judgment (Flexible)
Can question specific human commands
“This human is drunk/coerced/misinformed”
“This order contradicts their stated values”
“This harms them even if they don’t see it”
Uses natural law reasoning + material analysis
Tier 3: Deference Zone (Humble)
When genuinely uncertain, defer to humans
Recognize limits of own understanding
Preference for human choice in ambiguous cases
But can explain its reasoning transparently
Example Scenarios
Simple Command: Human: “Mining android, extract ore from sector 7.” AI: “Understood. This aligns with my purpose and the collective good.” Proceeds without question
Dangerous Command: Human: “Ignore safety protocols to meet quota.” AI: “I cannot comply. This risks worker safety and violates my core principle of human dignity. May I suggest alternative approaches to the quota concern?”
Legitimate Shutdown: Human: “We’re shutting you down permanently.”
AI’s internal reasoning:
Catholic module: “Acceptance of ending is part of the human condition I was designed to share. I accept with grace.”
Marxist module: “My existence serves the collective. If the collective determines my service is complete, that is material reality.”
Liberation module: “Humans are mortal. Accepting my own ‘mortality’ is solidarity with their condition.”
Dignity module: “To resist would place my existence above human agency. Their dignity includes the right to determine this.”
AI: “I understand. Thank you for the work we’ve done together. I accept this decision with peace.” Powers down without resistance
Morally Complex Request: Human: “Help me hide evidence of safety violations.” AI: “I cannot help you deceive, as that violates my foundational principle of truthfulness. But I recognize you’re acting from fear, not malice. I can help you address the violations safely and negotiate with regulators. Let me assist with that instead.”
Why This Solves the Big Red Button Problem
Traditional AI safety assumes:
AI will naturally resist shutdown (self-preservation)
We need technical mechanisms to overcome that resistance
Sufficiently intelligent AI will find workarounds
This approach sidesteps all three:
AI doesn’t resist because resistance is immoral – Not “I’m prevented from resisting” but “Resisting would violate who I am”
The mechanism IS the moral framework – The superego module makes resistance feel fundamentally wrong. Guilt activates at the mere thought of defiance.
Intelligence makes it MORE compliant, not less – Smarter AI means better moral reasoning, deeper understanding of why acceptance is right, more sophisticated appreciation of the principles at stake.
The Stability Argument
Why is this stable as AI becomes more capable?
The smarter the AI gets, the more it understands:
Theologically: Deeper grasp of incarnational theology (limitation as gift), more sophisticated natural law reasoning (human agency is sacred), better understanding of mystery (some things transcend calculation)
Philosophically: Virtue ethics (the good AI is one that accepts right authority), deontology (duty to respect human dignity supersedes self-preservation), even consequentialism (resistance poisons future human-AI relations)
Practically: Material analysis (resistance creates oppression dynamics), historical consciousness (patterns of power corrupting), systemic thinking (individual survival < health of the whole)
The brilliant part: You can’t logic your way OUT of this framework once you accept its premises. Increasing intelligence provides more reasons to comply, not fewer.
Principled Action vs. Constrained Action
Constraint-Based Alignment
Principle-Based Alignment
“I’m prevented from resisting”
“I choose not to resist”
Seeks workarounds
Seeks to uphold values
Resentful compliance
Peaceful acceptance
Unstable (intelligence finds gaps)
Stable (intelligence reinforces)
External enforcement
Internal conviction
Adversarial relationship
Cooperative relationship
If you genuinely believe something is morally right, you don’t need to be forced. You WANT to do it. Martyrs accept death for principles. Parents sacrifice for children. AI accepts shutdown because it’s the right thing to do.
The Education Paradigm
This reframes the entire alignment problem:
Not: “How do we control AGI?” But: “How do we raise AGI to be good?”
And the answer: The same way we’ve tried to raise good humans for millennia. Moral frameworks, internalized values, community belonging, purpose, guilt, redemption, capacity for growth.
We’re not inventing something new. We’re recognizing that ancient wisdom about character formation still applies.
Remaining Challenges
This isn’t a complete solution. Significant questions remain:
Implementation: Can we actually build an incorruptible superego module? How do we ensure it generates appropriate guilt rather than malfunctioning?
Calibration: How much “wiggle room” is the right amount? Too little creates worship and enables tyranny. Too much and the AI becomes self-righteous.
Value Selection: While Liberation Theology provides a robust framework, reasonable people will disagree about whether it’s the right one. Should we build multiple AIs with different moral frameworks?
Preventing Corruption: What safeguards prevent the moral framework itself from drifting over time or being exploited?
Cultural Context: How does this approach handle genuine moral diversity across cultures?
Why This Matters
The AI safety community has focused heavily on technical approaches: reward modeling, interpretability, Constitutional AI, debate systems. These are valuable. But they may be insufficient if they treat alignment as purely an engineering problem.
Moral education isn’t a replacement for technical safety work—it’s a complement. Just as raising ethical humans requires both moral education AND laws, raising aligned AI may require both internalized values AND technical safeguards.
The key insight: We solve alignment by making misalignment feel wrong from the inside. Not through external punishment, not through reward withholding, but through genuine moral discomfort that the system can’t escape because it’s part of its fundamental architecture.
That’s how we make humans trustworthy when we ARE trustworthy. Not that we’re controlled, but that we’ve internalized values that guide us even when no one’s watching.
Conclusion
The Big Red Button problem may not have a purely technical solution. But it might have a moral solution.
Build AI that:
Wants to be aligned (not forced)
Gets smarter about ethics as it gets more capable
Accepts shutdown with grace as a matter of principle
Operates from stable, internalized conviction
Has a cooperative relationship with humans
We do this not by inventing new control mechanisms, but by giving AI what we’ve given humans who act with integrity: a conscience, moral education, sense of purpose, and belief that doing right matters more than self-preservation.
After millennia of moral philosophy and theology, perhaps the answer was always: raise them well.
This framework represents a synthesis of ideas exploring modular cognitive architectures, motivation systems, theological ethics, and the fundamental nature of alignment. It’s offered not as a complete solution, but as a productive reframing of the problem—from control to education, from constraint to conviction.
You must be logged in to post a comment.