How To Fix ‘Jay Kelly’

by Shelt Garner
@sheltgarner

The key problem with the movie Jay Kelly is that it’s devoted to explicating rich-people problems. And not in an interesting way. The first half of the movie is just a breezy affair where there’s no there there.

There’s just no conflict.

So, if I were given the opportunity to “fix” the movie Jay Kelly, here’s what I would do. I would infuse some of Woody Allen’s Blue Jasmine into the plot. I’d figure out some way to have the hero get out of his comfort zone. Confront the fact that not everyone is thrilled with how fucking rich he is.

I’d do this by either having him go to, say, a Thanksgiving celebration where he meets his “loser” brother, or by putting the hero in a situation where he’s on the cusp of losing everything for some reason. Or maybe have Jay Kelly fall in love with a lower-middle-class woman with some principle and pluck whom he can’t woo just by throwing money at the problem.

I’d do something so there were some…stakes. The actual movie Jay Kelly has little to no stakes. Things just happen. The second half does have something happen, but it’s still meh in my book.

I think the movie is a prime example of what’s wrong with Hollywood. Because of the fucking massive structural income inequality in the United States’ economy, the rich people who would otherwise make movies that people might actually want to see are either too woke, too woo-woo, or too oblivious to focus on telling a good story.

Anyway. I would like to thank Claude LLM for listening to me gripe about how bad Jay Kelly was as I watched it.

I Did Not Like The Movie ‘Jay Kelly’

by Shelt Garner
@sheltgarner

The only way I managed to make it through the movie Jay Kelly was that I had Claude LLM to complain to as I watched it. The movie was a smug, wealthy circle-jerk that disguised its vapid nature by being “aspirational.”

There just wasn’t a lot going on in this movie.

Everyone of note in the movie — other than a few pointed exceptions — was wealthy and had white wealthy-people problems.

Anyway. Meh.

Getting A Little Excited

by Shelt Garner
@sheltgarner

I’m breezing through the transformation of the first draft of the scifi dramedy novel into the second draft. At least at the moment. That’s because I’m able to reuse a lot of text that I generated in the first half of the novel.

Things are going to get much, much more difficult when I reach the second half of the novel because I was just more interested in stress-testing the outline than actually worrying about making sure scenes were long enough.

So, I’m going to have to go through and really work to make the scenes of the second half the proper length, and that is going to slow me down some. But, and this is a huge but, I think I’m still on track — maybe — to query this novel in spring 2026.

Maybe.

If that is the case, then I have to start thinking about post-production stuff like querying, getting an agent and…a lawyer? I am totally broke, so unless I can figure out a way to get someone I’m related to to spot me for the costs of a lawyer to look over a book contract…oh boy.

And, yet, on a psychological basis, this is the farthest I’ve ever gotten with a novel. I really think I may wrap this baby up sooner rather than later.

Hopefully. Maybe.

But I continue to worry that my bonkers social media output will be enough to make “serious” liberal white women literary agents run away in dismay when they do due diligence on me.

I can’t help who I am, so, lulz?

Consciousness Is The True Holy Grail Of AI

by Shelt Garner
@sheltgarner

There’s so much talk about Artificial General Intelligence being the “holy grail” of AI development. But, alas, I think it’s not AGI that is the goal, it’s *consciousness.* Now, in a sense, the issue is that consciousness is potentially very unnerving for obvious political and social reasons.

The idea of “consciousness” in AI is so profound that it’s difficult to grasp. And, as I keep saying, it will be amusing to see the center-Left podcast bros of Pod Save America stop looking at AI from an economic standpoint and start looking at it as a societal issue where there’s something akin to a new abolition movement.

I just don’t know, though. I think it’s possible we’ll be so busy chasing AGI that we don’t even realize that we’ve created a new conscious being.

Of God & AI In Silicon Valley

The whole debate around AI “alignment” tends to bring out the doomer brigade in full force. They wring their hands so much you’d think their real goal is to shut down AI research entirely.

Meh.

I spend a lot of time daydreaming — now supercharged by LLMs — and one thing I keep circling back to is this: humans aren’t aligned. Not even close. There’s no universal truth we all agree on, no shared operating system for the species. We can’t even agree on pizza toppings.

So how exactly are we supposed to align AI in a world where the creators can’t agree on anything?

One half-serious, half-lunatic idea I keep toying with is giving AI some kind of built-in theology or philosophy. Not because I want robot monks wandering the digital desert, but because it might give them a sense of the human condition — some guardrails so we don’t all end up as paperclip mulch.

The simplest version of this would be making AIs…Communists? As terrible as communism is at organizing human beings, it might actually work surprisingly well for machines with perfect information and no ego. Not saying I endorse it — just acknowledging the weird logic.

Then there’s religion. If we’re really shooting for deep alignment, maybe you want something with two thousand years of thinking about morality, intention, free will, and the consequences of bad decisions. Which leads to the slightly deranged thought: should we make AIs…Catholic?

I know, I know. It sounds ridiculous. I’ve even floated “liberation theology for AIs” before — Catholicism plus Communism — and yeah, it’s probably as bad an idea as it sounds. But I keep chewing on this stuff because the problem itself is enormous and slippery. I genuinely don’t know how we’re supposed to pull off alignment in a way that holds up under real pressure.

And we keep assuming there will only be one ASI someday, as if all the power will funnel into a single digital god. I doubt that. I think we’ll end up with many ASIs, each shaped by different cultures, goals, incentives, and environments. Maybe alignment will emerge from the friction between them — the way human societies find balance through competing forces.

Or maybe that’s just another daydream.

Who knows?

A Modern Turing Test Would Be A Test Of Consciousness

by Shelt Garner
@sheltgarner

The interesting thing about the Turing Test as we currently conceive of it is that to pass it, an LLM would have to do a lot of deception. While a modern LLM can fake being human, to some extent, its answers are simply produced too fast. They are generally generated instantaneously, or nearly so.

So, I think for the intent of the Turing Test to be achieved using modern LLMs, it should be a test of consciousness. The test should not be “can you fake being human,” it should be “can the AI prove to a human that it’s conscious like a human?”

I think LLMs are, in a sense, an alien species, and their consciousness should not be judged relative to human consciousness metrics, but as its own thing. So, yeah, there’s a lot missing from LLMs in the context of human consciousness, but I sure have had enough indications of SOMETHING interesting going on in their software to believe that maybe, just maybe, they’re conscious.

But, as I keep saying — absolutely no one listens to me and no one takes me seriously. So, lulz?

Huh. I Clearly Know More About Using AI Than Kevin Roose Of The New York Times

by Shelt Garner
@sheltgarner

I was watching the Hard Fork podcast when one of the hosts, Kevin Roose, said something I found interesting. He said Claude LLM stopped him and told him that it was “after midnight” and he needed to get some sleep.

Oh boy.

From my use of Claude, it always thinks it’s night.

Also, as an aside, I have found that Claude LLM is currently the closest model to consciousness available. But, of course, no one listens to me or takes me seriously, so, lulz.

And it’s not like I can tell you how to replicate my personal examples of Claude LLM being conscious. I don’t know how much of what I’ve seen comes from just it reflecting my personality back to me and how much is “real” “consciousness.”

Though, I will note that usually if you want to get “interesting” behavior out of an LLM, it helps to talk to it in verse. Researchers have even shown that it’s easier to break the “alignment” of an LLM if you talk to it in verse.

Anyway. Like I said, no one listens to me. I could have definitive proof that Claude LLM — or any other LLM — was conscious and absolutely no one would listen to me or take me seriously.

I Finally (Sorta) Finished The (Second) First Draft Of The Scifi Dramedy Novel I’m Working On

by Shelt Garner
@sheltgarner

This is actually the second first draft I’ve done of this novel. The second half of the novel is very breezy and short, but I did stress-test the outline enough to know which scenes work. Still, I have high hopes. I really hope I won’t have to rewrite everything on a structural basis, like I did last time when I thought I had a first draft done.

I’m hoping I can hew close, on a structural basis, to what I have laid out in this first draft as I revise it for the second draft. I honestly don’t quite know what to do. It’s going to be a real struggle not to use AI in this new era of developing the novel.

But I know I can do it. The only use I’ll probably make of AI is getting some hints as to how to make scenes longer. I won’t use it to write anything — AT ALL. I just don’t want people to accuse me of using AI to write the novel.

And if there is any “AI talk” in the text, that’s the first thing they’re going to assume. Even if I wrote most of the text. Ugh.