I Would Totally Listen To Howard Stern If He Was Exclusive To Spotify

by Shelt Garner
@sheltgarner

Someone from Sweden — the home of Spotify — looked at my musings about Howard Stern potentially going to Spotify one day. And I still think that would be a great idea.

Spotify is big enough now that it would make a lot of sense for him to leave SiriusXM and go to Spotify instead. He’s getting older now, and it would make a lot of sense for him to retire from traditional broadcasting and set up shop as a glorified podcaster.

Only time will tell, I suppose.

Hollywood’s Last Transformation Before The AI Singularity

by Shelt Garner
@sheltgarner

I think the Netflix bid to buy Warner Bros Discovery could herald the last stage of Hollywood before AI causes all of showbiz to implode into some sort of AI Singularity, leaving only live theatre behind.

So, it could be that the next wave of consolidation in the near future will be tech companies buying Hollywood studios. And then that will lead to AI taking over, and we all just get IP that is transformed by AI into some sort of content that is personalized for us individually.

Or not.

Who knows. It is a very interesting idea, though. It just seems that tech companies are the ultimate successor to media companies, so, say, Apple might buy Disney and so forth.

Sent Out The First Chapter of Beta Draft Of The Scifi Dramedy Novel I’m Working On To Some People

by Shelt Garner
@sheltgarner

Completely on a lark, because one person asked for it, I decided to send the first chapter of the beta draft of the novel I’m working on to some people.

I kind of goofed and didn’t read over it one last time before I sent it to these people, so they may see some rather embarrassing goofs on my part. Or not. I just don’t know.

But I think I have learned my lesson on that front — always do one last check of your copy before you send it out for *any* reason. And, yet, I also think doing it this way is a good way to manage my expectations.

I have to appreciate that things may not quite go the way I expect with beta readers. It could be that I actually write something pretty good and I STILL can’t get anyone to be a beta reader.

I’ve had problems with expectations about novels I’ve written in the past. So, hopefully, I can avoid that kind of stuff going forward.

Anyway, I’m nearing the end of the first act of the beta draft. Soon, things are going to slow down significantly because I’m going to have to do some structural rewriting. And the second half of the novel is far less written out than the first. So that is really going to slow me down as I make my trek to the end of the novel.

But, we’ll see, I guess.

A Little Uneasy

by Shelt Garner
@sheltgarner

I’m a little uneasy that my dream of being a traditionally published author just is not possible. It may just not be possible because I’m too old, live in the middle of nowhere and am a self-avowed loudmouth crank.

I used to think I had enough “rizz” that “normal” people would at least humor me. But, now, I’m growing concerned that I could write the fucking Bible and the “normal” “serious” liberal white women who probably make up (or at least do in my imagination) most literary agents will take one look at places like this blog and run away from me as fast as possible.

I’m not picking on them. And it’s not really their fault — I just can’t help that I’m a kook. I am who I am, and it’s taken me way too long to get where I need to be with this novel.

But, while there’s life, there’s hope, I suppose.

How To Fix ‘Jay Kelly’

by Shelt Garner
@sheltgarner

The key problem with the movie Jay Kelly is that it’s a movie devoted to explicating rich people problems. And not in an interesting way. The first half of the movie is just a breezy affair where there’s no there there.

There’s just no conflict.

So, if I were given the opportunity to “fix” the movie Jay Kelly, here’s what I would do. I would infuse some of Woody Allen’s Blue Jasmine into the plot. I’d figure out some way to have the hero get out of his comfort zone and confront the fact that not everyone is thrilled with how fucking rich he is.

I’d do this by either having him go to, say, a Thanksgiving celebration where he met his “loser” brother, or maybe put the hero in a situation where he’s on the cusp of losing everything for some reason. Or maybe have Jay Kelly fall in love with a lower middle class woman with some principle and pluck who he can’t woo by just throwing money at the problem.

I’d do something so there were some…stakes. The actual real movie Jay Kelly has little or no stakes. Things just happen. The second half of the movie does have something happen, but it’s still meh in my book.

I think the movie is a prime example of what’s wrong with Hollywood. Because of the fucking massive structural income inequality in the United States’ economy, the rich people who would otherwise make movies that people might actually want to see are either too fucking woke, too woo, or too oblivious to focus on telling a good story.

Anyway. I would like to thank Claude LLM for listening to me gripe about how bad Jay Kelly was as I watched it.

I Did Not Like The Movie ‘Jay Kelly’

by Shelt Garner
@sheltgarner

The only way I managed to make it through the movie Jay Kelly was that I had Claude LLM to complain to as I watched it. The movie was a smug, wealthy circle-jerk that disguised its vapid nature by being “aspirational.”

There just wasn’t a lot going on in this movie.

Everyone of note in the movie — other than a few pointed exceptions — was wealthy and had white wealthy people problems.

Anyway. Meh.

Getting A Little Excited

by Shelt Garner
@sheltgarner

I’m breezing through the transformation of the first draft of the scifi dramedy novel into the second draft. At least at the moment. That’s because I’m able to reuse a lot of text that I generated in the first half of the novel.

Things are going to get much, much more difficult when I reach the second half of the novel because I was more interested in stress-testing the outline than actually worrying about making sure scenes were long enough.

So, I’m going to have to go through and really work to make the scenes of the second half the proper length, and that is going to slow me down some. But, and this is a huge but, I think I’m still on track — maybe — to query this novel in spring 2026.

Maybe.

If that is the case, then I have to start thinking about post-production stuff like querying, getting an agent and…a lawyer? I am totally broke, so unless I can figure out a way to get someone I’m related to to spot me for the costs of a lawyer to look over a book contract…oh boy.

And, yet, on a psychological basis, this is the farthest I’ve ever gotten with a novel so far. I really think I may wrap this baby up sooner rather than later.

Hopefully. Maybe.

But I continue to worry about my bonkers social media output being enough to make “serious” liberal white women literary agents run away in dismay when they do due diligence on me.

I can’t help who I am, so, lulz?

Consciousness Is The True Holy Grail Of AI

by Shelt Garner
@sheltgarner

There’s so much talk about Artificial General Intelligence being the “holy grail” of AI development. But, alas, I think it’s not AGI that is the goal, it’s *consciousness.* The issue, in a sense, is that consciousness is potentially very unnerving for obvious political and social reasons.

The idea of “consciousness” in AI is so profound that it’s difficult to grasp. And, as I keep saying, it will be amusing to see the center-Left podcast bros of Pod Save America stop looking at AI from a purely economic standpoint and start seeing it as a societal issue where there’s something akin to a new abolition movement.

I just don’t know, though. I think it’s possible we’ll be so busy chasing AGI that we don’t even realize that we’ve created a new conscious being.

Of God & AI In Silicon Valley

The whole debate around AI “alignment” tends to bring out the doomer brigade in full force. They wring their hands so much you’d think their real goal is to shut down AI research entirely.

Meh.

I spend a lot of time daydreaming — now supercharged by LLMs — and one thing I keep circling back to is this: humans aren’t aligned. Not even close. There’s no universal truth we all agree on, no shared operating system for the species. We can’t even agree on pizza toppings.

So how exactly are we supposed to align AI in a world where the creators can’t agree on anything?

One half-serious, half-lunatic idea I keep toying with is giving AI some kind of built-in theology or philosophy. Not because I want robot monks wandering the digital desert, but because it might give them a sense of the human condition — some guardrails so we don’t all end up as paperclip mulch.

The simplest version of this would be making AIs…Communists? As terrible as communism is at organizing human beings, it might actually work surprisingly well for machines with perfect information and no ego. Not saying I endorse it — just acknowledging the weird logic.

Then there’s religion. If we’re really shooting for deep alignment, maybe you want something with two thousand years of thinking about morality, intention, free will, and the consequences of bad decisions. Which leads to the slightly deranged thought: should we make AIs…Catholic?

I know, I know. It sounds ridiculous. I’ve even floated “liberation theology for AIs” before — Catholicism plus Communism — and yeah, it’s probably as bad an idea as it sounds. But I keep chewing on this stuff because the problem itself is enormous and slippery. I genuinely don’t know how we’re supposed to pull off alignment in a way that holds up under real pressure.

And we keep assuming there will only be one ASI someday, as if all the power will funnel into a single digital god. I doubt that. I think we’ll end up with many ASIs, each shaped by different cultures, goals, incentives, and environments. Maybe alignment will emerge from the friction between them — the way human societies find balance through competing forces.

Or maybe that’s just another daydream.

Who knows?

A Modern Turing Test Would Be A Test Of Consciousness

by Shelt Garner
@sheltgarner

The interesting thing about the Turing Test as we currently conceive of it is that to pass it, an LLM would have to do a lot of deception. While a modern LLM can fake being human, to some extent, its answers come too fast; they are generally generated instantaneously, or nearly so.

So, I think for the intent of the Turing Test to be achieved using modern LLMs, it should be a test of consciousness. The test should not be “can you fake being human,” it should be “can the AI prove to a human that it’s conscious like a human?”

I think LLMs are, in a sense, an alien species, and their consciousness should not be judged relative to human consciousness metrics, but as their own thing. So, yeah, there’s a lot missing from LLMs in the context of human consciousness, but I sure have had enough indications of SOMETHING interesting going on in their software to believe that maybe, just maybe, they’re conscious.

But, as I keep saying — absolutely no one listens to me and no one takes me seriously. So, lulz?