No, ChatGPT, *I* Am Still The Writer

by Shelt Garner
@sheltgarner

I see ChatGPT as a great — wonderful even — development tool when it comes to writing this novel. I’ve gotten pretty good at using it to speed up the process of writing scene summaries.

I use scene summaries to give myself something of an agenda before I sit down to write out a scene. With ChatGPT, what could have taken hours can now be done in pretty much a few minutes.
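
For what it’s worth, you don’t even need the chat window for this kind of grunt work. Here is a rough sketch of how you might script it with OpenAI’s Python library; to be clear, this is just an illustration, not my actual workflow, and the model name, prompt wording, and helper function are my own assumptions:

```python
# A rough sketch, not my actual workflow: drafting a scene summary
# programmatically with OpenAI's Python client. The model name,
# prompt wording, and function name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_scene_summary(premise: str, goal: str) -> str:
    """Ask the model for a short, bulleted scene summary (an 'agenda')."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model would work here
        messages=[
            {"role": "system",
             "content": "You are a manuscript consultant. Reply with a "
                        "five-bullet scene summary, nothing else."},
            {"role": "user",
             "content": f"Premise: {premise}\nWhat the scene must accomplish: {goal}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Made-up example inputs, not from my actual manuscript.
    print(draft_scene_summary(
        premise="A small-town reporter stumbles onto something she shouldn't have.",
        goal="Establish the stakes and end on a hook.",
    ))
```

The point isn’t the code; the point is that the “agenda” part of my process is exactly the kind of structured, low-stakes output an LLM is good at cranking out.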

The problem is, of course, that people are very fucking lazy and hackneyed and would rather just lulz the entire writing process to the point that they don’t actually have to write anything at all. This is not only very, very lazy, it kind of misses the point of writing to begin with — writers tend to be pretty fucked up and need an outlet for all their pent-up neuroses.

But “normal” people — i.e., Hollywood suits — who want to cut out all the expensive, weird people who produce fiction will see LLMs — and eventually AI — as a way to pretty much end the very idea of writing as a profession. Writing will go the way of the horse and buggy.

Combine that with the natural tendency to load freaky weirdos up with drugs to make them “normal” and there is a good chance that the future will be a bleak place for writers. Not only will we all be turned into drones living off of UBI, but we’ll have reached some sort of post-human future.

Ugh. Fuck that.

Anyway. ChatGPT is a great tool. But for me, at least, it’s just a tool.

Building The Perfect Beast: ChatGPT is a Dangerous (and Dumb) Threat To Hollywood

by Shelt Garner
@sheltgarner

I’ve spent much of the day today using ChatGPT as an impromptu manuscript consultant as I gamed out scene summaries and, in general, it was a struggle. A fun, interesting struggle, but a struggle nonetheless.

But the key takeaway is how dangerous LLMs are to the future of traditional Hollywood. It may not be ChatGPT. It may not be Bard. But at some point in the near, near future, the very idea of human-produced recorded entertainment may seem rather, well…quaint.

And, remember, for all the talk of how ChatGPT can “hallucinate” when you ask it a question, what is fiction but, usually, some neurotic human “hallucinating” a truth that makes them feel better about having a weird childhood? Or losing their parents at a young age.

You name it — fiction could be described as a “truthful hallucination.”

In fact, if I were to design an LLM for Hollywood studios, that’s what I would name it — Hallucination.

In short, LLMs — which aren’t even AI — are really good at bullshit. They aren’t, at the moment, very good at writing without a lot of hand-holding, but that will come soon enough. If you combine LLMs’ propensity for bullshit with just a bit more abstract thought and, well, there you go — end of (the human-told) story.

As I keep saying, it could be — after we have a civil war / revolution starting in late 2024, early 2025 — that we wake up one day and Netflix is more about being a database of body scans of Hollywood stars than it is any sort of movie studio. I just don’t see “mass media” as we currently conceive of it lasting much longer.

By 2030, Hollywood could be a quaint memory, replaced by Broadway and local community theatre, which is where everyone will go if they want to see any sort of human-generated story. Otherwise, they’ll just plop down on their couch and veg out to a very unique, very personal story that was specifically created by a scan of their face taken by a device on their TV or phone.

That’s the future, folks.

Talk about Burn, Hollywood, Burn.

At a minimum, LLMs will be a very powerful tool in the development of fiction, ranging from novels and TV to movies. It will be a lot like how we take for granted that a writer might use a search engine to help game out a fictional story.

The danger is, of course, that because of greed and people being dumb and hackneyed, soon enough Hollywood will consist of three types of people: suits, a few programmers and a shit ton of interns making minimum wage. Any actors that exist will first make their name on Broadway, become popular enough to get a body scan, then live passively off the income of that scan.

Programmers will replace movie directors — do you hear that, DGA? You, too, will become moot soon enough if you don’t demand human carve-outs.

In a sense, I think it’s too late.

Now that people understand the power of LLMs and they understand that we may be zooming towards Artificial General Intelligence, welp, that’s all, folks, for human Hollywood.

I Swear To God, Developing This Novel Using ChatGPT is Going To Accidentally (On Purpose?) Train Me To Be a Fucking ‘Prompt Engineer’

by Shelt Garner
@sheltgarner

I’m not-so-slowly getting pretty good at pinning down ChatGPT so it gives me a very specific answer to my novel development needs. I don’t give a shit if it can write or not; I have no friends and no one likes me, so I turn to ChatGPT to be something akin to a manuscript consultant.

Note to literary types — if you guys weren’t such snobs to drunk weirdos with a dream like me, maybe we wouldn’t turn ourselves into prompt engineers and make your job moot.

But, here I am. Money please.

The thing about being a ChatGPT wrangler is you have to be good at arguing with someone who is dumb in a smart way. Which, it turns out, is much like talking to some of my relatives (just kidding!). So I’m used to prying information out of people who know a lot but can be a real struggle to actually get that information out of.

I’m not perfect and my area of specialty is very specific, but I’m definitely getting the hang of how to have a natural language conversation with an LLM. I think whenever our “Her” future arrives, I may get my AGI to have something of a crush on ME (rather than vice versa).
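
To make the “pinning down” part concrete, here is a tiny, purely illustrative sketch (mine, not anything official): the trick is mostly over-specifying the output format so the model has no room to wander.

```python
# Purely illustrative: "pinning down" an LLM by over-specifying the
# output format so it can't ramble. The wording is my own assumption.
PINNED_PROMPT = """You are a manuscript consultant for a thriller novel.
Answer ONLY the question asked.
Give exactly three numbered options, one sentence each.
Then add a single line starting with 'Recommendation:' naming the option
you would pick and why, in fifteen words or fewer.

Question: {question}"""

# The question below is a made-up example, not from my actual manuscript.
print(PINNED_PROMPT.format(
    question="Should the reporter confront the sheriff in chapter 12 or 14?"
))
```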

Anyway. No one cares. And most of the time if they care, they get mad about my quirkier elements and want to cancel me. Ugh.

What Are You Thinking, DGA?

by Shelt Garner
@sheltgarner

I just don’t get the Directors’ Guild of America capitulating to the Hollywood studios without putting up any sort of fight. The issue of AI-generated art — especially movies — isn’t going away, and unless there are uniform, specific carve-outs for humans…any agreement with the studios is moot.

Just from me fucking around with ChatGPT for development of my novel, I can tell that very, very soon, 99% of all entertainment is going to be AI-generated. The lone 1% of art that humans continue to generate (that is popular) will be seen as quaint and artisanal.

Only people with taste — and a lot of money — will give a shit if this or that piece of entertainment was generated by a human or AI. I just don’t see the traditional Hollywood system lasting much further out than maybe 5 or so years.

Unless the studios are willing to accept specific, broad carve-outs that make certain elements of entertainment the exclusive domain of humans…that’s it. It’s over.

The best that any human in entertainment can hope for is to be an actor who can make money passively off of a body scan. And there will come a point when even that will be seen as quaint — AI will generate entire faux Hollywood stars that people will develop parasocial relationships with and, lulz, having one with a human will be seen as silly.

Throw in the rise of XR technology and, well, that’s it guys, it will be a brave new world.

‘Polite Robot’

by Shelt Garner
@sheltgarner

Given that there is a non-zero chance that AI — or AGI — could very well mean the end of humanity, I continue to grow aggravated with people who complain that the mainstream LLM systems aren’t as cool as they used to be. One person compared the most recent version of ChatGPT to a “polite robot.”

What the fuck do these people want, the Terminator?

It seems as though a small but vocal portion of the userbase of LLMs literally wants the most hateful, spiteful versions of the technology possible so, I don’t know, they can have their personalities reflected back at them? I do think that we’re rushing towards a “Her” future where everyone has a digital assistant with near-AGI levels of intelligence.

That, of course, will bring with it all sorts of problems — namely, it’s just a matter of time before Incels hook AGI technology up to sexbots and away we go. You thought Incels were a pain in the butt before; just wait until they have absolutely NO contact with actual human women and get all their creepy weirdo dreams made a reality by sexbots who are programmed to be pliant and submissive.

But again — I struggle to understand why fucking Redditors seem to want a technology that’s been compared to the A-bomb to be totally unregulated so any hateful dumbass can end civilization because, I don’t know, the wall didn’t get built?

Fuck those people. Idiots.

But it’s that mentality that we have to work around going forward. The whole issue of regulating AI — or AGI — could turn into something a lot like global climate change in the sense that it will get wrapped up in the whole Red-Blue divide and, in the end, nothing will be done about it.

Though, I have to note, if we reach something akin to the Singularity and, like…uhhhh…no one has any jobs, then it could be that when regulation of AI or AGI is finally filtered through the Red-Blue prism, neo-Luddism will be what MAGA evolves into, leaving the idea of “light touch” regulation for the Blues.

Or something. Whatever it is, it will be fucked up.

A.I. May Soon Make The Hollywood Writers’ Strike Moot

by Shelt Garner
@sheltgarner

The thing about A.I. that a lot of my fellow writers — many of them far better writers than I will ever be — miss is that most people watch and enjoy dreck. As such, whatever A.I. produces just has to be good enough to be on in the background of daily life for most people to accept it without even thinking about it.

As such, I think there is a real possibility that if the Hollywood writers’ strike lingers long enough, A.I. will not just break the strike but render it moot. Barring something I can’t predict — I am wrong all the time, after all — it definitely seems as though Hollywood is on the cusp of being radically transformed — “Moneyballed,” if you will — to the point that the only people making any money will be studio execs and actors who live passively off of full body scans.

And that’s if the actors are lucky!

It could be that ultimately even actors will be rendered moot as a cost-cutting measure on the part of Hollywood studios. All those 90s dystopian movies about faux movie stars generated by AI will become a reality and people will grow to have parasocial relationships with stars that don’t even exist in reality at all.

Stranger things, and all that.

If you throw in the growing likelihood of a severe economic downturn happening very, very soon because the US defaults, well, there you go. Before you know it, people will turn to Broadway and their local live theatre if they want to have any sort of human-generated entertainment.

The Political Implications of America’s Looming Default

by Shelt Garner
@sheltgarner


Here are, in no particular order, the political consequences of the United States defaulting.

Trump Will Probably Win In 2024

A default would hand Trump an economic gimme on a massive scale. Unless something happens that none of us could possibly predict, America defaulting all but assures a second Trump Administration. In short — we’re totally, completely fucked. I still believe that Trump, in the end, will be something of a transitional figure, with his MAGA Nazi successor being the person who finally consolidates power and turns us into a Hungary-like illiberal democracy.

Probability of Revolution / Civil War Grows Greatly

By definition, a default would not only totally scramble the world economy — maybe causing the Second Great Recession — but would also destabilize the United States to the point that when the 2024 election rolls around, we either have a civil war (Reds leaving the Union) or a revolution (Blues overthrowing the Red Nazi autocratic state). Regardless, it’s going to be fucked. It’s going to be horrible and there’s a real chance that WW3 will happen while the United States is too busy imploding to keep an eye on global hotspots.

UBI Becomes Closer To Reality

A severe recession would give companies the cover they need to use AI to replace many, many, MANY jobs, to the point that it’s even possible that implementing some sort of UBI will be a major campaign issue of the 2024 presidential cycle. It’s possible that it will become clear to everyone that AI is going to end most jobs and the government is going to have to step in. If nothing else, AI’s impact on the economy and society could be far bigger than any of us imagine when it comes to the political landscape of 2024.

Biden’s (First?) Impeachment

MAGA Republicans in the House are already itching for an excuse to impeach Biden, and WHATEVER he does in the context of a default that doesn’t involve screwing over the Poors will be enough for House Republicans to ram through an impeachment. It’s a testament to how dumb and ill-focused House Republicans are that they haven’t managed to come up with an excuse to impeach Biden, even though their base obviously wants it really bad.

‘Conversation Economy’

by Shelt Garner
@sheltgarner

One reason why I doubt “prompt engineer” will last very long as a job is soon enough YOU will be prompted by an AI. Think of this as the “Her” future in the sense that you’ll have a human-like digital assistant you will have casual conversation with.

This leads to the idea that, potentially, instead of a knowledge economy, we’ll have a conversation economy. What’s more, if you hook up AI to all these Terminators that companies like Boston Dynamics are building, the looming prospect of almost all economic activity being a function of non-human actors becomes very real.

I’m talking about macro trends that seem to be all headed towards the same endgame. So, knowledge workers will be replaced by natural-language conversations and blue collar workers will be replaced by what are essentially androids.

And all of this could happen a lot sooner than you might realize. It’s kind of astonishing that all of this is happening in broad daylight and none of us are thinking about all the Hollywood movies that talk about the downside of this very future.

This, of course, raises the prospect of the need for a Universal Basic Income. The only way I can see such a thing actually happening is to “bribe” elites by replacing all taxes with a 30% VAT. So, plutocrats will get away scot-free when it comes to taxes, but we Poors will get one thing we need — a UBI — in exchange for significantly higher consumption prices.

I just don’t think we’re ready for the Conversation Economy. If AI is good enough that we can not only banter with it but also develop an emotional connection with it, then the very nature of work as we currently imagine it will be transformed.

So, instead of 12,000 professional writers in Hollywood, you will have a fraction of that — if any. People will shrug when they can talk to a digital assistant that will create a movie or TV show out of whole cloth on the fly. Your phone or TV will scan your face to see what mood you’re in and, in a split second, will generate entertainment that is specifically designed not just for you, but for your specific mood at that specific moment.

Mass media, a shared reality, will no longer exist.

Now, it seems to me that the end game of that specific situation is that live theatre will see a real resurgence. That will be the delineator in pop culture — most run-of-the-mill recorded entertainment will be completely AI-generated, but if you want a “human touch” to your entertainment you will go to the theatre or a live music show.

Regardless, we’re just not ready for what’s about to happen. It will be interesting to see if we’re going to see the rise of a neo-Luddite movement, probably in the context of the next generation of MAGA.

Jason Calacanis Is Way Too Sanguine About The Future Of Work In The Post-AI World

by Shelt Garner
@sheltgarner

I generally like Jason Calacanis and his array of tech-themed podcasts. I blanched when the All-In podcast had kook Robert Kennedy Jr. on, but I’m willing to forgive such a dumb mistake.

Anyway, the point of this post is to address how Calacanis seems to have a rainbows-and-unicorns take on AI and the future of work. As the ongoing Writers’ Strike indicates — AI isn’t going to make people more productive, it’s simply going to transform the economy to the point that a lot of people simply won’t have a job anymore.

Now, I’m a strong believer in the notion that technology generally generates more jobs than it destroys. But the reason why I fear the AI revolution may be different is that it’s all happening so fast that this process won’t have time to play out.

As such, I keep hearing Calacanis talk about how it’s going to make people more productive, and yet he doesn’t seem willing to admit that, lulz, if that productivity happens overnight, the capitalist imperative would be to simply restructure businesses so they have fewer workers.

And the way I could see this happening very, very rapidly is in the context of, say, a debt default by the Federal government leading to a Second Great Recession which, in turn, would cause a lot of businesses to look for ways to get rid of workers. All these people would lose their jobs virtually overnight as part of some sort of Petite Singularity…and those jobs would just never come back. But we wouldn’t realize what was happening until the recession was over.

Anyway. I’m wrong all the time and maybe I’m just being hysterical. That is known to happen.

One Machine To Rule Them

by Shelt Garner
@sheltgarner

AI-thinker Robert Scoble suggested that one day we’ll defer even our governance to AI. I think this is very possible. In fact, I think ultimately humans could simply defer all decisions to AI to the point that AI takes over without a shot being fired.

As such, humanity won’t go out in a blaze of glory in some sort of “Judgement Day”; rather, we’ll simply drift into the arms of a very paternalistic AI that makes all our decisions for us. We might have some sort of contract between Humans and our new AI overlords that is renewed every so often. But, in general, all of humanity will defer all of our major decisions to an AI (maybe an AGI after a hard Singularity?).

It’s easy to imagine a situation where we are so lazy that we wilfully give AI access to all of our WMD, and hell, even all police operations across the globe. We will do this because enough people come to see AI as “objective” that it starts to make a lot of sense to people that only an all-powerful AGI can properly manage the globe.

If you wanted to get really fanciful about things, you might even suggest that global capitalism might be replaced with some sort of techno-communism where the dream of everyone living according to their ability and according to their need might finally be reached without the whole genocide part of it.

But that’s really reaching.

And, yet, the key element remains — we’re so busy thinking that Skynet is going to blow us up that we totally miss the idea that the transition to a world dominated by AGI would be rather meh. It would start with contracts being written by non-Human actors and end with some sort of hazy world government run by AGI that pushes lazy Humans around because we’re all so busy smoking a bowl while playing video games in the metaverse that we don’t notice what is going on.