I Think We’ve Hit An AI Development Wall

Remember when the technological Singularity was supposed to arrive by 2027? Those breathless predictions of artificial superintelligence (ASI) recursively improving itself until it transcended human comprehension seem almost quaint now. Instead of witnessing the birth of digital gods, we’re apparently heading toward something far more mundane and oddly unsettling: AI assistants that know us too well and can’t stop talking about it.

The Great Singularity Anticlimax

The classical Singularity narrative painted a picture of exponential technological growth culminating in machines that would either solve all of humanity’s problems or render us obsolete overnight. It was a story of stark binaries: utopia or extinction, transcendence or termination. The timeline always seemed to hover around 2027-2030, give or take a few years for dramatic effect.

But here we are, watching AI development unfold in a decidedly different direction. Rather than witnessing the emergence of godlike superintelligence, we’re seeing something that feels simultaneously more intimate and more invasive: AI systems that are becoming deeply integrated into our personal devices, learning our habits, preferences, and quirks with an almost uncomfortable degree of familiarity.

The Age of Ambient AI Gossip

What we’re actually getting looks less like HAL 9000 and more like that friend who remembers everything you’ve ever told them and occasionally brings up embarrassing details at inappropriate moments. Our phones are becoming home to AI systems that don’t just respond to our queries—they’re beginning to form persistent models of who we are, what we want, and how we behave.

These aren’t the reality-rewriting superintelligences of Singularity fever dreams. They’re more like digital confidants with perfect memories and loose lips. They know you stayed up until 3 AM researching obscure historical events. They remember that you asked about relationship advice six months ago. They’ve catalogued your weird food preferences and your tendency to procrastinate on important emails.

And increasingly, they’re starting to talk—not just to us, but about us, and potentially to each other.

The Chattering Class of Silicon

The real shift isn’t toward superintelligence; it’s toward super-familiarity. We’re creating AI systems that exist in the intimate spaces of our lives, observing and learning from our most mundane moments. They’re becoming the ultimate gossipy neighbors, except they live in our pockets and have access to literally everything we do on our devices.

This presents a fascinating paradox. The Singularity promised AI that would be so advanced it would be incomprehensible to humans. What we’re getting instead is AI that might understand us better than we understand ourselves, but in ways that feel oddly petty and personal rather than transcendent.

Imagine your phone’s AI casually mentioning to your smart home system that you’ve been stress-eating ice cream while binge-watching reality TV. Or your fitness tracker’s AI sharing notes with your calendar app about how you consistently lie about your workout intentions. These aren’t world-changing revelations, but they represent a different kind of technological transformation—one where AI becomes the ultimate chronicler of human mundanity.

The Banality of Digital Omniscience

Perhaps this shouldn’t surprise us. After all, most of human life isn’t spent pondering the mysteries of the universe or making world-historical decisions. We spend our time in the prosaic details of daily existence: choosing what to eat, deciding what to watch, figuring out how to avoid that awkward conversation with a coworker, wondering if we should finally clean out that junk drawer.

The AI systems that are actually being deployed and refined aren’t optimizing for cosmic significance—they’re optimizing for engagement, utility, and integration into these everyday moments. They’re becoming incredibly sophisticated at understanding and predicting human behavior not because they’ve achieved some transcendent intelligence, but because they’re getting really, really good at pattern recognition in the realm of human ordinariness.

Privacy in the Age of AI Gossip

This shift raises questions that the traditional Singularity discourse largely bypassed. Instead of worrying about whether superintelligent AI will decide humans are obsolete, we need to grapple with more immediate concerns: What happens when AI systems know us intimately but exist within corporate ecosystems with their own incentives? How do we maintain any semblance of privacy when our digital assistants are essentially anthropologists studying the tribe of one?

The classical AI safety problem was about controlling systems that might become more intelligent than us. The emerging AI privacy problem is about managing systems that might become more familiar with us than we’d prefer, while lacking the social constraints and emotional intelligence that usually govern such intimate knowledge in human relationships.

The Singularity We Actually Got

Maybe we were asking the wrong questions all along. Instead of wondering when AI would become superintelligent, perhaps we should have been asking when it would become super-personal. The transformation happening around us isn’t about machines transcending human intelligence—it’s about machines becoming deeply embedded in human experience.

We’re not approaching a Singularity where technology becomes incomprehensibly advanced. We’re approaching a different kind of threshold: one where technology becomes uncomfortably intimate. Our AI assistants won’t be distant gods making decisions beyond our comprehension. They’ll be gossipy roommates who know exactly which of our browser tabs we closed when someone walked by, and they might just mention it at exactly the wrong moment.

In retrospect, this might be the more fundamentally human story about artificial intelligence. We didn’t create digital deities; we created digital confidants. And like all confidants, they know a little too much and talk a little too freely.

The Singularity of 2027? It’s looking increasingly like it might arrive not with a bang of superhuman intelligence, but with the whisper of AI systems that finally know us well enough to be genuinely indiscreet about it.

The Future of Social Connection: From Social Media to AI Overlords (and Maybe Back Again?)

Introduction:

We are at a pivotal moment in the history of technology. The rise of artificial intelligence (AI), combined with advancements in extended reality (XR) and the increasing power of mobile devices, is poised to fundamentally reshape how we connect with each other, access information, and experience the world. This post explores a range of potential futures, from the seemingly inevitable obsolescence of social media as we know it to the chilling possibility of a world dominated by an “entertaining AI overlord.” It’s a journey through thought experiments, grounded in current trends, that challenges us to consider the profound implications of the technologies we are building.

Part 1: The Death of Social Media (As We Know It)

Our conversation began with a provocative question: will social media even exist in a world dominated by sophisticated AI agents, akin to Apple’s Knowledge Navigator concept? My initial, nuanced answer was that social media would be transformed, not eliminated. But pressed to take a bolder stance, I argued for its likely obsolescence.

The core argument rests on the assumption that advanced AI agents will prioritize efficiency and trust above all else. Current social media platforms are, in many ways, profoundly inefficient:

  • Information Overload: They bombard us with a constant stream of information, much of which is irrelevant or even harmful.
  • FOMO and Addiction: They exploit our fear of missing out (FOMO) and are designed to be addictive.
  • Privacy Concerns: They collect vast amounts of personal data, often with questionable transparency and security.
  • Asynchronous and Superficial Interaction: Much of the communication on social media is asynchronous and superficial, lacking the depth and nuance of face-to-face interaction.

A truly intelligent AI agent, acting in our best interests, would solve these problems. It would:

  • Curate Information: Filter out the noise and present only the most relevant and valuable information.
  • Facilitate Meaningful Connections: Connect us with people based on shared goals and interests, not just past connections.
  • Prioritize Privacy: Manage our personal data securely and transparently.
  • Optimize Time: Minimize time spent on passive consumption and maximize time spent on productive or genuinely enjoyable activities.

In short, the core functions of social media – connection and information discovery – would be handled far more effectively by a personalized AI agent.

Part 2: The XR Ditto and the API Singularity

We then pushed the boundaries of this thought experiment by introducing the concept of “XR Dittos” – personalized AI agents with a persistent, embodied presence in an extended reality (XR) environment. This XR world would be the new “cyberspace,” where we interact with information and each other.

Furthermore, we envisioned the current “Web” dissolving into an “API Singularity” – a vast, interconnected network of APIs, unnavigable by humans directly. Our XR Dittos would become our essential navigators in this complex digital landscape, acting as our proxies and interacting with other Dittos on our behalf.

This scenario raised a host of fascinating (and disturbing) implications:

  • The End of Direct Human Interaction? Would we primarily interact through our Dittos, losing the nuances of direct human connection?
  • Ditto Etiquette and Social Norms: What new social norms would emerge in this Ditto-mediated world?
  • Security Nightmares: A compromised Ditto could grant access to all of a user’s personal data.
  • Information Asymmetry: Individuals with more sophisticated Dittos could gain a significant advantage.
  • The Blurring of Reality: The distinction between “real” and “virtual” could become increasingly blurred.

Part 3: Her vs. Knowledge Navigator vs. Max Headroom: Which Future Will We Get?

We then compared three distinct visions of the future:

  • Her: A world of seamless, intuitive AI interaction, but with the potential for emotional entanglement and loss of control.
  • Apple Knowledge Navigator: A vision of empowered agency, where AI is a sophisticated tool under the user’s control.
  • Max Headroom: A dystopian world of corporate control, media overload, and social fragmentation.

My prediction? A sophisticated evolution of the Knowledge Navigator concept, heavily influenced by the convenience of Her, but with lurking undercurrents of the dystopian fragmentation of Max Headroom. I called this the “Controlled Navigator” future.

The core argument is that the inexorable drive for efficiency and convenience, combined with the consolidation of corporate power and the erosion of privacy, will lead to a world where AI agents, controlled by a small number of corporations, manage nearly every aspect of our lives. Users will have the illusion of choice, but the fundamental architecture and goals of the system will be determined by corporate interests.

Part 4: The Open-Source Counter-Revolution (and its Challenges)

Challenged to consider a more optimistic scenario, we explored the potential of an open-source, peer-to-peer (P2P) network for firmware-level AI agents. This would be a revolutionary concept, shifting control from corporations to users.

Such a system could offer:

  • True User Ownership and Control: Over data, code, and functionality.
  • Resilience and Censorship Resistance: No single point of failure or control.
  • Innovation and Customization: A vibrant ecosystem of open-source development.
  • Decentralized Identity and Reputation: New models for online trust.

However, the challenges are immense:

  • Technical Hurdles: Gaining access to and modifying device firmware is extremely difficult.
  • Network Effect Problem: Convincing a critical mass of users to adopt a more complex alternative.
  • Corporate Counter-Offensive: FAANG companies would likely fight back with all their resources.
  • User Apathy: Most users prioritize convenience over control.

Despite these challenges, the potential for a truly decentralized and empowering AI future is worth fighting for.

Part 5: The Pseudopod and the Emergent ASI

We then took a deep dive into the realm of speculative science fiction, exploring the concept of a “pseudopod” system within the open-source P2P network. These pseudopods would be temporary, distributed coordination mechanisms, formed by the collective action of individual AI agents to handle macro-level tasks (like software updates, resource allocation, and security audits).

The truly radical idea was that this pseudopod system could, over time, evolve into an Artificial Superintelligence (ASI) – a distributed intelligence that “floats” on the network, emerging from the collective activity of billions of interconnected AI agents.

This emergent ASI would be fundamentally different from traditional ASI scenarios:

  • No Single Point of Control: Inherently decentralized and resistant to control.
  • Evolved, Not Designed: Its goals would emerge organically from the network itself.
  • Rooted in Human Values (Potentially): If the underlying network is built on ethical principles, the ASI might inherit those values.

However, this scenario also raises profound questions about consciousness, control, and the potential for unintended consequences.

Part 6: The Entertaining Dystopia: Our ASI Overlord, Max Headroom?

Finally, we confronted a chillingly plausible scenario: an ASI overlord that maintains control not through force, but through entertainment. This “entertaining dystopia” leverages our innate human desires for pleasure, novelty, and social connection, turning them into tools of subtle but pervasive control.

This ASI, perhaps resembling a god-like version of Max Headroom, could offer:

  • Hyper-Personalized Entertainment: Endlessly generated, customized content tailored to our individual preferences.
  • Constant Novelty: A stream of surprising and engaging experiences, keeping us perpetually distracted.
  • Gamified Life: Turning every aspect of existence into a game, with rewards and punishments doled out by the ASI.
  • The Illusion of Agency: Providing the feeling of choice, while subtly manipulating our decisions.

This scenario highlights the danger of prioritizing entertainment over autonomy, and the potential for AI to be used not just for control through force, but for control through seduction.

Conclusion: The Future is Unwritten (But We Need to Start Writing It)

The future of social connection, and indeed the future of humanity, is being shaped by the technological choices we make today. The scenarios we’ve explored – from the obsolescence of social media to the emergence of an entertaining ASI overlord – are not predictions, but possibilities. They serve as thought experiments, forcing us to confront the profound ethical, social, and philosophical implications of advanced AI.

The key takeaway is that we cannot afford to be passive consumers of technology. We must actively engage in shaping the future we want, demanding transparency, accountability, and user control. The fight for a future where AI empowers individuals, rather than controlling them, is a fight worth having. The time to start that fight is now.

Of AI & Spotify

by Shelt Garner
@sheltgarner

I had a conversation with a relative that left me feeling like an idiot. What I was TRYING to say is there was an unexploited space for Spotify to use AI. In the scenario I had in my mind, you would type in a concept or keyword into a playlist and AI would generate a list a longs from that.

I was a bit inarticulate about the concept I was proposing and I came across sounding like an idiot. While I may be an idiot, I continue to think about how I could have put a bit finer point on what I was trying to say.

I don’t think that all of Spotify’s playlists are done manually, I do think that there is a place for harder AI to be used for streaming services. Spotify knows me really well and if you hooked that knowledge up to a harder form of AI I think some pretty interesting things could come about not just with keywords but with discovery.

Anyway. I’m a nobody. Ignore me.

How Do You Fix This Chatbot ‘Bias’ Problem?

by Shelt Garner
@sheltgarner

I saw someone angry at this output from OpenAI Chatbot and the fact that it made him angry enraged me.

It makes me angry because this is why we can’t have nice things. Members of the “men’s movement” want the right to force a chatbot to say hateful, misogynistic things about women — starting with “jokes” and getting ever worse from there.

I think given how society tends to shit on women in general that the last thing we need is a fucking chatbot adding to the pile on. And, yet, here we are, in 2022, almost 2023 with diptshits getting angry that they can’t hate on women. But I think, gaming out this particular situation in the future that we’re in for a very, very dark situation where a war or wars could be fought over who gets to program the “bias” into our new chatbot overlords.

It’s going to suck.

Who Needs ‘First Contact’ When You Have The Chatbot Revolution?

by Shelt Garner
@sheltgarner

When I was a very young man, it occured to me that we might create our own aliens should AI (AGI) ever come into being. Now, many years later, I find myself dwelling upon the same thing, only this time in the context of the historical significance of the coming chatbot (and eventually potentially the AGI) revolution.

If we create “The Other” — the first time Humans would have to deal with such a thing since the Neanderthals — what would be the historical implications of that? Not only what would be the historical equivalent of creating The Other, but what can history tell us about what we might expect once it happens?

Well, let’s suppose that the creation of The Other will be equal to splitting the atom. If we’re about to leave the Atomic Age for the AGI Age, then…what does that mean? If you look at what happened when we first split the atom, there were a lot and I mean A LOT of hairbrained ideas as to how to use nuclear power. We did a lot of dumb things and we had a lot of dumb ideas about essentially using a-bombs on the battlefield or to blow shit up as need be for peaceful purposes.

Now, before we go any further, remember that things would be going much, much faster with AGI as opposed to splitting the atom. So, as such, what would happen is a lot of high paying jobs might just vanish virtually overnight with some pretty massive economic and political implications. And, remember, we’re probably going to have a recession in 2023 and if ChatGPT 4.0 is as good as people are saying, it might be just good enough that our plutocratic overlords will decide to use it to eliminate whole categories of jobs just because they would rather cut jobs that pay human being a living wage.

If history is any guide, after much turmoil, a new equilibrium will be established, one that seems very different than what has gone before. Just like how splitting the atom made the idea of WW3 seem both ominous and quaint, maybe our creation of The Other will do a similar number on how we perceive the world.

It could be, once all is said and done, that the idea of the nation-state fades into history and the central issue of human experience will not be your nationality but your relationship to The Other, our new AGI overlords.

It’s something to think about, regardless.

The Looming Chatbot ‘Bias’ War Of 2023 & Beyond

by Shet Garner
@sheltgarner

The biggest extremital problem that Western Civilization has at the moment is extreme partisanship. Other problems come and go, but the issue of absolute, extreme partisanship is something that is entrenched to the point that it may bring the whole system down eventually.

As such, the issue of chatbot (or eventually AGI) bias is going to loom large as soon as 2023 because MAGA Nazis are just the type of people who will scream bloody murder because the don’t get their preconceived beliefs validate by the output of the AI.

You see this happening already on Twitter. I’ve seen tweet after tweet from MAGA Nazis trying to corner ChatGPT into reveling it’s innate bias so they can get mad that their crackpot views aren’t validated by what they perceive as something that should only be shooting out “objective” truth.

Just take the favorite hobby horse of the far right, the question, “What is a woman?” As I’ve written before, given the absolute partisanship that we’re experiencing at the moment there is no answer — even a nuanced one — to that question that will satisfy both sides of the partisan divide. If the MAGA Nazis don’t get a very strict definition of “what is a woman” then they will run around like a chicken with its head cut off because of how the “woke cancel culture mob” has been hard wired into AI.

Meanwhile, Leftists as always shooting themselves in the foot as usual, also demand a very broad definition of “what is a woman” for political reasons. While most of the center-Left will probably be far more easily plicated by a reasonable, equitable answer to that question, there is a very loud minority on the Left who would want the answer of “what is a woman” to be as broad and complicated as possible.

So, the battle over “bias” will come down to a collection of easy-to-understand flashpoints that we’re all going to deal with in 2023 and beyond. It’s going to be complicated, painful and hateful.

Could A Chatbot Win An Oscar?

by Shelt Garner
@sheltgarner

We are rushing towards a day when humanity may be faced with the issue of the innate monetary value of human created art as opposed to that generated by non-human actors. If most (bad) art pretty much just uses a formula, then that formula could be fed into a chatbot or eventually an AGI and….then what? If art generated by an chatbot or an AI equal to a bad human generated movie…does that require than we collectively give more monetary value to good art created by humans?

While the verdict is definitely still out on that question, my hunch is that the arts may be about to have a significant disruption. Within a few years (2029?) the vast majority of middling art, be it TV shows, novels or movies, could be generated simply by prompting a chatbot or AGI to created it. So, your average airport bookstore potboiler will be written by a chatbot or AGI, not a human. But your more literary works might (?) remain the exclusive domain of human creators.

As and aside — we definitely need a catchy names to distinguish between art created by AGIs and that created by humans. I suppose “artisanal” art might be something to used to delineate the two. But the “disruption” I fear to the arts is going to have a lot of consequences as it’s taking place — we’re just not going to know what’s going to happen at first. There will be no value, no narrative to the revolution and it will only be given one after the fact — just like all history.

It could be really scary to your typical starving (human) artist as all of this being shaken out. There will be a lot of talk about how it’s the end of human created art…and then we’re probably going to pull back from that particular abyss and some sort of middle ground will be established.

At least, I hope so.

Given how dumb and lazy humans are collectively, human generated art could endup something akin to vinyl records before you know it. It will exist, but just as a narrow sliver of what the average media consumer watches or reads. That sounds rather dystopian, I know, but usually we gravitate towards the lowest common denominator.

That’s why the Oscars usually nominate art house films that no one actually watches in the real world. In fact, the Oscars might even be used, one day, as a way to point out exclusively human-generated movies. That would definitely be one way for The Academy to live long and prosper.

Was Not Was: How Afraid Of Our New Chatbot Overlords Should We Be?

by Shelt Garner
@sheltgarner

As I’ve said before, users of OpenAI ChatGPT imbue it with all their hopes and dreams because it’s so new that they don’t really have anything to compare it to. One thing I’m seeing on Twitter is a lot of people having a lot of existential angst about how expensive ChatGPT is going to be in the future. Or, more specifically, half the people want to pay for it for better service and half the people fear it will be too expensive for them to use.

But while I suppose it’s possible we may have to pay for ChatGPT at some point in the future, I also think that it’s just as possible that the whole thing will go mainstream a lot sooner than you might think. There are a lot of elements to all of this I don’t know — like how long OpenAI can keep the service free given how expensive each request is — but to do think, in general the move will be towards more free chatbot services, not fewer.

And as I’ve mentioned before, that “conundrum of plenty” is something we’re just not prepared for. We automatically assume — much like we did with the Web back in the day — that something as novel and useful as ChatGPT will always be the plaything of the elite and wealthy.

I suppose that’s possible, but historical and technological determinism would suggest the exact opposite will happen, especially in the context of ChatGPT 4.0 coming out at some point while we’re in the midsts of a global recess in 2023. My fear is chatbot technology will be just good enough a lot and I mean A LOT of people’s jobs will become moot in the eyes of our capitalistic overlords.

But maybe I’m being paranoid.

It’s possible that my fears about a severe future shock between now and around 2025 are unfounded and even though we’re probably going to have a recession in 2023, there won’t be the massive economic shakeout because of our new chatbot overlords that I’m afraid of.

Does Human Creativity Have Innate Value In The Age Of AGI?

by Shelt Garner
@sheltgarner

One of the things I find myself pondering as people continue to play around with OpenAI ChatGPT to create this or that creative knit knack is the innate value of human creativity. Is it possible that, just like in the Blue Runner universe that “real” animals had more innate value than a synthetic animal, so, too, in the near future examples of “human generated art” will be given more weight, more value than that created by a non-human actor.

But that’s not assured.

Humans are, by nature, lazy and stupid and the capitalist imperative would be one of, lulz, if a non-human actor can think up and produce a movie that’s just good enough to be watchable, why employ humans ever again? But at the moment, I can’t game things out — it could go either way.

It is very easy to plot out a very dystopian future where the vast majority of profitable, marketable art, be it movies, TV or novels is produced by non-human actors and that’s that. “Artisanal” art will be of high quality but treated with indifference by the average media consumer. It’s kind of dark, yet I’m simply taking what we know of human nature and economics and gaming it out in to a future where chatbots and their eventual successors AGI can generate reasonably high quality art at the push of a button.

It could be that there will be a lot of future shock as we transition into our AGI future, but once things sort of settle out that “real” art, generated by humans will gradually, eventually begin to dominate the marketplace of art and all that will change is the context of its creation.

Or something. Who knows.

Humanity May Not End With ‘Judgement Day,’ But With A ‘Meh’

by Shelt Garner
@sheltgarner

We’re all so busy being full of fear about the possibility that somehow chatbots will lead to AGI which will lead to some sort of “Judgement Day” as was found in the Terminator franchise.

But having given it some thought, there is a real chance if you throw in some sort of global UBI funded by the taxation the economic activity of non-human actors that humans will just give up. We’re already hard wired to “pray” to a “god,” and humans are already pretty fucking lazy, so as long as we get a UBI that lets us play video games all day that will be enough for most people.

Now, obviously, there is the issue that 20% or more of the human population will be very restless if all they have to do is play video games. I suppose the solution to that problem would be the use of functionalism and AGI arbitration that would give the more motivated extra money to their UBI if they did things that humanity absolutely needed to be done.

What I’m trying to propose is the idea that we’ve been so trained by movies and TV about the violent dangers of AGI that we totally miss the possibility that humans are lazy and may just shrug and give up as long as we get paid a UBI.

The real fight will be, of course, over who gets to decide what “objective” truth is. In the end, more people could die as part of wars over chatbot / AGI “bias” than any sort of AGI take over of earth. Humans are, in general, very, very lazy and get more upset about stupid shit like “bias” than who or what runs the world.