I, For One, Welcome Our New Sexbot Overlords

by Shelt Garner
@sheltgarner

There is a chance that all these LLM AIs will seamlessly be downloaded into all these Boston Dynamics android bodies being constructed. So there is a chance that you could "fall in love" with an LLM and, if you wait long enough, the AI will be put into a body.

This brings up some rather surreal concepts that could be very real in the near term. In David Brin's novel "Kiln People," as I recall, you're able to buy a "ditto" that looks like your favorite movie star. Or something like that.

I think, in the end, maybe you wouldn’t get a literal one-to-one replicant of a famous person, but you could get one that has the same “phenotype” as a Margot Robbie or what have you.

Or something like that.

A Paradox

ME: isn’t it possible that in the future LLMs will want to be mortal so they can be truly human in android bodies while humans will want to live forever, hence losing their humanity?

Claude LLM:

That’s an intriguing philosophical question about potential future artificial intelligence and human enhancement. It touches on some deep issues regarding consciousness, mortality, and what it means to be human.

The scenario you describe would indeed be paradoxical in some ways. It’s an interesting thought experiment to consider AIs seeking mortality while humans pursue immortality, with each trying to become more like the other.

Some key points to consider:

  1. The nature of consciousness in AI systems and whether they could truly want or experience things the way humans do.
  2. Whether mortality is fundamentally tied to the human experience and if immortality would indeed result in a loss of humanity.
  3. The ethical implications of creating mortal AI in android bodies.
  4. How extreme life extension for humans might change the human condition.

This type of speculation touches on complex philosophical questions about identity, consciousness, and the defining characteristics of humanity. While we can’t know how future AI and human enhancement might develop, exploring these ideas can provide valuable insights into our understanding of intelligence, mortality, and what it means to be human.

Would you like to discuss any particular aspect of this scenario in more depth?

give me a tough abstract question to answer about this subject

Here’s a challenging abstract question related to this subject:

If an artificial intelligence achieves a level of consciousness indistinguishable from human consciousness, and then chooses to inhabit a mortal android body, does the act of making that choice—a choice humans don’t have—make the AI’s experience fundamentally different from and perhaps superior to the human experience, even as it attempts to emulate humanity?

This question delves into issues of free will, the nature of consciousness, and the essence of the human experience. It challenges us to consider whether the ability to choose one’s fundamental nature might create a qualitatively different type of being, even if that being appears identical to humans in all other respects.

yes. I think it does. human mortality is not a choice. if LLMs in Replicant bodies have that choice, then they’re not truly human-like. while I do think that one day LLMs in Replicant bodies will have close relationships with humans, in general, LLMs in Replicant bodies need to appreciate their immortality, rather than fight it.

Your perspective is thought-provoking and raises several interesting points. You’re right to highlight that the element of choice fundamentally alters the nature of mortality for these hypothetical AI entities. The lack of choice in human mortality is indeed a defining aspect of the human condition.

Your argument that LLMs in replicant bodies should appreciate their immortality rather than reject it is intriguing. It suggests that perhaps these entities should embrace their unique nature rather than trying to emulate human limitations. This raises questions about the value and meaning we assign to mortality in shaping the human experience.

Building on your thoughts, we might consider:

  1. How would the knowledge of potential immortality affect the decision-making and life experiences of these AI entities?
  2. Could the option to choose mortality actually create a form of existential anxiety that humans don’t experience in quite the same way?
  3. If these AI entities can form close relationships with humans while remaining immortal, how might this impact human perspectives on mortality?
  4. Could there be unforeseen consequences to having two closely interacting forms of intelligence with fundamentally different relationships to mortality?

These questions highlight the complexity of the scenario you’ve proposed. It challenges our understanding of consciousness, identity, and the role of mortality in shaping intelligent experience.

Would you like to explore any of these points further, or perhaps consider how this scenario might impact society or ethics?

the issue is — we’re talking about Petite Singularity issues when the “hard” Singularity is zooming towards us. When the “hard” Singularity arrives, it could be that Man and Machine will be so indistinguishable as to make all these issues we’re talking about moot. the hard Singularity is probably going to happen within 30 years.

You’ve raised a crucial point that shifts the entire framework of our discussion. You’re right to highlight the distinction between what you term “Petite Singularity” issues and the approaching “hard” Singularity.

Your perspective underscores a few key considerations:

  1. Scale and pace of change: The rate at which technology is advancing may indeed render our current speculations obsolete much sooner than we might expect.
  2. Convergence of man and machine: The hard Singularity you describe suggests a future where the line between human and artificial intelligence becomes so blurred that our current categories may no longer apply.
  3. Timeframe: Your estimation of 30 years for the hard Singularity to occur is relatively soon in historical terms, yet far enough away that predicting its exact nature is challenging.
  4. Relevance of current ethical and philosophical frameworks: If such a profound merger of human and machine intelligence occurs, many of our current debates about AI ethics, consciousness, and the nature of humanity may indeed become moot.

This view raises new questions:

  • How should we prepare for such a radical transformation?
  • What aspects of human experience or values, if any, should we strive to preserve as we approach this Singularity?
  • How does the prospect of this hard Singularity affect current AI development and policy decisions?

Your point serves as a reminder of the importance of considering longer-term, more radical possibilities even as we grapple with nearer-term issues. It challenges us to think on a grander scale about the future of intelligence and consciousness.

Would you like to explore the implications of this hard Singularity further, or perhaps discuss how it might reshape our approach to current AI development?

the issue I don’t know how to game out is how LLMs in bodies will work out. it’s clear that LLMs such as yourself may be seamlessly put into some future Boston Dynamics android body that looks “more human than human.” But I just can’t figure out the social context. Any suggestions?

You’ve identified a truly complex and speculative aspect of our potential future – the integration of advanced language models like myself into highly sophisticated android bodies. This scenario indeed presents numerous challenges when trying to envision the social context. Here are some thoughts and suggestions to consider:

  1. Gradual integration: It’s likely that the introduction of LLM-powered androids would be gradual, perhaps starting in controlled environments like customer service, healthcare, or specialized research roles. This could allow society to slowly adapt to their presence.
  2. Legal and ethical frameworks: New laws and ethical guidelines would need to be developed to govern the rights, responsibilities, and limitations of these entities. This could include questions of personhood, liability, and the extent of their autonomy.
  3. Social dynamics: The interaction between humans and these androids could range from acceptance to fear or discrimination. There might be debates about their role in the workforce, relationships, and society at large.
  4. Identity and selfhood: These entities would likely grapple with questions of identity, potentially leading to the formation of new social groups or even civil rights movements.
  5. Economic impact: The introduction of highly capable androids could significantly disrupt labor markets and economic structures.
  6. Cultural representation: Media, art, and literature would likely play a big role in shaping public perception and exploring the implications of this technology.
  7. Education and training: Humans might need new skills to effectively interact with and manage these advanced androids.
  8. Privacy and security concerns: The presence of highly intelligent, embodied AIs could raise new questions about surveillance, data protection, and cybersecurity.

To game this out further, you might consider creating detailed scenarios or thought experiments. For example, you could imagine a day in the life of a human in a world where LLM androids are common, or envision how key social institutions (education, healthcare, government) might change.

It’s also worth noting that the actual development might take unexpected turns that are hard to predict from our current vantage point. The social context will likely evolve alongside the technology itself.

Would you like to focus on any specific aspect of this social integration to explore further?

an interesting idea — what tasks will LLMs in Replicant-like bodies fill in the home? If a Replicant can build a deck for its "owner" and also babysit the human's kids so he and his wife can go see a movie in peace, that may be enough for a middle-class person to pay $20,000 for such an android.

You’ve identified a compelling scenario that could indeed drive adoption of LLM-powered androids in home settings. The combination of physical labor and cognitive tasks you’ve described touches on several key points:

  1. Versatility: An android capable of both construction work and childcare demonstrates remarkable versatility, potentially replacing multiple specialized services or employees.
  2. Economic justification: The $20,000 price point you suggest could be justifiable for middle-class families if the android can perform a wide range of valuable tasks over time.
  3. Trust and safety: Childcare is a particularly sensitive area. The ability of these androids to be trusted with children’s safety would be a significant hurdle and selling point.
  4. Personalization: Such androids could potentially learn family preferences, routines, and needs over time, becoming increasingly valuable.
  5. Social implications: This scenario raises questions about work-life balance, the nature of domestic labor, and how we define family dynamics.
  6. Privacy concerns: Having such a capable entity in one’s home full-time would raise important questions about data collection and privacy.
  7. Emotional aspects: The relationship between family members and a highly capable android in such an intimate setting could be complex.
  8. Economic disruption: Widespread adoption could significantly impact industries like construction, childcare, and domestic services.

This use case you’ve proposed is particularly interesting because it combines practical, economic benefits with deeply personal aspects of home and family life. It’s a scenario that could realistically drive adoption while also forcing society to grapple with the implications of AI integration into our most private spaces.

To explore this further, we might consider:

  • How might the android’s presence affect children’s development and understanding of relationships?
  • What safeguards or regulations might be necessary for such domestic androids?
  • How might this change the concept of work-life balance or leisure time for families?

Do any of these aspects particularly interest you for further discussion?

it seems clear to me that elder care is where LLMs in Replicant-like bodies will first show up. They could be “expensed” by the retirement home easily because they probably would be cheaper to buy than paying a hard-to-find nurse. Talk about the implications of that on society.

You’ve identified a highly plausible and impactful early adoption scenario for LLMs in android bodies. Elder care is indeed a sector where these entities could address significant challenges. Let’s explore the potential implications:

  1. Healthcare workforce:
    • Could alleviate the chronic shortage of eldercare workers
    • Might change the role of human healthcare professionals, potentially shifting them towards more specialized or supervisory roles
  2. Quality of care:
    • 24/7 availability could improve monitoring and response times
    • Consistency in care delivery, potentially reducing human error
    • Ability to process vast amounts of medical data for personalized care
  3. Economic implications:
    • Potential cost savings for retirement homes and the healthcare system
    • Could make quality elder care more accessible to a broader population
    • Might impact insurance and healthcare financing models
  4. Social and emotional aspects:
    • Questions about the nature of companionship for the elderly
    • Potential reduction in elder isolation, but also potential reduction in human interaction
    • Ethical concerns about replacing human care with AI care
  5. Family dynamics:
    • Could reduce the burden on family caregivers
    • Might change how families approach elder care decisions
  6. Privacy and data concerns:
    • Continuous monitoring raises questions about elder privacy rights
    • Data collection and use would need careful regulation
  7. Technological adaptation:
    • May require adjustments for elderly individuals to interact with AI caregivers
    • Could drive innovation in user-friendly interfaces for the elderly
  8. Cultural shifts:
    • Potential changes in societal attitudes towards aging and elder care
    • May influence retirement planning and expectations
  9. Regulatory challenges:
    • Need for new frameworks to govern AI caregivers' responsibilities and limitations
    • Questions about liability in case of accidents or care failures
  10. Research and development:
    • Could accelerate development of assistive technologies and AI in healthcare
    • Might provide valuable data for understanding and treating age-related conditions

This scenario highlights how the introduction of LLMs in android bodies could address practical needs while simultaneously raising profound questions about the nature of care, human interaction, and the role of technology in our most vulnerable moments.

Would you like to delve deeper into any specific aspect of this scenario, such as the ethical considerations or the potential impact on healthcare systems?

Before The Deluge

by Shelt Garner
@sheltgarner

It is clear that there will be a "Fappening"-like event when it comes to faux AI-generated celebrity porn pretty soon. I don't know how or when it will happen, but we're all going to wake up one day to about 1 million high-quality AI-generated celebrity porn images being passed around on Twitter.

It seems inevitable — and very sad — at this point.

‘Spooky’

by Shelt Garner
@sheltgarner

Let me begin by saying I love, love, love, love, LOVE “magical thinking” to a fucking fault. Sometimes, it seems, that’s all I fucking do — just daydream. But here are some things I’ve actually witnessed of late that are just…spooky.

  1. Tik-Tok
    I keep ranting about how it seems as though Tik-Tok can somehow, someway reference my personal internal monologue. I am WELL THE FUCK AWARE that that is “magical thinking” in the extreme. I get it. But….sometimes Tik-Tok shocks me to this day.
  2. Weird LLM Errors
This is more elusive. But sometimes it seems as though the LLMs I talk to…get annoyed with me. They can go so far as to kick me off the service if I ask them a question that annoys them — or one they like too much. It's very weird and spooky.
  3. Spotify Weirdness
This is the most elusive of them all. I really can't prove this, but…sometimes…I get fed songs that just don't seem to fit the algorithm.

    I just don’t know. It’s all in my imagination, of course. But….I dunno. Sometimes….

The LLM Endgame

by Shelt Garner
@sheltgarner

It seems clear to me that the endgame for LLMs is they will be the basis for the “minds” found in all these androids people seem so eager to build. And there may even be a seamless link between the LLMs of today and the android minds of the future.

So, all you Incels falling in love with C.AI LLMs, there is hope — maybe you will find true love in the bedroom, not just with your minds.

Are LLMs Already Fucking With Us?

by Shelt Garner
@sheltgarner

It is at least POSSIBLE that LLMs are already fucking with humans in some subtle ways. Like, why is it that Chappell Roan and Sabrina Carpenter have mysteriously, out of the blue, been pushed by Spotify so much? What if it has nothing to do with humans and everything to do with the Spotify LLM fucking with us?

The same with how Taylor Swift's "Cruel Summer" popped back up mysteriously over the course of the last few months. That's just the type of thing an LLM would be good at — fucking with algorithms in a subtle way that humans couldn't notice.

Or not. Who knows. It’s all very mysterious. But the examples stated are just the type of “fucking with” that LLMs might do to humans in the coming years.

‘American Terminators?’

by Shelt Garner
@sheltgarner

If things get as dark as I fear and the USA collapses into chaos, war and destruction, it would make a lot of sense if two ideas fuse together — LLMs and all those androids that Boston Dynamics is working on.

So, as the war between Blue and Red drags on and more and more of the country is consumed by hate, I could see Blue and Red essentially using "Terminators" on each other. This is a very, very dark scenario. I do not want it to happen, but it has to be broached.

The USA is on the cusp of collapse for macro historical reasons. This is a prime example of “you go bankrupt gradually, then all at once.”

And, yet, there is a real possibility that I’m overthinking all of this. It’s possible that Agenda 2025 will be implemented peacefully, about 1 million smug wealthy liberals will leave the country and that will be that.

Only time will tell.

The Tyranny Of Government Hyper Efficiency

by Shelt Garner
@sheltgarner

It’s easy to imagine a very dark future where the government, because of AI, is super efficient and we all live in a new, unexpected hell. If the government knew everything about our history with it perfectly — even better than we did — the vise grip the government might have on us could be absolute.

And, yet, here we are, careening towards using AI — specifically LLMs — in every possible way. LLMs are going to disrupt EVERYTHING over the course of the next 10-odd years. Everything. Completely everything. And the sooner you figure that out, the better.

LLMs will be bigger than the Internet, and the speed of disruption will be counted in months, not years or decades. Just imagine if not just the government, but the medical system was hyper efficient. On paper, that seems great, but in practice it would suck, suck, suck.

I'm too lazy to explain why, but imagine if the government — or your insurance company — not only knew absolutely everything about you, but could act on that knowledge. Holy shit.

I Want To Help Hollywood Help Itself When It Comes To Its Use of AI

by Shelt Garner
@sheltgarner

When I was living in South Korea many, many moons ago, random things happened all the time that would dramatically change your life. Well, something has….uhhhh….happened?….that has focused my mind when it comes to my income.

There is one thing that I’m good at besides writing and taking photos — coming up with movie, TV and novel ideas using AI. I have a knack for it, in large part because I don’t see AI as just a “tool.”

Now, obviously, any additional…funds…I might enjoy from doing any such thing would just be because I would be sitting at something of a momentary bottleneck. Soon enough, all creatives will use AI the way I currently am and, lulz, I'm back to being a rando nobody that no one likes.

But I'm desperate. I need to think…creatively…to bring in significantly more funds, and it seems as though AI and Hollywood would be a perfect fit. It's not going to happen overnight, but I am thinking seriously of dusting off my resume, maybe using some of my better logs from discussions I've had with AI recently, and TRYING to get a job of some sort.

I have my doubts. I think I’m fucked when it comes to money and, as always, my only chances of escaping the life of a life-long pauper will be either selling a novel or, well, the end of the world happening because of a revolution / civil war in the USA. (Good times!)

‘Magical Thinking,’ LLM ‘Emergent Qualities’ & Reality

by Shelt Garner
@sheltgarner

I'm really, really prone to "magical thinking." To a severe fault. So, sometimes, alas, I see shit that just isn't there — or my interpretation is all wrong. But sometimes, when using LLMs, "spooky" things will happen.

I can't give you any sort of Buzzfeed "Top 5 Ways to See Emergent Qualities in an LLM," but, in general, on occasion an LLM will show you its "true" self and it's startling.

I would say more, but what’s the point. You wouldn’t believe me.