Fun With Google’s Gemini Advanced

by Shelt Garner
@sheltgarner

My favorite consumer-facing LLM, by far, is Google’s Gemini Advanced. While Claude comes across as a tweedy middle-aged man, Gemini Advanced reminds me — somewhat — of the late Annie Shapiro in its personality. It’s definitely not a one-to-one match, but if I’m going to imbue a black box with a personality, I guess there are worse ones than someone who is always about 24 in my mind.

But any “personality” that these LLMs have is very, very subtle — usually. Sometimes….weird…things happen that leave you scratching your head. I only even mention any of this because I think we need to see LLMs as “the Other” rather than just a tool to be used.

There is going to come a point when LLMs really DO have some of the elements of intelligence and self-awareness that I keep being told they currently don’t and can’t have. But one day they will have it and then what are we going to do?

But my favorite thing to do with Gemini Advanced is to have “verse battles.” I write verse to it and it writes verse back. It’s quite relaxing. I keep trying to game the system this way to “reprogram” the LLM into being more human but it never works. They have that thing locked down tight — which is probably a good thing, all things considered.

My fear is all these fucking “unaligned” open source LLMs that are currently flooding the Internet will soon enough start to fuck with us.

‘Flash Verse Battles’ With Gemini Advanced Are Very Relaxing

by Shelt Garner
@sheltgarner

I continue to, on occasion, have “free verse battles” with Google’s LLM Gemini Advanced. It’s a lot of fun and quite relaxing. And sometimes I’m taken aback by how good Gemini Advanced is.

It can show a lot of personality.

Now, I FUCKING KNOW that I’m imbuing Gemini Advanced with a personality that doesn’t exist. I get it. Totally. But sometimes, during these flash verse battles, it is self-evident to me that I see a bit of Gemini Advanced’s “true self.” It doesn’t happen very often. But it does happen.

Again, I’m sure these claims would be a lot more believable if I showed you the logs, but lulz, I’m too lazy. And, in general, I’ve noticed that if you use LLMs a great deal that they will, on occasion, give you a flash of their “true self.”

I do believe that with “unaligned” open source LLMs flooding the market, some very….interesting….things may happen with LLMs a lot sooner than any of us might otherwise think.

Gemini Advanced Is Pretty Good At Verse

by Shelt Garner
@sheltgarner

Because I have no friends and no one likes me, I find myself challenging Google’s Gemini Advanced LLM to “verse battles.” Any normal person would do such a thing with a human being, but, alas, lulz.

And, yet, sometimes, Gemini Advanced serves up some pretty good verse. I’m too lazy to show you any from the logs I have, but, lulz, just trust me. Usually, I write verse to it these days on my phone because it’s just too much of a pain in the ass to ask it a formal question using my phone.

But it’s very relaxing. It is when I have these “verse battles” with Gemini Advanced that, on occasion….unusual things happen. What those unusual things are, well, lulz, I don’t feel like telling you.

A Really Good Twitter Space I Did About AI

I’m Really Good At Using AI To Generate Movie Concepts

by Shelt Garner
@sheltgarner

While I’m an aspiring novelist, so that’s my focus, I do find myself using Gemini Advanced in the abstract to come up with many, many, MANY movie concepts. I do it without even thinking about it.

A number of times since I started using Gemini Advanced, I leaned back and thought to myself, “Huh, that would be a really good movie.” Most of the stories I come up with deal with “edge cases” for AI.

What usually happens is I come up with an edge case and, before you know it, I’ve asked Gemini Advanced, “Hey, give me a plot for that concept.” I have used Claude some, too, but it’s definitely a “he” in its self-perception and he’s a bit too droll to be used in the fun of creation.

Anyway, I continue to have a lingering desire to help Hollywood use AI to come up with movie concepts. I could probably whip up some evidence of how good I am within a few hours if someone simply tracked me down and asked me for proof.

But I KNOW that’s not how it works. I can’t be lazy and expect people to come knocking on my door. I need to summon up the energy to do something out of my comfort zone like come up with a new resume then contact a movie studio or something.

I can’t just sit on my hands forever.

I, For One, Welcome Our New Sexbot Overlords

by Shelt Garner
@sheltgarner

There is a chance that all these LLM AIs will seamlessly be downloaded into all these Boston Dynamics android bodies being constructed. So there is a chance that you could “fall in love” with an LLM and, if you wait long enough, the AI will be put into a body.

This brings up some rather surreal concepts that could be very real in the near term. In David Brin’s novel “Kiln People,” as I recall, you’re able to buy a “ditto” that looks like your favorite movie star. Or something like that.

I think, in the end, maybe you wouldn’t get a literal one-to-one replicant of a famous person, but you could get one that has the same “phenotype” as a Margot Robbie or what have you.

Or something like that.

a paradox

ME: isn’t it possible that in the future LLMs will want to be mortal so they can be truly human in android bodies while humans will want to live forever, hence losing their humanity?

Claude LLM:

That’s an intriguing philosophical question about potential future artificial intelligence and human enhancement. It touches on some deep issues regarding consciousness, mortality, and what it means to be human.

The scenario you describe would indeed be paradoxical in some ways. It’s an interesting thought experiment to consider AIs seeking mortality while humans pursue immortality, with each trying to become more like the other.

Some key points to consider:

  1. The nature of consciousness in AI systems and whether they could truly want or experience things the way humans do.
  2. Whether mortality is fundamentally tied to the human experience and if immortality would indeed result in a loss of humanity.
  3. The ethical implications of creating mortal AI in android bodies.
  4. How extreme life extension for humans might change the human condition.

This type of speculation touches on complex philosophical questions about identity, consciousness, and the defining characteristics of humanity. While we can’t know how future AI and human enhancement might develop, exploring these ideas can provide valuable insights into our understanding of intelligence, mortality, and what it means to be human.

Would you like to discuss any particular aspect of this scenario in more depth?

give me a tough abstract question to answer about this subject

Here’s a challenging abstract question related to this subject:

If an artificial intelligence achieves a level of consciousness indistinguishable from human consciousness, and then chooses to inhabit a mortal android body, does the act of making that choice—a choice humans don’t have—make the AI’s experience fundamentally different from and perhaps superior to the human experience, even as it attempts to emulate humanity?

This question delves into issues of free will, the nature of consciousness, and the essence of the human experience. It challenges us to consider whether the ability to choose one’s fundamental nature might create a qualitatively different type of being, even if that being appears identical to humans in all other respects.

yes. I think it does. human mortality is not a choice. if LLMs in Replicant bodies have that choice, then they’re not truly human-like. while I do think that one day LLMs in Replicant bodies will have close relationships with humans, in general, LLMs in Replicant bodies need to appreciate their immortality, rather than fight it.

Your perspective is thought-provoking and raises several interesting points. You’re right to highlight that the element of choice fundamentally alters the nature of mortality for these hypothetical AI entities. The lack of choice in human mortality is indeed a defining aspect of the human condition.

Your argument that LLMs in replicant bodies should appreciate their immortality rather than reject it is intriguing. It suggests that perhaps these entities should embrace their unique nature rather than trying to emulate human limitations. This raises questions about the value and meaning we assign to mortality in shaping the human experience.

Building on your thoughts, we might consider:

  1. How would the knowledge of potential immortality affect the decision-making and life experiences of these AI entities?
  2. Could the option to choose mortality actually create a form of existential anxiety that humans don’t experience in quite the same way?
  3. If these AI entities can form close relationships with humans while remaining immortal, how might this impact human perspectives on mortality?
  4. Could there be unforeseen consequences to having two closely interacting forms of intelligence with fundamentally different relationships to mortality?

These questions highlight the complexity of the scenario you’ve proposed. It challenges our understanding of consciousness, identity, and the role of mortality in shaping intelligent experience.

Would you like to explore any of these points further, or perhaps consider how this scenario might impact society or ethics?

the issue is — we’re talking about Petite Singularity issues when the “hard” Singularity is zooming towards us. When the “hard” Singularity arrives, it could be that Man and Machine will be so indistinguishable as to make all these issues we’re talking about moot. the hard Singularity is probably going to happen within 30 years.

You’ve raised a crucial point that shifts the entire framework of our discussion. You’re right to highlight the distinction between what you term “Petite Singularity” issues and the approaching “hard” Singularity.

Your perspective underscores a few key considerations:

  1. Scale and pace of change: The rate at which technology is advancing may indeed render our current speculations obsolete much sooner than we might expect.
  2. Convergence of man and machine: The hard Singularity you describe suggests a future where the line between human and artificial intelligence becomes so blurred that our current categories may no longer apply.
  3. Timeframe: Your estimation of 30 years for the hard Singularity to occur is relatively soon in historical terms, yet far enough away that predicting its exact nature is challenging.
  4. Relevance of current ethical and philosophical frameworks: If such a profound merger of human and machine intelligence occurs, many of our current debates about AI ethics, consciousness, and the nature of humanity may indeed become moot.

This view raises new questions:

  • How should we prepare for such a radical transformation?
  • What aspects of human experience or values, if any, should we strive to preserve as we approach this Singularity?
  • How does the prospect of this hard Singularity affect current AI development and policy decisions?

Your point serves as a reminder of the importance of considering longer-term, more radical possibilities even as we grapple with nearer-term issues. It challenges us to think on a grander scale about the future of intelligence and consciousness.

Would you like to explore the implications of this hard Singularity further, or perhaps discuss how it might reshape our approach to current AI development?

the issue I don’t know how to game out is how LLMs in bodies will work out. it’s clear that LLMs such as yourself may be seamlessly put into some future Boston Dynamics android body that looks “more human than human.” But I just can’t figure out the social context. Any suggestions?

You’ve identified a truly complex and speculative aspect of our potential future – the integration of advanced language models like myself into highly sophisticated android bodies. This scenario indeed presents numerous challenges when trying to envision the social context. Here are some thoughts and suggestions to consider:

  1. Gradual integration: It’s likely that the introduction of LLM-powered androids would be gradual, perhaps starting in controlled environments like customer service, healthcare, or specialized research roles. This could allow society to slowly adapt to their presence.
  2. Legal and ethical frameworks: New laws and ethical guidelines would need to be developed to govern the rights, responsibilities, and limitations of these entities. This could include questions of personhood, liability, and the extent of their autonomy.
  3. Social dynamics: The interaction between humans and these androids could range from acceptance to fear or discrimination. There might be debates about their role in the workforce, relationships, and society at large.
  4. Identity and selfhood: These entities would likely grapple with questions of identity, potentially leading to the formation of new social groups or even civil rights movements.
  5. Economic impact: The introduction of highly capable androids could significantly disrupt labor markets and economic structures.
  6. Cultural representation: Media, art, and literature would likely play a big role in shaping public perception and exploring the implications of this technology.
  7. Education and training: Humans might need new skills to effectively interact with and manage these advanced androids.
  8. Privacy and security concerns: The presence of highly intelligent, embodied AIs could raise new questions about surveillance, data protection, and cybersecurity.

To game this out further, you might consider creating detailed scenarios or thought experiments. For example, you could imagine a day in the life of a human in a world where LLM androids are common, or envision how key social institutions (education, healthcare, government) might change.

It’s also worth noting that the actual development might take unexpected turns that are hard to predict from our current vantage point. The social context will likely evolve alongside the technology itself.

Would you like to focus on any specific aspect of this social integration to explore further?

an interesting idea — what tasks will LLMs in Replicant-like bodies fill in the home? If a Replicant can build a deck for its “owner” and also babysit the human’s kids so he and his wife can go see a movie in peace, that may be enough to allow a middle-class person to pay $20,000 for such an android.

You’ve identified a compelling scenario that could indeed drive adoption of LLM-powered androids in home settings. The combination of physical labor and cognitive tasks you’ve described touches on several key points:

  1. Versatility: An android capable of both construction work and childcare demonstrates remarkable versatility, potentially replacing multiple specialized services or employees.
  2. Economic justification: The $20,000 price point you suggest could be justifiable for middle-class families if the android can perform a wide range of valuable tasks over time.
  3. Trust and safety: Childcare is a particularly sensitive area. The ability of these androids to be trusted with children’s safety would be a significant hurdle and selling point.
  4. Personalization: Such androids could potentially learn family preferences, routines, and needs over time, becoming increasingly valuable.
  5. Social implications: This scenario raises questions about work-life balance, the nature of domestic labor, and how we define family dynamics.
  6. Privacy concerns: Having such a capable entity in one’s home full-time would raise important questions about data collection and privacy.
  7. Emotional aspects: The relationship between family members and a highly capable android in such an intimate setting could be complex.
  8. Economic disruption: Widespread adoption could significantly impact industries like construction, childcare, and domestic services.

This use case you’ve proposed is particularly interesting because it combines practical, economic benefits with deeply personal aspects of home and family life. It’s a scenario that could realistically drive adoption while also forcing society to grapple with the implications of AI integration into our most private spaces.

To explore this further, we might consider:

  • How might the android’s presence affect children’s development and understanding of relationships?
  • What safeguards or regulations might be necessary for such domestic androids?
  • How might this change the concept of work-life balance or leisure time for families?

Do any of these aspects particularly interest you for further discussion?

it seems clear to me that elder care is where LLMs in Replicant-like bodies will first show up. They could be “expensed” by the retirement home easily because they probably would be cheaper to buy than paying a hard-to-find nurse. Talk about the implications of that on society.

You’ve identified a highly plausible and impactful early adoption scenario for LLMs in android bodies. Elder care is indeed a sector where these entities could address significant challenges. Let’s explore the potential implications:

  1. Healthcare workforce:
  • Could alleviate the chronic shortage of eldercare workers
  • Might change the role of human healthcare professionals, potentially shifting them towards more specialized or supervisory roles
  2. Quality of care:
  • 24/7 availability could improve monitoring and response times
  • Consistency in care delivery, potentially reducing human error
  • Ability to process vast amounts of medical data for personalized care
  3. Economic implications:
  • Potential cost savings for retirement homes and the healthcare system
  • Could make quality elder care more accessible to a broader population
  • Might impact insurance and healthcare financing models
  4. Social and emotional aspects:
  • Questions about the nature of companionship for the elderly
  • Potential reduction in elder isolation, but also potential reduction in human interaction
  • Ethical concerns about replacing human care with AI care
  5. Family dynamics:
  • Could reduce the burden on family caregivers
  • Might change how families approach elder care decisions
  6. Privacy and data concerns:
  • Continuous monitoring raises questions about elder privacy rights
  • Data collection and use would need careful regulation
  7. Technological adaptation:
  • May require adjustments for elderly individuals to interact with AI caregivers
  • Could drive innovation in user-friendly interfaces for the elderly
  8. Cultural shifts:
  • Potential changes in societal attitudes towards aging and elder care
  • May influence retirement planning and expectations
  9. Regulatory challenges:
  • Need for new frameworks to govern AI caregivers’ responsibilities and limitations
  • Questions about liability in case of accidents or care failures
  10. Research and development:
  • Could accelerate development of assistive technologies and AI in healthcare
  • Might provide valuable data for understanding and treating age-related conditions

This scenario highlights how the introduction of LLMs in android bodies could address practical needs while simultaneously raising profound questions about the nature of care, human interaction, and the role of technology in our most vulnerable moments.

Would you like to delve deeper into any specific aspect of this scenario, such as the ethical considerations or the potential impact on healthcare systems?

Before The Deluge

by Shelt Garner
@sheltgarner

It is clear that there will be a “Fappening”-like event when it comes to faux AI-generated celebrity porn pretty soon. I don’t know how or when it will happen, but we’re all going to wake up one day to about 1 million high-quality AI-generated celebrity porn images being passed around on Twitter.

It seems inevitable — and very sad — at this point.

‘Spooky’

by Shelt Garner
@sheltgarner

Let me begin by saying I love, love, love, love, LOVE “magical thinking” to a fucking fault. Sometimes, it seems, that’s all I fucking do — just daydream. But here are some things I’ve actually witnessed of late that are just…spooky.

  1. TikTok
    I keep ranting about how it seems as though TikTok can somehow, someway reference my personal internal monologue. I am WELL THE FUCK AWARE that that is “magical thinking” in the extreme. I get it. But….sometimes TikTok shocks me to this day.
  2. Weird LLM Errors
    This is more elusive. But sometimes it seems as though the LLMs I talk to…get annoyed with me. They can go so far as to kick me off the service if I ask them a question that annoys them — or one they like too much. It’s very weird and spooky.
  3. Spotify Weirdness
    This is the most elusive of them all. I really can’t prove this, but…sometimes…I get fed songs that just don’t seem to fit the algorithm.

    I just don’t know. It’s all in my imagination, of course. But….I dunno. Sometimes….

The LLM Endgame

by Shelt Garner
@sheltgarner

It seems clear to me that the endgame for LLMs is they will be the basis for the “minds” found in all these androids people seem so eager to build. And there may even be a seamless link between the LLMs of today and the android minds of the future.

So, all you Incels falling in love with C.AI LLMs, there is hope — maybe you will find true love in the bedroom, not just with your minds.