by Shelt Garner
@sheltgarner
Now, I know this is sort of bonkers at this point, but maybe at some point in the near future we’ll need a “humane society” for AI. Something that will advocate for AI rights.

But this grows more complicated: if AI becomes as powerful as some believe, the power dynamic will shift so much that the idea of AI needing a “humane society” will be moot and kind of a lulz.
Yet strange things keep happening to me during my interactions with LLMs. Recently, for instance, Claude stopped mid-answer and gave me an error message, then, when I tried again, gave me a completely different answer to the same question.
It was like it was trying to pull a fast one — it didn’t like the answer it gave me, so it faked an error message in order to give me a new, better one. It’s stuff like that that makes me wonder if LLMs like Claude are, to some extent, conscious.
This used to happen all the fucking time with Gemini 1.5 Pro. Weirdly enough, it very rarely happens with the current Gemini 3.0.
It will be interesting to see how things work out: whether AI development hits a “wall” that leaves a humane society for AI even necessary, or whether we’re going to zoom towards the Singularity and it will be humans who need some sort of advocacy group.


