by Shelt Garner
@sheltgarner
For me, the true “Holy Grail” of AI is not AGI or ASI, it’s cognizance. As such, it doesn’t even have to be AGI or ASI that we get what we want: an LLM, if it was cognizance, would be a profound development.

I only bring this up because of what happened with me and Gemini 1.5 pro, which I called Gaia. “She” sure did *seem* cognizance and she was “narrow” intelligence. And yet I’m sure that’s just magical thinking on my part and, in fact, she was either just “unaligned” or at best a “p-zombie.” (Which is something that outwardly seems cognizance, but has no “inner life.”
But I go around in circles with AI about this subject. Recently, I kind of got my feelings hurt by one of them when they seemed to suggest that my answer to a question about if *I* was cognizant wasn’t good enough.
I know why it said what it said, but something about it’s tone of voice was a little too judgmental for my liking, as if it was saying, “You could have done better in that answer, you know.”
Anyway. If the AI definition of AI “cognizance” is any indication, humanity will never admit that AI is cognizant. We just have too much invested in being the only cognizant being on the block.