The ‘AI 2027’ Report Is Full Of Shit: No One Is Going To Save Us

by Shelt Garner
@sheltgarner

While I haven’t read the AI 2027 report itself, I have listened to its authors discuss it on a number of podcasts, and…oh boy. I think it’s full of shit, primarily because they seem to believe there is any scenario in which ASI doesn’t pop out unaligned.

[Image: What ChatGPT thinks it’s like to interact with me as an AI.]

No one is going to save us, in other words.

If we really do face ASI becoming a reality by 2027 or so, we’re on our own, and whatever the worst-case scenario is, that is what is going to happen.

I’d like to THINK that maybe we won’t be destroyed by an unaligned ASI, but it is something we have to consider. At the same time, I believe there needs to be a Realist school of thought that accepts both that cognizant ASI will exist and that it will be unaligned.

I would like to hope against hope that, if an ASI is cognizant, it might have less reason to destroy all of humanity. Only time will tell, I suppose.

Author: Shelton Bumgarner

I am the Editor & Publisher of The Trumplandia Report
