I will admit I don’t take AI x-risk seriously at all. When I first read Yud’s ideas on how we all die, I thought it was a joke. Not serious. And so it’s extra insane to me that people want rules like this: https://x.com/npcollapse/status/1704904095155245194
To be fair, there aren’t many other groups as close to the heartbeat of emergent philosophy as the existential risk folks are.
I disregard anyone taking Yud seriously. The guy thinks he can pull off a fedora; safe to say he’s delusional.
Those are insane recommendations, but I’d be interested to see some reasonable ones.
Are humans good? Yeah, mostly. Okay, cool, we’re chilling then. That’s what I think.