This philosophy seems starkly at odds with 80k's recent hard shift into AI safety. The arguments for the latter, at least as an extinction risk, necessarily lack good evidence. If you're still reading this, I'm curious whether you disagree with that assessment, or whether you've shifted the view you espoused in the OP?