You’re really sure that developing AGI is impossible
I don’t need to think this in order to think AI is not the top priority. I just need to think it’s hard enough that other risks dominate it. E.g. I might think biorisk has a 10% chance of ending everything each century, while risks from AI are at 5% this century and 10% every century after that. Then, since biorisk is at least as likely as AI risk over any time horizon, if all else is equal (such as tractability) I should work on biorisk.
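To make the comparison concrete, here is a minimal sketch using the hypothetical numbers above; the figures, the five-century horizon, and the `cumulative_risk` helper are purely illustrative, not part of the original argument:

```python
# Illustrative only: cumulative extinction probability under the
# hypothetical numbers above.
# Assumed per-century risks: biorisk 10% every century;
# AI 5% in the first century, 10% thereafter.

def cumulative_risk(per_century_risks):
    """Probability that the risk materialises in at least one century."""
    survival = 1.0
    for p in per_century_risks:
        survival *= (1 - p)
    return 1 - survival

centuries = 5
bio = [0.10] * centuries
ai = [0.05] + [0.10] * (centuries - 1)

print(round(cumulative_risk(bio), 2))  # ~0.41
print(round(cumulative_risk(ai), 2))   # ~0.38, never exceeding biorisk
```

Because the per-century AI risk never exceeds the per-century biorisk in this toy setup, the cumulative biorisk is at least as large over any horizon you pick.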