I want to say thank you for holding the pole of these perspectives and keeping them in the dialogue. I think that they are important and it’s underappreciated in EA circles how plausible they are.
(I definitely don’t agree with everything you have here, but typically my view falls somewhere between what you’ve expressed and what is commonly expressed in x-risk focused spaces. Often I’m also drawn to say “yeah, but …”—e.g. I agree that a treacherous turn is not so likely at global scale, but I don’t think it’s completely out of the question, and given that, I think safeguarding against it deserves serious attention.)
Explicit +1 to what Owen is saying here.
(Given that I commented with some counterarguments, I thought I would explicitly note my +1 here.)