The concerns you raise in your linked post are the same ones a lot of other people have cited for why they don’t currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I’ve talked to who don’t share those priorities say they’d be open to shifting toward them in the future, but for now they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is directing more and more effort at the sources of those unresolved concerns, such as building a demonstrated ability to predict the long-term future. That work is only beginning, though.