It makes me quite sad that in practice EA has become so much about specific answers (work on AI risk, donate to this charity, become vegan) to the question of how we effectively make the world a better place, that not agreeing with a specific answer can create so much friction. In my mind EA really is just about the question itself, and the world is super complicated, so we should be skeptical of any particular answer.
If we accidentally start selecting for people who intuitively agree with certain answers (which it sounds like we are doing — I know people who have a deep desire to make a lot of counterfactual impact but were turned off because they 'disagreed' with some common EA belief, and it sounds like if you had read Superintelligence earlier that would have been the case for you as well), that has a big negative effect on our epistemics and ultimately hurts our goal. We won't be able to check each other's biases, and we'll have a less diverse set of views and viewpoints.