Further reasons this is a good idea:
-It reduces the degree to which the community is an epistemic bubble.
-On priors, it would be surprising if the best opportunities to do good all involved being part of one pretty small, eccentric community. (Even if you think AI risk is more important than everything else, you don’t need to be “in EA” to help with AI risk. You could just be a civil servant working on governance or something like that.)