A useful thought experiment is to imagine 100 different timelines where effective altruism emerged. How consistent do you think the movement's cause priorities would be across these 100 different timelines?
This is a useful exercise. I think that in many of these timelines, EA fails to take AI risk seriously (in our timeline, this only happened in the last few years), and this is a big loss. Also, in a lot of timelines the relative influence of rationality, transhumanism, philosophy, philanthropy, policy, etc., as well as of existing popular movements like animal rights and social justice, is probably quite different. This would be good to the extent these movements bring in more knowledge, good epistemics, and operational competence, and bad to the extent they either (a) bring in bad epistemics or (b) cause EAs to fail to maximize due to preconceptions.
My model is something like this: to rank animal welfare as important, you have to have enough either utilitarian philosophers or animal rights activists to get "factory farming might be a moral atrocity" into the EA information bubble, and then it's up to the epistemics of decision-makers and individuals making career decisions. A successful movement should be able to compensate for some founder effects, cultural biases, etc. just by thinking well enough (to the extent that these challenges are epistemic challenges rather than values differences between people).
I do feel a bit weird about saying "where effective altruism emerged," as it sort of implies communities called "effective altruism" are the important ones, whereas I think the ones that focus on doing good and have large epistemic advantages over the rest of civilization are the important ones.