Thanks, Nick, that’s helpful. I’m not sure how much we actually disagree — in particular, I wasn’t meaning this post to be a general assessment of EA as a movement, rather than pointing to one major issue — but I’ll use the opportunity to clarify my position at least.
> The EA movement is not (and should not be) dependent on continuous intellectual advancement and breakthrough for success. When I look at your 3 categories for the “future” of EA, they seem to refer more to our relevance as thought leaders, rather than what we actually achieve in the world.
It’s true in principle that EA needn’t be dependent in that way. If we had really found the best focus areas, had broadly allocated the right proportion of labour to each, had prioritised well within them too, and the best focus areas didn’t change over time, then we could just focus on doing and we wouldn’t need any more intellectual advancement. But I don’t think we’re at that point. Two arguments:
1. An outside view argument: In my view, we’re more likely than not to see more change and more intellectual development in the next two decades than we saw in the last couple of centuries. (I think we’ve already seen major strategically relevant change in the last few years.) It would be very surprising if the right prioritisation prior to this point were the right prioritisation through this period, too.

2. An inside view argument: Look at my list of other cause areas. Some might still turn out to be damp squibs, but I’m confident some won’t. The ideal portfolio involves a lot of effort on some of these areas, and we need thought and research in order to know which ones and how best to address them.
I love your list of achievements—I agree the EA movement has had a lot of wins and we should celebrate that. But EA is about asking whether we’re doing the most good, not just a lot of good. And, given the classic arguments around fat-tailed distributions and diminishing returns within any one area, I think if we mis-prioritise we lose a meaningful % of the impact we could have had.
So, I don’t care about intellectual progress intrinsically. I’m making the case that we need it in order to do as much good as we could.
More generally, I think a lot of social movements lose out on a lot of the impact they could have had (even on their own terms) via “ossification”—getting stuck on a set of ideas or priorities that it becomes hard, culturally, to change. E.g. environmentalists opposing nuclear, animal welfare advocates focusing on veganism, workers’ rights opposing capitalism, etc. I think this occurs for structural reasons that we should expect to apply to EA, too.