“The greater the degree of spread, the more you’re giving up by not searching.”
This makes sense. But I don’t think you have to agree with the “big” part of premise 1 to support and engage with the project of effective altruism; e.g. you could think that the differences are small but still worth pursuing.
The “big” part seems like a common claim within effective altruism but not necessarily a core component?
(You didn’t claim explicitly in the post above that you have to agree with the “big” part but I think it’s implied? I also haven’t listened to the episode yet.)
I’d say that pursuing the project of effective altruism is worthwhile only if the opportunity cost of searching, C, is justified by the amount of additional good you do as a result of searching for better ways to do good rather than going by common sense, A. It seems to me that if C ≥ A, then pursuing the project of EA wouldn’t be worth it. If, however, C < A, then pursuing the project of EA would be worth it, right?
To be more concrete, let us say that the difference in value between the commonsense distribution of resources to do good and the ideal is only 0.5%. Let us also assume that it would cost you only a minute to find the ideal distribution, and that the value of spending that minute in your commonsense way is smaller than that 0.5% increase. Surely it would still be worth seeking the ideal distribution (= engaging in the project of EA), right?
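A minimal sketch of this decision rule in Python (every number below is a made-up assumption for illustration, not an estimate):

```python
# Toy model of the search-vs-act decision above. All figures are
# illustrative assumptions.

baseline_value = 1000.0    # good done per year via the commonsense allocation (arbitrary units)
spread = 0.005             # assume the ideal allocation is 0.5% better than commonsense
search_minutes = 1.0       # assume one minute suffices to find the ideal allocation
minutes_per_year = 365 * 24 * 60

# A: additional good per year from switching to the ideal allocation.
A = baseline_value * spread

# C: opportunity cost of searching, i.e. the good forgone during the
# minute spent searching instead of doing good the commonsense way.
C = baseline_value * (search_minutes / minutes_per_year)

print(f"A = {A}, C = {C:.5f}")          # A = 5.0, C ≈ 0.00190
print("Worth searching" if C < A else "Not worth searching")
```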
I like the idea of thinking about it quantitatively like this.
I also agree with the second paragraph. One way of thinking about this is that if identifiability is high enough, it can offset low spread.
The importance of EA is proportional to the product of the degrees to which the three premises hold.
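A hypothetical toy version of that in Python (scaling each premise to a degree in [0, 1] is my own framing, not anything from the post):

```python
# Hypothetical toy model: each premise holds to some degree in [0, 1],
# and the importance of EA is taken to be their product.

def importance(spread: float, identifiability: float, premise3: float) -> float:
    """Product model of EA's importance; arguments are premise degrees."""
    return spread * identifiability * premise3

# The product form also shows the offsetting point made above:
# high identifiability can compensate for low spread (both print 0.045).
print(importance(spread=0.1, identifiability=0.9, premise3=0.5))
print(importance(spread=0.9, identifiability=0.1, premise3=0.5))
```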
I don’t think I would have the patience for EA thinking if the spread weren’t big. Why bother with a bunch of sophisticated-looking models and arguments to only make a small improvement in impact? Surely it’s better to just get out there and do good?
Depends. As Ben and Aaron explain in their comments, high identifiability should in theory be able to offset low spread. In other words, if the opportunity cost of engaging in EA thinking is small enough, it might be worth engaging in it even if the gain from doing so is also small.
Certainly there’s a risk that it turns into a community-wide equivalent of procrastination if the spreads are low. Would love someone to tackle that rigorously and empirically!
Hi Jamie,
I think it’s best to think about the importance of EA as a matter of degree. I briefly mention this in the post:

“Moreover, we can say that it’s more of a mistake not to pursue the project of effective altruism the greater the degree to which each of the premises hold. For instance, the greater the degree of spread, the more you’re giving up by not searching (and same for the other two premises).”
I agree that if there were only, say, 2x differences in the impact of actions, EA could still be very worthwhile. But it wouldn’t be as important as in a world where there are 100x differences. I talk about this a little more in the podcast.
I think ideally I’d reframe the whole argument to be about how important EA is rather than whether it’s important or not, but the phrasing gets tricky.