Thanks for the post! I really like the attempt to use survey data to ensure that the definition reflects the views of the leaders and members of the EA community.
I agree that the maximizing nature of effective altruism is an important part of its public value. In my mind, EA has made most of its strides because it wasn’t satisfied with merely providing a non-zero amount of help to people. Although we often use examples like PlayPumps that were probably net negative, the founders of GiveWell would have had a much easier time if they were just trying to find net-positive charities.
However, I’m not sure that maximizing is as clearly uncontroversial as you believe. I would guess that if surveys asked about it, leaders would be fairly united behind it, but it would get something in the range of 50% to 75% support from the community at large.
I can do an informal poll of this group and report back.
I’d also be interested in a discussion of the limits of maximizing. For example, if an EA is already working on something in the 80th percentile of effectiveness, do they find it compelling to switch to something in the 90th percentile?
My informal poll of the Effective Altruism Polls group asked:
Does your working definition of effective altruism define it as “maximizing”, at least in large part?
It got 30 votes Yes and 3 votes No. There are various problems with the informality of the poll, but I’m updating towards this being less controversial than I thought.
I was one of the non-maximizers in the poll. To expand on what I wrote there, I hold the following interpretation of this bit of Will’s article:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.
When I see “the use of the findings from (i)”, I don’t take that to necessarily mean “acting to maximize good with a high level of certainty”. Instead, I interpret it as “using what we understand about maximizing good in order to make decisions that help you do more good, even if you are not actively trying to maximize your own impact”.
To use an example:
Let’s say that someone donates 10% of their income to AMF, because they read GiveWell’s research and believe that AMF is among the best opportunities in global health/development.
Let’s say that this person also hasn’t carried out explicit expected-value calculations. If you ask them whether they think AMF is the best option for doing good, they say:
“Probably not. There are hundreds of plausible ways I could donate my money, and the odds that I’ve chosen the right one are quite low. But I don’t want to spend a lot of time trying to combine my values and the available evidence to find a better option, because I don’t think that I’ll increase my certainty/impact enough for that use of time to feel valuable.”
I still consider this person to be “practicing effective altruism”. There are probably ways they could do more good, even accounting for the time/energy/happiness costs of learning more, so one could think of them as slightly “lazy” in their approach. Even so, they used evidence and reasoning to evaluate GiveWell’s AMF writeup and determine that AMF was an unusually good option compared to the other options they knew about. They then acted on this finding to try to improve the world through a donation. This feels EA-aligned to me, even if they aren’t actually trying to “maximize” their impact.
There is a spectrum of “maximization”, ranging from people who follow EA charity evaluators’ suggestions without doing any original research to people who conduct hundreds of hours of research per year and are constantly seeking better options. I think that people in EA cover the full spectrum, so my definition of “EA” doesn’t emphasize “maximizing”—it’s about “using evidence and reason to do better, even if you stop trying to do better after a certain point, as long as you aren’t committing grave epistemic sins along the way”.
...or something like that. I haven’t yet worked this out rigorously.