I was one of the non-maximizers in the poll. To expand on what I wrote there, I hold the following interpretation of this bit of Will’s article:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.
When I see “the use of the findings from (i)”, I don’t take that to necessarily mean “acting to maximize good with a high level of certainty”. Instead, I interpret it as “using what we understand about maximizing good in order to make decisions that help you do more good, even if you are not actively trying to maximize your own impact”.
To use an example:
Let’s say that someone donates 10% of their income to AMF because they read GiveWell’s research and believe that AMF is among the best opportunities in global health/development.
Let’s say that this person also hasn’t carried out explicit expected-value calculations. If you ask them whether they think AMF is the best option for doing good, they say:
“Probably not. There are hundreds of plausible ways I could donate my money, and the odds that I’ve chosen the right one are quite low. But I don’t want to spend a lot of time trying to combine my values and the available evidence to find a better option, because I don’t think that I’ll increase my certainty/impact enough for that use of time to feel valuable.”
I still consider this person to be “practicing effective altruism”. There are probably ways they could do more good, even after accounting for the time/energy/happiness costs of learning more; one could think of them as slightly “lazy” in their approach. Even so, they used evidence and reasoning to evaluate GiveWell’s AMF writeup and determine that AMF was an unusually good option compared to the other options they knew about. They then acted on this finding, trying to improve the world through a donation. This feels EA-aligned to me, even if they aren’t actually trying to “maximize” their impact.
There is a spectrum of “maximization”, ranging from people who follow EA charity evaluators’ suggestions without doing any original research to people who conduct hundreds of hours of research per year and are constantly seeking better options. I think that people in EA cover the full spectrum, so my definition of “EA” doesn’t emphasize “maximizing”—it’s about “using evidence and reason to do better, even if you stop trying to do better after a certain point, as long as you aren’t committing grave epistemic sins along the way”.
...or something like that. I haven’t yet worked this out rigorously.