I was one of the non-maximizers in the poll. To expand on what I wrote there, I hold the following interpretation of this bit of Will's article:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding "the good" in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.
When I see "the use of the findings from (i)", I don't take that to necessarily mean "acting to maximize good with a high level of certainty". Instead, I interpret it as "using what we understand about maximizing good in order to make decisions that help you do more good, even if you are not actively trying to maximize your own impact".
To use an example:
Let's say that someone donates 10% of their income to AMF, because they read GiveWell's research and believe that AMF is among the best opportunities in global health/development.
Let's say that this person also hasn't carried out explicit expected-value calculations. If you ask them whether they think AMF is the best option for doing good, they say:
"Probably not. There are hundreds of plausible ways I could donate my money, and the odds that I've chosen the right one are quite low. But I don't want to spend a lot of time trying to combine my values and the available evidence to find a better option, because I don't think that I'll increase my certainty/impact enough for that use of time to feel valuable."
I still consider this person to be "practicing effective altruism". There are probably ways they could do more good even accounting for the time/energy/happiness costs of learning more. One could think of them as slightly "lazy" in their approach. Even so, they used evidence and reasoning to evaluate GiveWell's AMF writeup and determine that AMF was an unusually good option, compared to other options they knew about. They then acted on this finding to try to improve the world through a donation. This feels EA-aligned to me, even if they aren't actually trying to "maximize" their impact.
There is a spectrum of "maximization", ranging from people who follow EA charity evaluators' suggestions without doing any original research to people who conduct hundreds of hours of research per year and are constantly seeking better options. I think that people in EA cover the full spectrum, so my definition of "EA" doesn't emphasize "maximizing": it's about "using evidence and reason to do better, even if you stop trying to do better after a certain point, as long as you aren't committing grave epistemic sins along the way".
...or something like that. I haven't yet worked this out rigorously.