Thanks for engaging, Henry!
Let me try to illustrate how I think about this with an example. Imagine the following:
Nearterm effects on humans are equal to 1 in expectation.
This estimate is very resilient, i.e. it will not change much in response to new evidence.
Other effects (on animals and in the longterm) are −1 k with 50% likelihood, and 1 k with 50% likelihood, so they are equal to 0 in expectation.
These estimates are not resilient: in response to new evidence, there is a 50% chance the other effects will be negative in expectation, and a 50% chance they will be positive in expectation.
However, it is very unlikely that the other effects will be between −1 and 1 in expectation, i.e. they will most likely dominate the expected nearterm effects.
What do you think is a better description of our situation?
The overall effect is 1 (= 1 + 0) in expectation. This is positive, so the intervention is robustly good.
The overall effect is −999 (= 1 − 1 k) with 50% likelihood, and 1,001 (= 1 + 1 k) with 50% likelihood. This means the expected value is positive. However, given the lack of resilience of the other effects, we have little idea whether it will continue to be positive or turn out negative in response to new evidence. So we should not act as if the intervention is robustly good. Instead, it would be good to investigate the other effects further, especially because we have not even tried very hard to do that in the past.
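To make the contrast between the two descriptions concrete, here is a minimal numerical sketch of the toy example above; all numbers are the illustrative ones from the example, not estimates of any real intervention.

```python
# A minimal sketch of the toy example above. All numbers are the illustrative
# ones from the example, not empirical estimates.

nearterm = 1                # resilient expected nearterm effect on humans
other_outcomes = {          # non-resilient other effects (animals + longterm)
    -1_000: 0.5,            # 50% likelihood of a strongly negative outcome
    1_000: 0.5,             # 50% likelihood of a strongly positive outcome
}

# Description 1: collapse everything into a single expected value.
expected_other = sum(value * p for value, p in other_outcomes.items())
expected_overall = nearterm + expected_other
print(expected_overall)     # 1.0 (= 1 + 0), positive in expectation

# Description 2: keep the whole distribution of overall outcomes.
overall_outcomes = {nearterm + value: p for value, p in other_outcomes.items()}
print(overall_outcomes)     # {-999: 0.5, 1001: 0.5}

# Chance that the sign of the realised overall effect disagrees with the sign
# of the expected value, i.e. that further investigation could flip the verdict.
p_sign_flip = sum(p for value, p in overall_outcomes.items() if value < 0)
print(p_sign_flip)          # 0.5
```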
I'm curious: How do you feel about hyperfocused neartermist interventions which alter as little of the rest of the world as possible?
An example of this would be humane slaughter, which shouldn't have much effect on farmed animal, wild animal, or human populations, other than reducing a farmed animal's suffering at the moment of death.
It's plausible that certain hyperfocused neartermist interventions can be precisely targeted enough that the overall effect is more like −1 with 50% likelihood, or 3 with 50% likelihood. A portfolio of independent hyperfocused interventions could be shown to have quite strong robustness.
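For what it is worth, a rough Monte Carlo sketch of that portfolio intuition is below; the −1/+3 payoffs are the ones mentioned above, while the portfolio size of 30 and the assumption of fully independent interventions are illustrative choices, not claims about any real portfolio.

```python
# A rough Monte Carlo sketch of the portfolio intuition above. The -1/+3
# payoffs come from the comment; the portfolio size of 30 and the use of
# independent draws are assumptions made only for illustration.
import random

random.seed(0)

def hyperfocused_intervention():
    """One intervention whose overall effect is -1 or +3, each with 50% likelihood."""
    return random.choice([-1, 3])

def portfolio_effect(n_interventions):
    """Total effect of a portfolio of independent hyperfocused interventions."""
    return sum(hyperfocused_intervention() for _ in range(n_interventions))

n_simulations = 100_000
results = [portfolio_effect(30) for _ in range(n_simulations)]
p_net_positive = sum(r > 0 for r in results) / n_simulations
print(p_net_positive)  # close to 1: the portfolio is almost always net positive
```

Under these assumptions, the simulated portfolio comes out net positive in well over 99% of runs, which is the sense in which independence would buy robustness.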
Thanks for asking! I have not thought much about it, but I feel like neartermist approaches which focus on increasing (animal/human) welfare per individual are more robustly good. Interventions which change human population size will lead to a different number of wild animals, which might dominate the overall nearterm effect while having an unclear sign.
I disagree with the assumption that those +1000/−1000 longterm effects can be known with any certainty, no matter how many resources you spend on studying them.
The world is a chaotic system. Trying to predict where the storm will land as the butterfly flaps its wings is unreasonable. Also, some of the measures you're trying to account for (e.g. the utility of a wild animal's life) are probably not even measurable. The combination of these two difficulties makes me very dubious about the value of trying to do things like factor long-term mosquito wellbeing into bednet effectiveness calculations, or trying to account for the far-future risks/benefits of population growth when assessing the value of vitamin supplementation.
Thanks for following up!
I agree there will always be lots of uncertainty, even after spending tons of resources investigating the longterm effects. However, we do not need to be certain about the longterm effects. We only have to study them enough to ensure our best estimate of their expected value is resilient, i.e. that it will not change much in response to new information.
If people at Open Philanthropy and Rethink Priorities spent 10,000 hours researching the animal and longterm effects of GiveWell's top charities, are you confident their best estimate for the expected animal and longterm effects would be negligible in comparison with the expected nearterm human effects? I am quite open to this possibility, but I do not understand how it is possible to be confident either way, given very little research has been done so far on animal and longterm effects.
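As a hedged illustration of what resilience could mean quantitatively, here is a toy sketch in which an estimate counts as resilient when its posterior mean barely moves after a plausible new study; the normal-normal model and all numbers are assumptions made purely for illustration.

```python
# A toy sketch of one way to read "resilient" quantitatively: how little the
# best estimate (here, a posterior mean) moves after a plausible new study.
# The normal-normal model and every number below are assumptions for
# illustration only, not claims about any real intervention.
import math

def update(prior_mean, prior_sd, observation, observation_sd):
    """Conjugate update of a normal prior with one noisy observation."""
    prior_precision = 1 / prior_sd**2
    obs_precision = 1 / observation_sd**2
    posterior_var = 1 / (prior_precision + obs_precision)
    posterior_mean = posterior_var * (prior_precision * prior_mean + obs_precision * observation)
    return posterior_mean, math.sqrt(posterior_var)

# Resilient estimate: a precise prior barely moves, even after a surprising study.
print(update(prior_mean=1.0, prior_sd=0.1, observation=2.0, observation_sd=1.0))

# Non-resilient estimate: a diffuse prior swings a lot with comparable evidence.
print(update(prior_mean=0.0, prior_sd=1_000.0, observation=800.0, observation_sd=500.0))
```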
A butterfly flapping its wings can cause a storm, but it can just as well prevent a storm. These are cases of simple cluelessness in which there is evidential symmetry, so they are not problematic. The animal and longterm effects of saving lives are not symmetric in that way. For example, we can predict that humans work and eat, so increasing population will tend to grow the economy and food production.
For intuitions that measuring wild animal welfare is not impossible, you can check research from Wild Animal Initiative (one of ACE's top charities, so they are presumably doing something valuable), and Welfare Footprint Project's research on assessing wild animal welfare.
"estimate… will not change much in response to new information" seems like the definition of certainty.
It seems very optimistic to think that by doing enough calculations and data analysis we can overcome the butterfly effect. Even your example of the correlation between population and economic growth is difficult to predict (e.g. concentrating wealth by reducing family size might have positive effects on economic growth).