Thank you for contributing this. I enjoyed reading it, and I thought it made more explicit a tendency (which I might be imagining) of some people in EA to “look at other cause-areas with Global Health goggles.”
Here are some notes I’ve taken to try to put everything you’ve said together. Please let me know if what I’ve written here omits anything or presents it inadequately. I’ve also included additional remarks on some of these points.
Central Idea: [EA’s claim that some pathways to good are much better than others] is not obvious, but widely believed (why?).
Idea Support 1: The expected goodness of available actions in the altruistic market differs (across several orders of magnitude) based on the state of the world, which changes over time.
If the altruistic market were made efficient (which EA might achieve), then the available actions with the highest expected goodness, which change with the changing state of the world, would routinely be identified in any world state. What counts as a top action in one world state does not necessarily generalize to others.
Idea Support 2: Hindsight bias routinely warps our understanding of which investments, decisions, or beliefs were best at the time, by leading us to believe that the best actions were more predictable than they actually were. It is plausible that this generalizes to altruism. As such, we run the risk of being overconfident that, despite the changing state of the world, the actions with the highest expected goodness now will still be the actions with the highest expected goodness in the future, whether near-term or long-term.
(why?): The cause-area of global health has well-defined metrics of goodness, so the subset of the altruistic market that deals with global health is likely close to being efficient.
Idea Support 3: The fact that altruism within global health is likely close to being efficient gives us little reason to believe that altruism within other cause-areas is close to efficient, or can even be made efficient, given their domain-specific uncertainties.
Idea Support 4: How well “it’s possible to do a lot of good with a relatively small expenditure of resources” generalizes beyond global health is unclear, and it should probably not be a default belief for other cause-areas. The expected goodness of actions in global health is contingent upon the present world state, which will change (as altruism in global health progresses and becomes more efficient, the expected goodness of the actions we take today to further global health will see diminishing returns).
Action Update 1: Given the altruistic efficiency and clarity within global health, and given people’s support for it, it makes sense to introduce newcomers to EA’s altruistic market in global health; however, we should not “trick” them into thinking EA is solely or mostly about altruism in global health. Rather, we should frame EA’s altruistic market in global health as an example of what a market likely close to being efficient can look like.
I think the main thing this seems to be missing is that I’m not saying global health has an efficient altruistic market—I’m saying that if anything does you should expect to see it there. But actually we don’t even see it there … reasonable-looking health interventions vary by ~four orders of magnitude in cost-effectiveness, and the most cost-effective are not fully funded.