The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention. - Will
Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have gone and will continue to go in that direction even without GiveWell. - Uri
I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I'm curious if this is the crux: I would guess that Will agrees that (certain types of) climate change work is plausibly as good as anti-malaria work, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider WAW more impactful than climate change.
It is worth noting that a lot of core EAs have pivoted from global poverty to X-risk, a major shift in priorities, without ever changing their position on climate change (something that a priori seems important from both perspectives). This isn't necessarily wrong, but does seem a bit suspicious.
Given that climate change is somewhat GCR/X-risky, it wouldn't surprise me if it were more valuable on the margin than anti-malaria work. But both the X-risk people and the global poverty people seem sceptical about climate change work; that intersection is somewhat surprising, and I think it is a major part of my own scepticism.
Like, if you have two groups of people, and one group says "we should definitely prioritise A and B, but not C or D, and probably not E either", and the other group says "we should definitely prioritise C and D, but not A or B, and probably not E either", it doesn't seem like it's looking good for E.
But I might be reading that all wrong, and maybe everyone thinks that climate change is, like, the fourth-best cause, and as a result it should get more points even though nobody thinks it's top? This sounds like one of those moral uncertainty questions.
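(As a toy illustration of that "points" question, here is a minimal sketch of how a simple Borda-style score would treat a cause that everyone ranks in the middle. The rankings and the scoring rule are entirely my own assumptions, not anything stated in the thread.)

```python
# Toy Borda-style scoring of the two-group scenario above.
# "E" stands in for climate change; the rankings are hypothetical.

def borda_scores(rankings):
    """Give each cause (n - 1 - position) points per ranking and sum."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, cause in enumerate(ranking):
            scores[cause] = scores.get(cause, 0) + (n - 1 - position)
    return scores

rankings = [
    ["A", "B", "E", "C", "D"],  # group 1: A and B first, E third
    ["C", "D", "E", "A", "B"],  # group 2: C and D first, E third
]
print(borda_scores(rankings))
# {'A': 5, 'B': 3, 'E': 4, 'C': 5, 'D': 3}
```

On these assumed rankings, E beats each group's rejected causes but still trails anything that at least one group puts first, which fits the pessimistic reading above.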
While I never considered poverty reduction a top cause, I do consider climate change work to be quite a bit more important than poverty reduction in terms of direct impact, because of GCR-ish concerns (though overall still very unimportant compared to more direct GCR work). My guess is that this is also true of most of the people I work with who are primarily concerned about GCR-type things, though the topic hasn't come up very often, so I am not very confident about this.
I do actually think there is value in poverty-reduction-like work, but that value is primarily epistemic: poverty reduction requires many fewer long-chained inferences about the world, which makes it seem more robustly good to me than any of the GCR perspectives, and also seems like it would teach us more about how the world works than working on climate change would. So broadly I am more excited about working with people who work on global poverty than with people who work on climate change (since I think the epistemic effects dominate the actual impact calculation here).
This is perhaps a bit off-topic, but I have a question about this sentence:
I do actually think there is value in poverty-reduction-like work
Would it be correct to say that poverty-reduction work isn't less valuable in absolute terms in a longtermist worldview than it is in a near-termist worldview?
One reason poverty reduction looks great is that returns to income seem roughly logarithmic. This applies in both worldviews. The difference in a longtermist worldview is that causes like x-risk reduction gain a lot in value, which makes poverty reduction seem less valuable relative to the best things we can do. But since there's no reason to think individual utility functions differ between long- and near-termist worldviews, the absolute utility gain from transferring resources from high-income to low-income people is the same.
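(To make the logarithmic-returns point concrete, here is a minimal sketch with made-up incomes, assuming pure log utility; the numbers are illustrative, not from any source.)

```python
import math

def log_utility_gain(donor_income, recipient_income, transfer):
    """Net change in total log utility when `transfer` dollars move
    from a high-income donor to a low-income recipient."""
    recipient_gain = math.log(recipient_income + transfer) - math.log(recipient_income)
    donor_loss = math.log(donor_income) - math.log(donor_income - transfer)
    return recipient_gain - donor_loss

# A $500 transfer from a $50,000 earner to a $500 earner roughly
# doubles the recipient's income while barely denting the donor's:
print(log_utility_gain(50_000, 500, 500))  # ~0.683
```

The net gain is dominated by the recipient's side, and nothing in the calculation depends on which worldview you hold; only the comparison class changes.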
Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):
EA Funds, which I will assume is representative of EA priorities, has these funds: a) "Global Health and Development"; b) "Animal Welfare"; c) "Long-Term Future"; d) "EA Meta". Let's leave D aside for the purposes of this discussion.
There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done enough research to determine if this is the case.
The arguments in favor of C being the only area we should be concerned with, or the area we should be most concerned with, are:
I) reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time; suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.
II) popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that, and it might even be needed to build a dedicated community.
For these reasons I think we are biased towards C, and should employ measures to correct for this bias.
None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value-add in A, B & C.
Climate change should be researched just as much as A & B. One way of accounting for the bias I see towards C is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and even the best-case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.
What surprises me the most from the discussion of this post (and I realize its readers are a tiny sample of the larger community) is that no one has come back with: "We did the research years ago and could find no marginal value-add; please read this article for all the details."