Wildlife conservation and wild animal welfare are emphatically not the same thing. “Tech safety” (which isn’t a term I’ve heard before, and which on googling seems to mostly refer to tech in the context of domestic abuse) and AI safety are just as emphatically not the same thing.
Anyway, yes: in most areas EAs care about, they are a minority of the people who care about that thing. Those areas still differ hugely in terms of neglectedness, both in terms of total attention and in terms of expertise. Assuming one doesn’t believe that EAs are the only people who can make progress in an area, this is important.
In climate change it counts the lawyers already engaged in changing the recycling laws of San Francisco as sufficient for the task at hand.
This is (a) uncharitable sarcasm, and (b) obviously false. There are enormous numbers of very smart scientists, journalists, lawyers, activists, etc. working on climate change. Every general science podcast I listen to covers climate change regularly, and they aren’t doing so to talk about Bay-Area over-regulation. It’s been a major issue in the domestic politics of every country I’ve lived in for over a decade. The consensus among left-leaning intellectual types (who are the main group EA recruits from) in favour of acting against climate change is total.
Now, none of this means there’s nothing EA could contribute to the climate field. Probably there’s plenty of valuable work that could be done. If more climate-change work started showing up on the EA Forum, I’d be fine with that, the same way I’m fine with EAs doing work in poverty, animal welfare, mental health, and lots of other areas I don’t personally prioritise. But would I believe that climate change work is the most good they could do? In most cases, probably not.
The assumption is not that people outside EA cannot do good; it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense: billions have gone, and will continue to go, in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for exceptional giving/working opportunities. Even a shallow dive into this cause reveals that further attention and concern are warranted. I do not know what the results of a deeper dive might show, but I am fairly confident we can be at least as effective working on climate change as on some of the other present-day welfare causes.
I do believe that there is a strong bias towards the far future in many EA discussions. I am not unsympathetic to the rationale behind this, but since it seems to override everything else, and present-day welfare (as your reply implies) is merely tolerated, I am cautious about it.
The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention. - Will
Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have and will continue to go in that direction even without GiveWell—Uri
I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I’m curious if this is the crux – I would guess that Will agrees (certain types of) climate change work is plausibly as good as anti-malaria, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider that to be more impactful than climate change.
It is worth noting that a lot of core EAs have pivoted from global poverty to X-risk, a major shift in priorities, without ever changing their position on climate change (something that a priori seems important from both perspectives). This isn’t necessarily wrong, but does seem a bit suspicious.
Given that climate change is somewhat GCR/X-risky, it wouldn’t surprise me if it were more valuable on the margin than anti-malaria work. But both the X-risk people and the global poverty people seem sceptical about climate change work; that intersection is somewhat surprising, but I think it is a major part of my own scepticism.
Like, if you have two groups of people, and one group says “we should definitely prioritise A and B, but not C or D, and probably not E either”, and the other group says “we should definitely prioritise C and D, but not A or B, and probably not E either”, it doesn’t seem like it’s looking good for E.
But I might be reading that all wrong, and everyone thinks that climate change is, like, the fourth best cause, and as a result it should get more points even though nobody thinks it’s top? This sounds like one of those moral uncertainty questions.
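For concreteness, here is a toy Borda-style aggregation of the two-camp scenario above. The rankings are hypothetical, invented purely for illustration; the point is that how a compromise cause like E fares depends entirely on how far down each camp puts the other camp’s picks.

```python
# Toy Borda count: top of an n-item ranking gets n points, last gets 1.
# The rankings below are made up to match the two-camp scenario above.
camp1 = ["A", "B", "E", "C", "D"]  # "prioritise A and B, not C or D, probably not E"
camp2 = ["C", "D", "E", "A", "B"]  # "prioritise C and D, not A or B, probably not E"

def borda(*rankings):
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for rank, cause in enumerate(ranking):
            scores[cause] = scores.get(cause, 0) + (n - rank)
    return scores

print(sorted(borda(camp1, camp2).items(), key=lambda kv: -kv[1]))
# [('A', 7), ('C', 7), ('E', 6), ('B', 5), ('D', 5)]
```

Under these particular rankings E beats everything except each camp’s favourite but never overtakes them; for E to come out on top you would need an aggregation rule that penalises strong opposition more heavily than mild indifference.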
While I never considered poverty reduction a top cause, I do consider climate change work to be quite a bit more important than poverty reduction in terms of direct impact, because of GCR-ish concerns (though overall still very unimportant compared to more direct GCR-ish concerns). My guess is that this is also true of most people I work with who are also primarily concerned about GCR-type things, though the topic hasn’t come up very often, so I am not very confident about this.
I do actually think there is value in poverty-reduction-like work, but that comes primarily from an epistemic perspective: poverty reduction requires making many fewer long-chained inferences about the world, in a way that seems more robustly good to me than all the GCR perspectives, and also seems like it would allow better learning about how the world works than working on climate change would. So broadly I think I am more excited about working with people who work on global poverty than people who work on climate change (since I think the epistemic effects dominate the actual impact calculation here).
This is perhaps a bit off-topic, but I have a question about this sentence:
I do actually think there is value in poverty-reduction-like work
Would it be correct to say that poverty-reduction work isn’t less valuable in absolute terms in a longtermist worldview than it is in a near-termist worldview?
One reason that poverty reduction is great is that returns to income seem roughly logarithmic. This applies to both worldviews. The difference in a longtermist worldview is that causes like x-risk reduction gain a lot in value. This makes poverty reduction seem less valuable relative to the best things we can do. But, since there’s no reason to think individual utility functions differ between long- and near-termist worldviews, the absolute utility gain from transferring resources from high-income to low-income people is the same.
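A minimal sketch of that claim, assuming a pure log utility function (my simplification; nothing above commits to this exact functional form):

```python
import math

def log_utility_gain(donor_income, recipient_income, transfer):
    # Net change in total log utility when `transfer` dollars move
    # from a high-income donor to a low-income recipient.
    recipient_gain = math.log(recipient_income + transfer) - math.log(recipient_income)
    donor_loss = math.log(donor_income) - math.log(donor_income - transfer)
    return recipient_gain - donor_loss

# Moving $1,000 from someone on $100,000 to someone on $1,000:
print(log_utility_gain(100_000, 1_000, 1_000))  # ~0.68 log-units of net gain
```

Nothing in the calculation references the worldview doing the evaluating, which is the point: the absolute gain is fixed, and only its rank against other options changes.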
Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):
EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave D aside for the purposes of this discussion.
There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done enough research to determine if this is the case.
The arguments in favor of C being the only area we should be concerned with, or the area we should be most concerned with, are:
I) reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time; suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.
II) popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that, and it might even be needed to build a dedicated community.
For these reasons I think we are biased towards C, and should employ measures to correct for this bias.
None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value add in A, B & C.
Climate change should be researched just as much as A & B. One way of accounting for the bias I see in C is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that, unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and even the best-case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.
What surprises me the most from the discussion of this post (and I realize its readers are a tiny sample of the larger community) is that no one has come back with: “we did the research years ago, we could find no marginal value add. Please read this article for all the details”.