I disagree with the way “neglectedness” is conceptualised in this post.
Climate change is not neglected among the demographics EA tends to recruit from. There are many, many scientists, activists, lawyers, policymakers, journalists, researchers of every stripe working on this issue. It has comprehensively (and justifiably) suffused the narrative of modern life. As another commenter here puts it, it is “The Big Issue Of Our Time”. The same simply cannot be said of other cause areas, despite many of those problems matching or exceeding climate change in scale.
When I was running a local group, climate change issues were far and away the #1 thing new potential members wanted to talk about / complained wasn’t on the agenda. This happened so often that “how do I deal with all the people who just want to talk about recycling” was a recurring question among other organisers I knew. I’d be willing to bet that >80% of other student group organisers have had similar experiences.
This post itself argues that EA is losing potential members by not focusing on climate change. But this claim is in direct tension with claims that climate change is neglected. If there are droves of potential EAs who only want to talk about climate change, then there are droves of new people eager to contribute to the climate change movement. The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention.
Many of the claims cited here as reasons to work on climate change could be applied equally well to other cause areas. I don’t think there’s any reason to think simple models capture climate change less well than they do biosecurity, great power conflict, or transformative AI: these are all complex, systemic, “wicked problems” with many moving parts, where failure would have “a broad and effectively permanent impact”.
This is why I object so strongly to the “war” framing used here. In (just) war, there is typically one default problem that must be solved, and that everyone must co-ordinate on solving or face destruction. But here and now we face dozens of “wars”, all of which need attention, and many of which are far more neglected than climate change. Framing climate change as the default problem, and working on other cause areas as defecting from the co-ordination needed to solve it, impedes the essential work of cause-impartial prioritisation that is fundamental to doing good in a world like ours.
Framing climate change as the default problem, and working on other cause areas as defecting from the co-ordination needed to solve it, impedes the essential work of cause-impartial prioritisation that is fundamental to doing good in a world like ours.
Thanks for your feedback. I think it’s worth emphasizing that the title of this post is “Climate Change Is Neglected By EA”, rather than “Climate Change Is Ignored By EA”, or “Climate Change Is the Single Most Important Cause Above All Others”. I am strongly in favor of cause-impartial prioritisation.
In “Updated Climate Change Problem Profile” I argued that Climate Change should receive an overall score of 24 rather than 20. That’s a fairly modest increase.
This post itself argues that EA is losing potential members by not focusing on climate change. But this claim is in direct tension with claims that climate change is neglected. If there are droves of potential EAs who only want to talk about climate change, then there are droves of new people eager to contribute to the climate change movement. The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention.
I don’t agree with this “direct tension”. I’m arguing that (A) Climate Change really is more important than EA often makes it out to be, and that (B) EA would benefit from engaging with people about climate change from an EA perspective. Perhaps as part of this engagement you can encourage them to also consider other causes. However, starting out from an EA position which downplays climate change is both factually wrong and alienating to potential EA community members.
Thanks for the reply. As I recently commented on a different post, engagement with commenters is a crucial part of a post like this, and I’m glad you’re doing that even though a lot of the response has been negative (and some of it has been mean, which I don’t support). That isn’t easy.
I think it’s worth emphasizing that the title of this post is “Climate Change Is Neglected By EA”, rather than “Climate Change Is Ignored By EA”, or “Climate Change Is the Single Most Important Cause Above All Others”. I am strongly in favor of cause-impartial prioritisation.
This sounds good to me. However, you don’t actually give much indication in the post of how you think climate change stacks up against other cause areas. Though you do implicitly do so here:
In “Updated Climate Change Problem Profile” I argued that Climate Change should receive an overall score of 24 rather than 20. That’s a fairly modest increase.
For comparison, on 80K’s website right now, AI risk, global priorities research and meta-EA are currently at 26, biosecurity and ending factory farming are at 23, and nuclear security and global health are at 21. So your implicit claim is that, on the margin, climate change is less important than AI and GPR, slightly more important than biosecurity and farmed animal welfare, and much more important than nuclear security and global health (of the bednets and deworming variety). Does that sound right to you? That isn’t a gotcha, I am genuinely asking, though I do think some elaboration on the comparisons would be valuable.
I don’t agree with this “direct tension”. I’m arguing that (A) Climate Change really is more important than EA often makes it out to be, and that (B) EA would benefit from engaging with people about climate change from an EA perspective. Perhaps as part of this engagement you can encourage them to also consider other causes. However, starting out from an EA position which downplays climate change is both factually wrong and alienating to potential EA community members.
I think I’m sticking to the “direct tension” claim. If oodles of smart, motivated young people are super-excited about climate change work, a decent chunk of them will end up doing climate change work. We’re proposing that these are the sorts of people who would otherwise make a good fit for EA, so we can assume they’re fairly smart and numerate. I’d guess they’d have less impact working on climate change outside EA than within it, but they won’t totally waste their time. So if there are lots of these people, then lots of valuable climate change work will be done with or without EA’s involvement. Conversely, if there aren’t lots of these people (which seems false), the fact that we’re alienating (some of) them by not prioritising climate change isn’t a big issue.
I think you could argue that climate change work remains competitive with other top causes despite not being neglected (I’m sceptical, but it wouldn’t astound me if this were the case). I think you could argue that the gains in recruitment from small increases in perceived openness to climate change work are worth it, despite climate change not being neglected (this is fairly plausible to me). But I don’t think you can simultaneously argue that climate change is badly neglected, and that we’re alienating loads of people by not focusing on it.
For comparison, on 80K’s website right now, AI risk, global priorities research and meta-EA are currently at 26, biosecurity and ending factory farming are at 23, and nuclear security and global health are at 21. So your implicit claim is that, on the margin, climate change is less important than AI and GPR, slightly more important than biosecurity and farmed animal welfare, and much more important than nuclear security and global health (of the bednets and deworming variety). Does that sound right to you? That isn’t a gotcha, I am genuinely asking, though I do think some elaboration on the comparisons would be valuable.
That does sound about right to me.
If oodles of smart, motivated young people are super-excited about climate change work, a decent chunk of them will end up doing climate change work. We’re proposing that these are the sorts of people who would otherwise make a good fit for EA, so we can assume they’re fairly smart and numerate. I’d guess they’d have less impact working on climate change outside EA than within it, but they won’t totally waste their time. So if there are lots of these people, then lots of valuable climate change work will be done with or without EA’s involvement. Conversely, if there aren’t lots of these people (which seems false), the fact that we’re alienating (some of) them by not prioritising climate change isn’t a big issue.
My claim is that EA currently (1) downplays the impact of climate change (e.g. focusing on x-risk, downplaying mainstream impacts) and (2) downplays the value of working on climate change (e.g. low neglectedness, low tractability). If you agree that (1, 2) are true, then EA is misleading its members about climate change and biasing them to work on other issues.
Perhaps I have misunderstood your argument, but I think you’re saying that (1, 2) don’t matter because lots of people already care about climate change, so EA doesn’t need to influence more people to work on climate change. I would argue that regardless of how many people already care about climate change, EA should seek to communicate accurately about the impact and importance of work on different cause areas.
Could you elaborate a bit on why? This doesn’t sound insane to me, but it is a pretty big disagreement with 80,000 Hours, and I am more sympathetic to 80K’s position on this.
My claim is that EA currently (1) downplays the impact of climate change (e.g. focusing on x-risk, downplaying mainstream impacts) and (2) downplays the value of working on climate change (e.g. low neglectedness, low tractability). If you agree that (1, 2) are true, then EA is misleading its members about climate change and biasing them to work on other issues.
Perhaps I have misunderstood your argument, but I think you’re saying that (1, 2) don’t matter because lots of people already care about climate change, so EA doesn’t need to influence more people to work on climate change. I would argue that regardless of how many people already care about climate change, EA should seek to communicate accurately about the impact and importance of work on different cause areas.
My claim is that the fact that so many (smart, capable) people care about climate change work directly causes it to have lower expected value (on the margin). The “impact and importance of work on different cause areas” intimately depends on how many (smart, capable) people are already working or planning to work in those areas, so trying to communicate that impact and importance without taking into account “how many people already care” is fundamentally misguided.
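One way to make this concrete (a quick sketch of the standard logarithmic-returns model behind neglectedness; the formalisation is mine and purely illustrative): if total progress on a problem grows roughly logarithmically in the cumulative resources R devoted to it, then

$$U(R) = k \ln R \quad\Longrightarrow\quad \frac{dU}{dR} = \frac{k}{R}$$

so the marginal value of one more unit of resources is inversely proportional to what has already been committed. Doubling the number of (smart, capable) people already at work roughly halves the expected impact of the next entrant, holding scale and tractability fixed.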
The claim that climate change is a major PR issue for EA, if true, is evidence that EA’s position on climate change is (in at least this one respect) correct.
The claim that climate change is a major PR issue for EA, if true, is evidence that EA’s position on climate change is (in at least this one respect) correct.
I’d like to extend my previous model to have three steps:
(1) EA downplays the impact of climate change (e.g. focusing on x-risk, downplaying mainstream impacts).
(2) EA downplays the value of working on climate change (e.g. low neglectedness, low tractability).
(3) EA discourages people from working on climate change in favor of other causes.
I think you are arguing that since lots of people already care about climate change, (3) is a sensible outcome for EA. To put this more explicitly, I think you are supportive of (2), likely because you perceive climate change as not neglected. As I’ve stated in this post, I think it is possible to argue that climate change is not actually neglected. In fact you suggested below some good arguments for this:
“Even though lots of work has gone into solving climate change, the problem is so vastly complex and multidimensional that there’s still lots of low-hanging fruit left unpicked, so tractability remains high (and scale is large, so the two together are sufficient for impact).”
“Although lots of work has gone into solving climate change, partial solutions aren’t very valuable: most of the impact comes from the last few % of solving the problem. Also [for some reason] we can’t expect future work to continue at the same rate as current work, so more marginal work now is especially valuable.”
“While lots of resources have already gone into solving climate change, the problem is actually getting bigger all the time! So even though neglectedness is low and falling, returns from scale are high and rising, so marginal work on the problem remains valuable.”
But let’s put that to one side for a moment, because we haven’t talked about (1) yet. Even if you think the best conclusion for EA to make is (3), I still think it’s important that this conclusion is visibly drawn from the best possible information about the expected impacts of climate change. Sections (1), (4), (5), and (7) in my post speak directly to this point.
I look at the way that EA talks about climate change and I think it misses some important points (particularly see section (4) of my post). These gaps in EA’s approach to climate change cause me to have lower trust in EA cause prioritization, and at the more extreme end make me think “EA is the community who don’t seem to care as much about climate change—they don’t seem to think the impact will be so bad”. I think that’s a PR issue for EA.
In fact you suggested below some good arguments for this
Two things:
I made those up on the spur of the moment. Possibly you’re just being polite, but I would be very suspicious if all three turned out to be good arguments supporting work on climate change. I said immediately below that I don’t especially believe any of them in the case of climate change.
More importantly, the whole point of coming up with those arguments was that they didn’t depend on claims about neglectedness! None of those are arguments that climate change is neglected, they are potential shapes of arguments for why you might want to prioritise it despite it not being neglected.
I feel like we’re still not connecting regarding the basic definition of neglectedness. You seem to be mixing it up with scale and tractability in a way that isn’t helpful to precise communication.
This seems a bit of an obvious point to make, but there are many more people working on a) global poverty; b) animal welfare; c) wildlife conservation; d) nuclear proliferation; e) biosafety and f) tech safety than there are EAs in the world. This movement’s claim is that it can find ways to 100x the impact of skill and funding. In every other field it does so by researching the field in as much detail as possible and encouraging risk tolerance towards unproven interventions showing promise. It often finds neglected interventions / solutions / research areas, not causes. In climate change it counts the lawyers already engaged in changing the recycling laws of San Francisco as sufficient for the task at hand.
Wildlife conservation and wild animal welfare are emphatically not the same thing. “Tech safety” (which isn’t a term I’ve heard before, and which on googling seems to mostly refer to tech in the context of domestic abuse) and AI safety are just as emphatically not the same thing.
Anyway, yes, in most areas EAs care about they are a minority of the people who care about that thing. Those areas still differ hugely in terms of neglectedness, both in terms of total attention and in terms of expertise. Assuming one doesn’t believe that EAs are the only people who can make progress in an area, this is important.
In climate change it counts the lawyers already engaged in changing the recycling laws of San Francisco as sufficient for the task at hand.
This is (a) uncharitable sarcasm, and (b) obviously false. There are enormous numbers of very smart scientists, journalists, lawyers, activists, and so on working on climate change. Every general science podcast I listen to covers climate change regularly, and they aren’t doing so to talk about Bay-Area over-regulation. It’s been a major issue in domestic politics in every country I’ve lived in for over a decade. The consensus among left-leaning intellectual types (who are the main group EA recruits from) in favour of acting against climate change is total.
Now, none of this means there’s nothing EA could contribute to the climate field. Probably there’s plenty of valuable work that could be done. If more climate-change work started showing up on the EA Forum, I’d be fine with that the same way I’m fine with EAs doing work in poverty, animal welfare, mental health, and lots of other areas I don’t personally prioritise. But would I believe that climate change work is the most good they could do? In most cases, probably not.
The assumption is not that people outside EA cannot do good; it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have gone, and will continue to go, in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for incredible giving / working opportunities. Any shallow dive into this cause reveals that further attention and concern are warranted. I do not know what the results of a deeper dive might show, but I am fairly confident we can be at least as effective working on climate change as on some of the other present-day welfare causes.
I do believe that there is a strong bias towards the far future in many EA discussions. I am not unsympathetic to the rationale behind this, but since it seems to override everything else, and present-day welfare (as your reply implies) is merely tolerated, I am cautious about it.
The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention. - Will
Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have gone, and will continue to go, in that direction even without GiveWell—Uri
I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I’m curious if this is the crux – I would guess that Will agrees (certain types of) climate change work is plausibly as good as anti-malaria, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider that to be more impactful than climate change.
It is worth noting that a lot of core EAs have pivoted from global poverty to X-risk, a major shift in priorities, without ever changing their position on climate change (something that a priori seems important from both perspectives). This isn’t necessarily wrong, but does seem a bit suspicious.
Given the fact that climate change is somewhat GCR/X-risky, it wouldn’t surprise me if it were more valuable on the margin than anti-malaria work. But both the X-risk people and the global poverty people seem sceptical about climate change work; that intersection is somewhat surprising, but I think is a major part of my own scepticism.
Like, if you have two groups of people, and one group says “we should definitely prioritise A and B, but not C or D, and probably not E either”, and the other group says “we should definitely prioritise C and D, but not A or B, and probably not E either”, it doesn’t seem like it’s looking good for E.
But I might be reading that all wrong, and everyone thinks that climate change is, like, the fourth best cause, and as a result it should get more points even though nobody thinks it’s top? This sounds like one of those moral uncertainty questions.
While I never considered poverty reduction a top cause, I do consider climate change work to be quite a bit more important than poverty reduction in terms of direct impact, because of GCR-ish concerns (though overall still very unimportant compared to more direct GCR-ish concerns). My guess is that this is also true of most people I work with who are also primarily concerned about GCR-type things, though the topic hasn’t come up very often, so I am not very confident about this.
I do actually think there is value in poverty-reduction-like work, but that comes primarily from an epistemic perspective: poverty reduction requires making many fewer long-chained inferences about the world, in a way that seems more robustly good to me than all the GCR perspectives, and also seems like it would allow better learning about how the world works than working on climate change. So broadly I think I am more excited about working with people who work on global poverty than people who work on climate change (since I think the epistemic effects dominate the actual impact calculation here).
This is perhaps a bit off-topic, but I have a question about this sentence:
I do actually think there is value in poverty-reduction-like work
Would it be correct to say that poverty-reduction work isn’t less valuable in absolute terms in a longtermist worldview than it is in a near-termist worldview?
One reason that poverty-reduction is great is because returns to income seem roughly logarithmic. This applies to both worldviews. The difference in a longtermist worldview is that causes like x-risk reduction gain a lot in value. This makes poverty reduction seem less valuable relative to the best things we can do. But, since there’s no reason to think individual utility functions are different in long- and near-termist worldviews, in absolute terms the utility gain from transferring resources from high-income to low-income people is the same.
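To put toy numbers on this (my own illustrative figures, not anything from the discussion above): with log utility, the welfare gain from transferring an amount t to someone with annual consumption c is

$$\Delta U = \ln(c + t) - \ln(c) = \ln\!\left(1 + \frac{t}{c}\right)$$

so $100 to someone living on $500 a year is worth about ln(1.2) ≈ 0.18, while the same $100 to someone on $50,000 a year is worth about ln(1.002) ≈ 0.002, a roughly ninety-fold difference. And that difference is identical under long- and near-termist worldviews.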
Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):
EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave D aside for the purposes of this discussion.
There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done enough research to determine if this is the case.
The arguments in favor of C being the only area we should be concerned with, or the area we should be most concerned with, are:
I) reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time, suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.
II) popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that, and it might even be needed to build a dedicated community.
III) For these reasons I think we are biased towards C, and should employ measures to correct for this bias.
None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value add in A, B & C.
Climate change should be researched just as much as A & B. One way of accounting for the bias I see in C is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and even the best-case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.
What surprises me the most from the discussion of this post (and I realize its readers are a tiny sample size of the larger community) is that no one has come back with: “we did the research years ago, we could find no marginal value add. Please read this article for all the details”.