This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I’m concerned that the neglect of climate change has more to do with its lack of philosophically attractive problems relative to, e.g., AI risk, and less to do with the marginal impact of working on the cause area.
My impression is that few people are researching new interventions in general, whether in climate change or other areas (I could name many promising ideas in global development that haven’t been written up by anyone with a strong connection to EA).
I can’t speak for people who individually choose to work on topics like AI, animal welfare, or nuclear policy, and what their impressions of marginal impact may be, but it seems like EA is just… small, without enough research-hours available to devote to everything worth exploring.
(Especially considering the specialization that often occurs before research topics are chosen; someone who discovers EA in the first year of their machine-learning PhD, after they’ve earned an undergrad CS degree, has a strong reason to research AI risk rather than other topics.)
Perhaps we should be doing more to reach out to talented researchers in fields more closely related to climate change, or students who might someday become those researchers? (As is often the case, “EAs should do more X” means something like “these specific people and organizations should do more X and less Y”, unless we grow the pool of available people/organizations.)
An example of what I had in mind was focusing more on climate change when running events like Raemon’s Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the one in the OP than insights of “equal importance to EA” (however that’s defined) in, e.g., technical AI safety.
Reducing global poverty and improving farming practices lack philosophically attractive problems (for a consequentialist, at least), yet EAs work heavily on them all the same. And climate change does have some philosophical issues around model parameters like discount rates. Admittedly, these are a little messier and more applied in nature than talking about formal agent behavior.
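(To make the discount-rate point concrete, here is a minimal sketch of why that one parameter is so contested. The damage figure is made up, and the rates are only loosely in the ranges associated with the Stern Review and Nordhaus-style models, so treat every number as an illustrative assumption rather than an estimate from any actual model.)

```python
# Sketch: the present value of far-future climate damages is extremely
# sensitive to the discount rate, which is why the parameter is a
# philosophical battleground. All numbers are illustrative assumptions.

def present_value(damage: float, rate: float, years: int) -> float:
    """Value today of `damage` dollars incurred `years` from now."""
    return damage / (1 + rate) ** years

damage = 1e12  # assume $1 trillion of climate damages, 200 years out
for rate in (0.001, 0.014, 0.045):  # near-zero, Stern-ish, Nordhaus-ish
    pv = present_value(damage, rate, 200)
    print(f"discount rate {rate:.1%}: present value ${pv:,.0f}")
```

Under these assumptions, the same trillion dollars of damage is “worth” roughly $800 billion today at a near-zero rate but only on the order of $100 million at a Nordhaus-style rate; the entire gap comes from one ethical choice.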
I think this comes from an initial emphasis on short-term, easily measured interventions (promoted by the “$X saves a life” meme, the drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in “philosophically attractive” fields. It seems plausible to me that climate change has fallen between two stools: not concrete enough to appeal to the instinct for quantified altruism, but not intellectually attractive enough to compete with AI risk and other long-termist interventions.
This seems too ad hoc to be a reliable explanation: it sorts three or four cause areas into two or three categories.