Sorry in what sense does Shruti say that EA solutions aren’t effective in the case of air pollution? Do you mean that the highest ‘EV’ interventions are likely to be ones with high uncertainty about whether they work or not?
(I don’t think of EA as being about achieving high confidence in impact; if anything I’d associate EA with high-risk, hits-based giving.)
[disclaimer that I’m not Shruti so can only offer my interpretation of the argument she makes]
I think this paragraph from her piece does a good job of distilling the challenge of air pollution -
“It takes decades, maybe centuries to develop high state capacity that can tackle commons problems, mitigate pollution and create a world-class clean public transportation system. And this requires increases in economic growth and government revenue as well as well aligned political incentives. The problem is there is no simple solution that can be easily implemented. Unlike malaria, the impact of air pollution cannot be avoided by handing out air purifiers. They don’t even make a dent in lowering the hazardous AQI in Delhi. The problem can only be solved through better governance mechanisms and innovation. Innovation can take the form of better construction technology that doesn’t contribute as much to particulate matter pollution. Or by developing cleaner fuel for vehicles. Or through better carbon capture and particulate matter capture technology. But none of this is legible or predictable.”
I think the last sentence is the crux: solutions to air pollution are neither legible nor predictable. I’d claim that legibility and predictability are pretty central to EA-based giving, at least GiveWell-flavour EA, because they bear on how measures of tractability and impact are calculated.
On predictability, I’d be interested to know why you’d associate EA with high-risk, hits-based giving. From reading GiveWell cost-effectiveness reports (on iron fortification, say), I recall there being discounts based on how uncertain the members evaluating the grant thought the evidence was. I thought this was fairly standard practice across such grants.
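To make the point concrete, here is a minimal sketch of the kind of uncertainty discounting being described, i.e. scaling a raw modelled cost-effectiveness estimate by a subjective confidence factor. The function name and all numbers are hypothetical illustrations, not GiveWell’s actual model or figures:

```python
def adjusted_cost_effectiveness(raw_value_per_dollar: float,
                                evidence_confidence: float) -> float:
    """Discount a raw modelled cost-effectiveness estimate by a
    subjective confidence factor in [0, 1] reflecting how strong
    the evaluators judge the underlying evidence to be."""
    if not 0.0 <= evidence_confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    return raw_value_per_dollar * evidence_confidence

# Hypothetical numbers: a well-evidenced intervention modelled at
# 10x cash transfers vs a weakly evidenced one modelled at 30x.
strong_evidence = adjusted_cost_effectiveness(10.0, 0.9)  # 9.0
weak_evidence = adjusted_cost_effectiveness(30.0, 0.2)    # 6.0
```

Under this kind of adjustment the nominally higher-EV but poorly evidenced option can end up ranked below the well-evidenced one, which is why this style of evaluation leans high-confidence rather than hits-based.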
On legibility, I think Shruti is making the point that it would have been really difficult to know the precise interventions needed to stop Delhi’s air pollution problem from ever getting as bad as it did, and it still isn’t totally clear what the best ones are. What we do seem to know is that improving state capacity (strongly linked to economic growth) and driving technological innovation (funding carbon capture startups, for instance) are likely the best approaches to tackling air pollution in India. And neither of those two options seems particularly EA-aligned.
Ah OK, I agree it’s not that consistent with GiveWell’s traditional approach.
I think of high-confidence GiveWell-style giving as just one possible approach one might take in the pursuit of ‘effective altruism’, and it’s one that I personally think is misguided for the sorts of reasons Shruti is pointing to.
High-confidence (e.g. GiveWell) and hits-based giving (e.g. Open Phil, all longtermism) are both large fractions of the EA-inspired portfolio of giving and careers.
So really I should just say that there’s nothing like a consensus on whether EA implies going for high-confidence or low-confidence strategies (or something in the middle, I guess).
(Incidentally from my interview with Elie I’d say GiveWell is actually now doing some hits-based giving of its own.)
Thanks for this, Robert, it gives me more context on some of the EA flavours. I have a sense that even OP hits-based giving isn’t fully aligned with the point Shruti is making (after all, EA is not EV, i.e. Emergent Ventures), but I wouldn’t be sure how to articulate the difference. I think it could be a great conversation for your podcast to host :))