I agree that externalities should be taken into account in analyses of EA projects, and as aogara’s comment shows, they may be non-negligible (though the order of magnitude in that calculation wasn’t changed). I think it’s important to raise this point.
However:
The adversarial perspective in this post looks very wrong to me. Environmental effects are one part of the calculation. It’s not necessarily good to only do “sustainable” things. E.g. the death of all humans would be much more sustainable, but is something we would fight against.
Moreover, it should be remembered that the people whose children are dying of malaria aren’t on this forum. They’re poor, and they’re already dying in their millions—helping them not die isn’t what’s causing their presence in wetlands. Should we want everyone to go live in cities? Sure. Is there a cost-effective way to advocate for that? I’m not sure.
Presumably, everything else we do also has environmental impacts. Are those big enough to worry about instead of putting our time, money and effort into e.g. research or advocacy? And are different interventions’ impacts very different from each other?
I’m so sorry you find my post ‘adversarial’. I do apologise if that is the impression you have received; it was not intended. By way of explanation: I arrived at Effective Altruism via a path that started with existential risks and then expanded to longtermism, so I suppose I automatically start from a more risk-averse perspective. X-risks and longtermism lead one to think more in terms of the negative effects an intervention could have on vast numbers of future people (since a human extinction event would prevent huge numbers of future people from leading happy, fulfilling lives up to the habitable limit of this planet, around one billion years from now, and prevent even larger, barely comprehensible numbers of future people from expanding to settle habitable planets throughout the universe), and this often seems to conflict with considerations of smaller (in comparison) numbers of people here on this one planet in the short term. It is a quite horrible moral dilemma to weigh these against one another, and one which is very uncomfortable indeed to contemplate or even attempt to quantify. But we should not shrink from this difficult task, I feel.