Thank you for this. I think it's worth discussing which kinds of moral views are compatible with EA. For example, in chapter 2 of The Precipice, Toby Ord enumerates five moral foundations for caring about existential risk (also discussed in this presentation):
1. Our concern could be rooted in the present: the immediate toll such a catastrophe would take on everyone alive at the time it struck. (common-sense ethics)
2. It could be rooted in the future, stretching so much further than our own moment: everything that would be lost. (longtermism)
3. It could be rooted in the past, in how we would fail every generation that came before us. (Burkean "partnership of generations" conservatism)
4. We could also make a case based on virtue: by risking our entire future, humanity displays a staggering deficiency of patience, prudence, and wisdom. (virtue ethics)
5. We could make a case based on our cosmic significance: this might be the only place in the universe where there's intelligent life, the only chance for the universe to understand itself, and we are the only beings who can deliberately shape the future toward what is good or just.
So I find it strange and disappointing that we make little effort to promote longtermism to people who don't share the EA mainstream's utilitarian foundations.
Similarly, I think it's worth helping conservationists figure out how to conserve biodiversity as efficiently as possible, perhaps alongside other values such as human and animal welfare, even though biodiversity is not something inherently valued by utilitarianism and seems to conflict with improving wild animal welfare (WAW). I have moral uncertainty about the relative importance of biodiversity and WAW, so I'd like to see society try to optimize both and come to a consensus about how to navigate the tradeoffs between the two.