I haven't yet read this post*, but wanted to mention that readers of this might also find interesting the EA Wiki article Ethics of existential risk and/or some of the readings it links to/collects.
To give readers a sense of what that page covers, here are its first two paragraphs:
The ethics of existential risk is the study of the ethical issues related to existential risk, including questions of how bad an existential catastrophe would be, how good it is to reduce existential risk, why those things are as bad or good as they are, and how this differs between different specific existential risks. There is a range of different perspectives on these questions, and these questions have implications for how much to prioritise reducing existential risk in general and which specific risks to prioritise reducing.
In The Precipice, Toby Ord discusses five different "moral foundations" for assessing the value of existential risk reduction, depending on whether emphasis is placed on the future, the present, the past, civilizational virtues or cosmic significance.[1]
*Though I did read the Introduction, and think these seem like important & correct points. ETA: I've now read the rest and still broadly agree and think these points are important.
Although it seems unlikely that x-risk reduction is the best buy by the lights of a person-affecting view (we should be suspicious if it were), given that ~$10,000 per life-year compares unfavourably to the best global health interventions, it is still a good buy
although it seems unlikely that x-risk reduction would be the best buy by the lights of a person-affecting view, this would not be wildly outlandish.
Thanks for linking; in particular, Greg covered some similar ground to this post in The person-affecting value of existential risk reduction: