I thought the series on Exaggerating the risks quite interesting. In particular, it helped me internalise the preliminary lessons of this post:
First, risk estimates can be inflated by orders of magnitude.
(...)
Second, the evidential basis for existential risk estimates is sometimes very slim.
(...)
Finally, we saw throughout this series what I have called a regression to the inscrutable.
I think there is a strong tendency to give values between 1% and 90% for existential risk until 2100 when one knows very little about the risk, but a very slim evidential basis is also compatible with values many orders of magnitude below 1%.
Update: I have now gone through the first 8 posts of the series Existential risk pessimism, and found it pretty valuable too. As someone who puts a really large weight on expectational total hedonistic utilitarianism, I am not persuaded by common objections to longtermism such as “maybe creating lives is neutral” or “you cannot use expected value when the probabilities are super small”. These are the ones I have typically found on the EA Forum or EA-aligned podcasts, but the series shows that:
Fourth, skepticism about longtermism is for everyone. You don’t have to deny consequentialism, totalism, or any other ethical or decision-theoretic principle to be worried about longtermism. This series is one of a rough patchwork of challenges to longtermism that everyone should take seriously, regardless of their background view.
Thanks Vasco! I appreciate your readership, and you’ve got my view exactly right here. Even a 1% chance of literal extinction in this century should be life-alteringly frightening on many moral views (including mine!). Pushing the risk a fair bit lower than that should be a part of most plausible strategies for resisting the focus on existential risk mitigation.
Thanks for the update, David!