The conclusion in favour of extinction doesn’t necessarily follow, though, depending on the exact framing of the asymmetry and neutrality (although I think it would according to the views CLR defends, but I don’t even think everyone at CLR agrees with those views). See the soft asymmetry and conclusion here:
https://globalprioritiesinstitute.org/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term/
Note that this view does satisfy transitivity, but not the independence of irrelevant alternatives, i.e. whether A is better than B can depend on what other options are available. I think standard intuitions about the repugnant conclusion, which the soft asymmetry avoids (if I recall correctly), do not satisfy the independence of irrelevant alternatives. There are other cases where common intuitions violate independence:
https://forum.effectivealtruism.org/posts/HyeTgKBv7DjZYjcQT/the-problem-with-person-affecting-views?commentId=qPDNPCsWuCF86hsqi
For what it’s worth, this is a newly proposed view, so likely few people know about it, but I suspect it’s closest to a temporally impartial version of most people’s moral intuitions.
Personally, I basically agree with the views in that article by CLR, the asymmetry in particular is one of my strongest intuitions (the hard version, additional happy lives aren’t good), and I think that an empty future would be optimal because of the asymmetry. I do not find this counterintuitive.
I agree with Jack that neutrality about creating happy lives is (probably) a minority view within EA, although I’m not sure. 80% of EAs are consequentialist according to the most recent EA survey, and most of those probably reject neutrality: https://www.rethinkpriorities.org/blog/2019/12/5/ea-survey-2019-series-community-demographics-amp-characteristics
There’s also the possibility of s-risks by omission, like failing to help aliens (causally or acausally), which extinction would exacerbate, although I’m personally skeptical that we would find and help aliens. Some discussion here: https://centerforreducingsuffering.org/s-risk-impact-distribution-is-double-tailed/