I don’t think longtermism is a neat solution to this problem. If you’re open to letting astronomically large but unlikely scenarios dominate your expected value calculations, then I don’t think this reduces cleanly to simply “reduce existential risk”. The more accurate summary would be: reduce existential risk according to a worldview in which astronomical value is possible, which is likely to yield very different recommendations than attempting to reduce existential risk unconditionally.
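To make the “domination” point concrete with purely hypothetical numbers (not figures from the linked post): suppose one intervention has a $10^{-10}$ chance of securing a future containing $10^{40}$ units of value, while another near-certainly secures $10^{9}$ units. Then

$$
10^{-10} \times 10^{40} \;=\; 10^{30} \;\gg\; 0.9 \times 10^{9},
$$

so once terms like the left-hand side are admitted, the expected value ranking is set almost entirely by the astronomical scenario. “Reduce existential risk” then means, in practice, reducing risk to the specific futures in which that astronomical value is realized, which need not coincide with reducing existential risk full stop.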
https://forum.effectivealtruism.org/posts/RCmgGp2nmoWFcRwdn/should-strong-longtermists-really-want-to-minimize