I think timidity, as described in your first link, e.g. with a bounded social welfare function, is basically okay, but it’s a matter of intuition (similarly, discomfort with Pascalian problems is a matter of intuition). However, it does mean giving up separability in probabilistic cases, and it may instead support x-risk reduction (depending on the details).
Also, questions of fanaticism may be relevant for these x-risks, since it’s not the probability of the risks themselves that matters, but the difference you can make. There’s also ambiguity, since it’s possible to do more harm than good, by increasing the risk instead, or by increasing other risks (e.g. reducing extinction risks may increase s-risks, and you may be morally uncertain about how to weigh these).
I would also recommend:

https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/

https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-stochastic-dominance-as-a-general-decision-theory/