I think what matters is not the probability of these events independent of our influence; it's our causal effect on them. Longtermism rests on the claim that we can predictably affect the long-term future for the better. You say it would be overconfident to assign very low probabilities in certain cases, but that argument cuts both ways: it applies equally to the risk that well-intentioned longtermist interventions backfire, e.g. by accelerating AI development faster than we can align it, by creating a false sense of security and complacency, or because the future turns out worse conditional on our not going extinct. Any intervention can backfire, and most will accomplish little. With longtermist interventions, we may never find out which, since the feedback loops are too weak.
I also disagree that we should commit to sharp probabilities, since doing so means making fairly arbitrary but potentially hugely influential commitments. Sensitivity analysis and robust decision-making under deep uncertainty exist precisely for this situation: rather than staking everything on one point estimate, we can check whether a conclusion holds across the whole range of defensible probability assignments. And requiring sharp probabilities doesn't rule out two people reaching vastly different conclusions from exactly the same evidence, simply because they have different priors or weight the evidence differently.
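To make the point concrete, here is a minimal sketch of the kind of sensitivity analysis I mean (all numbers are hypothetical and chosen purely for illustration, not estimates of any real intervention): instead of committing to one sharp probability, sweep the inputs over defensible ranges and see whether the sign of the expected value survives.

```python
# A minimal sensitivity-analysis sketch. All payoffs and probability
# ranges below are hypothetical placeholders, for illustration only.
import numpy as np

BENEFIT = 1.0   # hypothetical value if the intervention helps
HARM = -2.0     # hypothetical value if it backfires (e.g. net acceleration)

# Instead of one sharp probability, consider ranges of defensible priors.
p_success = np.linspace(0.3, 0.8, 6)    # P(intervention helps)
p_backfire = np.linspace(0.1, 0.5, 5)   # P(backfires | doesn't help)

for p in p_success:
    for q in p_backfire:
        # Remaining mass (1 - p) * (1 - q) accomplishes nothing (value 0).
        ev = p * BENEFIT + (1 - p) * q * HARM
        print(f"P(help)={p:.2f}  P(backfire|no help)={q:.2f}  EV={ev:+.2f}")
```

In this toy example the expected value flips sign within the stated ranges (e.g. it is -0.40 at P(help)=0.30, P(backfire|no help)=0.50, but +0.76 at 0.80 and 0.10). When that happens, a robust decision rule under deep uncertainty would treat the intervention very differently from an analysis committed to a single sharp estimate, which is exactly why the choice of that estimate is so arbitrarily influential.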