I think a key point of contention here, for many people (including me) who would endorse the argument for working to mitigate existential risks, is your background assumption that the risk rate per unit time is constant. I would put substantial probability (at least 5%, as a conservative minimum) on us being in a relatively short period of heightened existential risk, which will then be followed by a much longer and safer period. If you put substantial credence on this, then even small relative reductions of the risk rate in this century still end up looking very good in expected value.
To make this concrete, consider the following simplified model. Suppose that we will face 10 centuries with a 10% chance that we go extinct in each one, followed by a reduction of the per-century risk to 10^-6. (Let's suppose all these probabilities are independent for simplicity.) Under this model, the value of the future would be approximately 3.5×10^14 human lives.
Then, if we could decrease the relative risk of extinction this century by 1 in 100 million, this would be equivalent in expected value to saving approximately 1.4×10^6 lives. (To calculate this, consider the probability that we would have gone extinct this century, but our intervention prevented this, and that we would not have gone extinct in the following dangerous centuries.) Discounting by a further factor of 20 to account for my 5% credence that this model is reasonable, this would give a lower bound on the value of our intervention of approximately 7×10^4 lives.
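To make the arithmetic easy to check, here is a minimal Python sketch of the model above. The per-century population (10^9 lives, chosen so that the long-run figure comes out near 3.5×10^14) and the reading of the "1 in 100 million" as a relative reduction of this century's 10% risk are assumptions of the sketch, so the intervention figure it prints lands in the same ballpark as, rather than exactly on, the 1.4×10^6 quoted above.

```python
# Rough sketch of the toy model: 10 dangerous centuries at 10% extinction
# risk each, then a long era with per-century risk of 10^-6.
# Assumptions of this sketch (not stated in the comment above):
#   - 10^9 lives per century, chosen to roughly match the 3.5e14 figure;
#   - the "1 in 100 million" is a relative reduction of this century's risk.

dangerous_centuries = 10       # length of the period of heightened risk
risk_dangerous = 0.10          # per-century extinction risk during that period
risk_safe = 1e-6               # per-century extinction risk afterwards
lives_per_century = 1e9        # assumed future population per century

# Expected value of the future (in lives), ignoring the comparatively small
# number of lives lived during the dangerous centuries themselves:
# P(survive all dangerous centuries) * expected number of safe centuries * lives per century.
p_survive_dangerous = (1 - risk_dangerous) ** dangerous_centuries
expected_safe_centuries = 1 / risk_safe          # mean of a geometric distribution
value_of_future = p_survive_dangerous * expected_safe_centuries * lives_per_century
print(f"Expected value of the future: {value_of_future:.2e} lives")   # ~3.5e14

# Expected value of shaving 1 part in 10^8 off this century's risk:
# P(extinction this century is averted by the intervention)
#   * P(we also survive the remaining 9 dangerous centuries)
#   * value of the long safe era that follows.
relative_reduction = 1e-8
p_averted = risk_dangerous * relative_reduction
p_survive_rest = (1 - risk_dangerous) ** (dangerous_centuries - 1)
intervention_value = p_averted * p_survive_rest * expected_safe_centuries * lives_per_century
print(f"Expected lives saved by the intervention: {intervention_value:.2e}")  # a few times 1e5

# Dividing by 20 for the 5% model credence gives the conservative lower bound.
print(f"After the factor-of-20 model-uncertainty discount: {intervention_value / 20:.2e}")
```

Varying the population assumption or the exact reading of the risk reduction moves these figures by modest constant factors, but not by the orders of magnitude that the argument turns on.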
These figures are somewhat smaller than the numbers Bostrom gets (partly due to my conservative discounting for model uncertainty), but they are still large enough that I think his core point stands.
I expect our key disagreement may be whether we should assign non-negligible credence to humanity driving the background risk down to a very low level. While this might seem unlikely from our current vantage point, I find it hard to justify putting a credence below 5% on it happening. This largely comes from seeing several plausible pathways by which it could happen (space colonisation, lock-in driven by TAI, etc.), plus quite a lot of epistemic modesty, because predicting the future is hard.
I think David has broadly addressed his views on this in "Existential Risk Pessimism and the Time of Perils" (https://globalprioritiesinstitute.org/wp-content/uploads/David-Thorstad-Existential-risk-pessimism-.pdf), which I believe this moral mathematics series is a follow-up to.