Do you know of work on this off the top of your head? I know Ord has his estimate of 6% extinction in the next 100 years, but I don’t know of attempts to extrapolate this or other estimates.
I think for long timescales, we wouldn’t want to use an exchangeable model, because the “underlying risk” isn’t stationary.
I’m not sure I’ve seen any models where the discrepancy would have been large. I think most models with discount rates I’ve seen in EA use constant yearly discount rates, like “coin flips” (and sometimes the discount isn’t applied like a probability at all, just as a direct multiplier on value, which can be misleading if marginal returns to additional resources are decreasing), although they may do sensitivity analysis on the discount rate with very low minimum discount rates, so the bounds could still be valid. Some examples (a toy numerical comparison follows the links):
https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/
https://docs.google.com/document/d/1cUJlf8Yg4nn-vvSxMvGVdVd_gdmCgNYvWfE1WIJFqFs/edit#heading=h.nayrq3eee7jg (from https://forum.effectivealtruism.org/posts/wyHjpcCxuqFzzRgtX/a-practical-guide-to-long-term-planning-and-suggestions-for )
https://www.philiptrammell.com/static/discounting_for_patient_philanthropists.pdf
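To make the “coin flips” vs. exchangeable contrast concrete, here is a minimal sketch with made-up numbers (the rates and the Beta prior are my own illustrative choices, not taken from the models linked above). A known constant annual risk gives survival probability (1 − r)^t, while an exchangeable model, which treats the underlying annual risk as fixed but unknown, averages survival over that uncertainty and ends up with a much heavier tail at long horizons.

```python
import numpy as np

# Toy comparison (illustrative numbers only, not from the models linked above):
# (1) “Coin flips”: a fixed, known annual extinction probability r, applied every year.
# (2) Exchangeable model: the annual probability R is fixed but unknown, drawn once
#     from a Beta prior, and survival is averaged over that uncertainty.

years = np.array([100, 1_000, 10_000, 100_000])

# Choose r so that about 1 in 6 worlds fail to survive the first century
# (a number picked purely for illustration).
r = 1 - (5 / 6) ** (1 / 100)
surv_constant = (1 - r) ** years

# Exchangeable alternative: R ~ Beta(a, b) with the same mean as r.
a = 0.5
b = a * (1 - r) / r
rng = np.random.default_rng(0)
R = rng.beta(a, b, size=200_000)
surv_exchangeable = ((1 - R) ** years[:, None]).mean(axis=1)

for t, sc, se in zip(years, surv_constant, surv_exchangeable):
    print(f"t = {t:>6} years: constant-rate survival {sc:.2e}, exchangeable {se:.2e}")
```

With these numbers both models give roughly a 5/6 chance of surviving the first century, but at 100,000 years the constant-rate survival probability is essentially zero while the exchangeable one is still a few percent, carried by the possibility that the underlying rate is very low. That is the kind of discrepancy that matters for bounds at very long horizons.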
But I guess if we’re being really humble, shouldn’t we assign some positive probability to our descendants lasting forever (no heat death, etc., and no other civilization at least as good taking our place in our light cone if we go extinct), so that the expected future is effectively infinite in duration? I don’t think most models allow for this. (There are also other potential infinities, like acausal influence in a spatially unbounded universe and the many-worlds interpretation of quantum mechanics, so duration might not be the most important one.)
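To spell out the step from “some positive probability of lasting forever” to an infinite expectation (just the standard argument, not from any particular model): if T is the duration of the future and we assign P(T = ∞) = p > 0, then for every finite N,

$$\mathbb{E}[T] \ \ge\ N \cdot P(T \ge N) \ \ge\ N p,$$

so E[T] = ∞; and if realized value grows without bound in those scenarios, the expected value diverges too.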
You’ve gotten me interested in looking at total extinction risk as a follow-up. Are you interested in working together on it?
Sorry, I wouldn’t have the time, since it’s outside my focus at work (animal welfare), and I already have some other things I want to work on outside of my job.
I know Ord has his estimate of 6% extinction in the next 100 years, but I don’t know of attempts to extrapolate this or other estimates.
This doesn’t change the substance of your point, but Ord estimates a one-in-six chance of an existential catastrophe this century.
As for extrapolating this particular estimate, I think it’s much clearer here that this would be incorrect, since the bulk of the risk in Toby’s breakdown comes from AI, which is a step risk rather than a state risk.
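For what it’s worth, here is a toy way to see the difference (my own framing of Ord’s distinction, with made-up numbers): a state risk is incurred for every period spent in the risky state, so extrapolating it over more centuries compounds it, while a step risk is a one-off probability attached to a transition (such as the development of advanced AI) and doesn’t grow with the length of the horizon.

```python
# Toy contrast between a state risk and a step risk (illustrative numbers only).

def state_risk_total(per_year: float, years: int) -> float:
    """Cumulative catastrophe probability from a constant per-year (state) risk."""
    return 1 - (1 - per_year) ** years

def step_risk_total(transition_p: float) -> float:
    """A step risk is paid once, at the transition, however long the horizon is."""
    return transition_p

for horizon in (100, 1_000, 10_000):
    print(
        f"{horizon:>6} years: "
        f"state risk {state_risk_total(0.0005, horizon):.3f}, "
        f"step risk {step_risk_total(0.1):.3f}"
    )
```

So a century estimate dominated by a step risk shouldn’t just be compounded century after century the way a constant hazard rate would be.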