Thanks for this post—I think this is a very important topic! I largely agree that this argument deserves substantial weight, and that we should probably think more about unknown existential risks and about how we should respond to the possibility of them.
> In general, the more recently we discovered a particular existential risk, the more probable it appears. I proposed that this pattern occurs because technological growth introduces increasingly-significant risks. But we have an alternative explanation: Perhaps all existential risks are unlikely, but the recently-discovered risks appear more likely due to bad priors plus wide error bars in our probability estimates.
(I’m not sure if the following is disagreeing with you or just saying something separate yet consistent)
I agree that recently discovered risks seem the largest source of existential risk, and that two plausible explanations of that are “technological growth introduces increasingly significant risks” and “we have bad priors in general and wider error bars for probability estimates about recently discovered risks than about risks we discovered earlier”. But it also seems to me that we can believe all that and yet still think there’s a fairly high chance we’ll later realise that the risks we recently discovered are less risky than we thought, and that future unknown risks in general aren’t very risky, even if we didn’t have bad priors.
Essentially, this is because, if we have a few decades or centuries without an existential catastrophe (and especially if this happens despite the development of highly advanced AI, nanotechnology, and biotech), that'll represent new evidence not just about those particular risks, but also about how risky advanced technologies are in general and about how resilient civilization is to major changes.
Thousands of years in which asteroids could have wiped out humanity but didn't should update us towards thinking that extinction risk from asteroids is low. I think that, in the same way, developing more things in the reference class of “very advanced tech with hard-to-predict consequences” without an existential catastrophe occurring should update us towards thinking that the existential risk from developing further things like that is relatively low.
I think we can make that update in 2070 (or whatever) without saying our beliefs in 2020 were foolish, given what we knew. And I don’t think we should make that update yet, as we haven’t yet seen that evidence of “general civilizational stability” (or something like that).
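To make the direction of that update a bit more concrete, here's a minimal toy sketch assuming a simple beta-binomial model (the model choice and the numbers are purely illustrative, not something the argument above depends on):

```python
def posterior_mean_after_survival(n: int) -> float:
    # Uniform Beta(1, 1) prior on the per-period probability p of an existential
    # catastrophe from some risk class; observing n periods with no catastrophe
    # gives a posterior of Beta(1, n + 1), whose mean is 1 / (n + 2)
    # (Laplace's rule of succession).
    return 1.0 / (n + 2)

# Asteroids: thousands of "survived" periods push the estimate very low.
print(posterior_mean_after_survival(3000))   # ~0.0003

# "Very advanced tech with hard-to-predict consequences": only a handful of
# survived episodes so far, so the estimate is still high and each further
# catastrophe-free episode moves it a lot.
for n in (1, 3, 10, 30):
    print(n, posterior_mean_after_survival(n))
```

(This is only meant to show why a few more catastrophe-free decades with advanced tech would be a comparatively large update, not to put actual numbers on the risks.)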
Does that make sense?