Great post!

But based on Rowe & Beard’s survey (as well as Michael Aird’s database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration.
I don’t think this is true. The Doomsday Argument literature (Carter, Leslie, Gott, etc.) mostly considers the probability of extinction independently of any specific risks, so these authors’ estimates implicitly involve an assessment of unknown risks. Much of this writing predates the well-developed cases for specific risks. Indeed, the Doomsday literature seems to have inspired Leslie, and then Bostrom, to start seriously considering specific risks.
Leslie explicitly considers unknown risks (p.146, End of the World):
Finally, we may well run a severe risk from something-we-know-not-what: something of which we can say only that it would come as a nasty surprise like the Antarctic ozone hole and that, again like the ozone hole, it would be a consequence of technological advances.
As does Bostrom (2002):

We need a catch-all category. It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.
I agree that the literature on the Doomsday Argument involves an implicit assessment of unknown risks, in the sense that any residual probability mass assigned to existential risk after deducting the known x-risks must fall under the unknown risks category. (Note that our object-level assessment of specific risks may cause us to update our prior general risk estimates derived from the Doomsday Argument.)
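To make that bookkeeping explicit (the 0.20 and 0.17 below are purely made-up numbers, and I’m treating the risks as mutually exclusive causes of catastrophe):

```latex
% Purely illustrative numbers; risks treated as mutually exclusive causes.
\[
  P(\text{unknown}) \;=\; P(\text{total x-risk}) \;-\; \sum_i P(\text{known risk}_i),
  \qquad \text{e.g.}\ 0.20 - 0.17 = 0.03 .
\]
```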
Still, Michael’s argument is not based on anthropic considerations, but on extrapolation from the rate of x-risk discovery. These are two very different reasons for revising our estimates of unknown x-risks, so it’s important to keep them separate. (I don’t think we disagree; I just thought this was worth highlighting.)
Related to this, I find anthropic reasoning pretty suspect, and I don’t think we have a good enough grasp on how to reason about anthropics to draw any strong conclusions from it. The same could be said about the choice of priors: for example, in the MacAskill vs. Ord debate, the answer to “are we living at the most influential time in history?” hinges entirely on the choice of prior, but we don’t really know the best way to pick one. This seems related to anthropic reasoning in that the Doomsday Argument depends on using a certain type of prior distribution over the number of humans who will ever live. My general impression is that we as a society don’t know enough about this kind of thing (and I personally know hardly anything about it). However, it’s possible that some people have correctly figured out the “philosophy of priors” and that knowledge just hasn’t fully propagated yet.
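To illustrate how much the conclusion hinges on the prior, here is a minimal two-hypothesis sketch of the Doomsday update; the population figures, the prior choices, and the SSA-style uniform-sampling likelihood are all assumptions made purely for illustration:

```python
# Toy two-hypothesis Doomsday calculation. N_SOON, N_LATE, MY_RANK and the
# priors are made-up illustrative numbers, not anyone's actual estimates.

N_SOON = 2e11   # total humans ever, if extinction comes fairly soon
N_LATE = 2e14   # total humans ever, if we survive far into the future
MY_RANK = 1e11  # roughly the number of humans born so far

def posterior_doom_soon(prior_soon):
    """SSA-style update: treat my birth rank as uniformly drawn from all
    humans who will ever live, so the likelihood of my rank is 1/N."""
    like_soon = (1.0 / N_SOON) if MY_RANK <= N_SOON else 0.0
    like_late = (1.0 / N_LATE) if MY_RANK <= N_LATE else 0.0
    joint_soon = prior_soon * like_soon
    joint_late = (1.0 - prior_soon) * like_late
    return joint_soon / (joint_soon + joint_late)

print(posterior_doom_soon(0.5))                          # equal priors -> ~0.999
print(posterior_doom_soon(N_SOON / (N_SOON + N_LATE)))   # prior proportional to N -> ~0.5
```

With equal priors the birth-rank evidence pushes strongly toward “doom soon”, whereas a prior proportional to N cancels the update entirely, which is roughly the SSA vs. SIA disagreement.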
Thanks — I agree with this, and should have made clearer that I didn’t see my comment as undermining the thrust of Michael’s argument, which I find quite convincing.
Thanks for this perspective! I’ve heard of the Doomsday Argument but I haven’t read the literature. My understanding was that the majority view is that the Doomsday Argument is wrong, but that we just haven’t figured out why. I didn’t realize there was substantial literature on the problem, so I will need to do some reading!
I think it is still accurate to claim that very few sources have considered the probability of unknown risks relative to known risks. I’m mainly basing this on the Rowe & Beard literature review, which is pretty comprehensive AFAIK. Leslie and Bostrom discuss unknown risks, but without addressing their relative probabilities (at least Bostrom doesn’t; I don’t have access to Leslie’s book right now). If you know of any sources that address this and that Rowe & Beard didn’t cover, I’d be happy to hear about them.
Note that it’s not just the Doomsday Argument that may give one reason for revising one’s x-risk estimates beyond what is suggested by object-level analyses of specific risks. See this post by Robin Hanson for an enumeration of the relevant types of arguments.
I am puzzled that these arguments do not seem to be influencing many of the most cited x-risk estimates (e.g. Toby’s). Is this because these arguments are thought to be clearly flawed? Or is it because people feel they just don’t know how to think about them, and so they simply ignore them? I would like to see more “reasoning transparency” about these issues.
It’s also worth noting that some of these “speculative” arguments would provide reason for revising not only overall x-risk estimates, but also estimates of specific risks. For example, as Katja Grace and Greg Lewis have noted, a misaligned intelligence explosion cannot be the Great Filter. Accordingly, insofar as the Great Filter influences one’s x-risk estimates, one should shift probability mass away from AI risk and toward risks that are plausible Great Filters.
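As a toy illustration of that shift (the risk probabilities and the weight on Great-Filter reasoning below are entirely made up, and this is just one way to operationalize the point, not anyone’s actual method):

```python
# Toy renormalization with made-up numbers: a misaligned intelligence explosion
# wouldn't keep the galaxy silent, so it can't be the Great Filter. Whatever
# share of one's extinction credence comes from Great-Filter reasoning should
# therefore be spread only over risks that could plausibly act as a Filter.

object_level = {"AI takeover": 0.5, "engineered pandemic": 0.3, "unknown": 0.2}
can_be_filter = {"AI takeover": False, "engineered pandemic": True, "unknown": True}

w = 0.3  # hypothetical share of credence driven by Great-Filter reasoning

filter_total = sum(p for k, p in object_level.items() if can_be_filter[k])
filter_view = {k: (p / filter_total if can_be_filter[k] else 0.0)
               for k, p in object_level.items()}

blended = {k: (1 - w) * object_level[k] + w * filter_view[k] for k in object_level}
print(blended)  # roughly {'AI takeover': 0.35, 'engineered pandemic': 0.39, 'unknown': 0.26}
```

The point is just that whatever share of one’s credence comes from Great-Filter reasoning gets redistributed over risks that could actually keep the galaxy quiet.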
In line with Matthew’s comment, I think it’s true that several sources discuss the possibility of unknown risks or discuss the total risk level (which should presumably include unknown risks). But I’m also not aware of any sources apart from Ord and Pamlin & Armstrong that give quantitative estimates of unknown risk specifically. (If someone knows of any, please add them to my database!)
I’m also not sure I know of any other sources that even provide relatively specific qualitative statements about the probability of unknown risks causing existential catastrophe. I wouldn’t count the statements from Leslie and Bostrom, for example.
So I think Michael’s claim is fair, at least if we interpret it as “no other sources appear to have clearly addressed the likelihood of unknown x-risks in particular, which implies that most others do not give unknown risks serious consideration.”