Note that it’s not just the Doomsday Argument that may give one reason for revising one’s x-risk estimates beyond what is suggested by object-level analyses of specific risks. See this post by Robin Hanson for an enumeration of the relevant types of arguments.
I am puzzled that these arguments do not seem to be influencing many of the most cited x-risk estimates (e.g. Toby’s). Is this because these arguments are thought to be clearly flawed? Or is it because people feel they just don’t know how to think about them, and so they simply ignore them? I would like to see more “reasoning transparency” about these issues.
It’s also worth noting that some of these “speculative” arguments would provide reason for revising not only overall x-risk estimates, but also estimates of specific risks. For example, as Katja Grace and Greg Lewis have noted, a misaligned intelligence explosion cannot be the Great Filter: a misaligned superintelligence would presumably go on to expand through the cosmos itself, and so would not explain the apparent silence the Filter is meant to account for. Accordingly, insofar as the Great Filter influences one’s x-risk estimates, one should shift probability mass away from AI risk and toward risks that are plausible Great Filters.