I agree that the literature on the Doomsday Argument involves an implicit assessment of unknown risks, in the sense that any residual probability mass assigned to existential risk after deducting the known x-risks must fall under the unknown risks category. (Note that our object-level assessment of specific risks may cause us to update our prior general risk estimates derived from the Doomsday Argument.)
Still, Michael’s argument is not based on anthropic considerations, but on extrapolation from the rate of x-risk discovery. These are two very different reasons for revising our estimates of unknown x-risks, so it’s important to keep them separate. (I don’t think we disagree; I just thought this was worth highlighting.)
Related to this, I find anthropic reasoning pretty suspect, and I don’t think we have a good enough grasp on how to reason about anthropics to draw any strong conclusions from it. The same could be said about choices of priors — e.g., the MacAskill vs. Ord debate, where the answer to “are we living at the most influential time in history?” completely hinges on the choice of prior, yet we don’t really know the best way to pick one. This is related to anthropic reasoning in that the Doomsday Argument depends on using a certain type of prior distribution over the number of humans who will ever live. My general impression is that we as a society don’t know enough about this kind of thing (and I personally know hardly anything about it). However, it’s possible that some people have correctly figured out the “philosophy of priors” and that knowledge just hasn’t fully propagated yet.
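To make the prior-sensitivity point concrete, here is a minimal sketch (my own illustration, with made-up numbers, not anything from the original discussion) of the standard Doomsday-style Bayesian update: given two hypotheses about the total number of humans who will ever live, our observed birth rank favors the smaller total via the likelihood P(rank | N) = 1/N, but how much that matters depends entirely on the prior we started with.

```python
# Illustrative sketch: the Doomsday Argument's conclusion hinges on the
# prior over the total number of humans who will ever live.
# All numbers are hypothetical and chosen for illustration only.

# Roughly our birth rank: on the order of 100 billion humans born so far.
BIRTH_RANK = 100e9

# Two toy hypotheses about the total number of humans who will ever exist.
HYPOTHESES = {"doom_soon": 200e9, "doom_late": 200e12}

def posterior_doom_soon(prior_doom_soon):
    """Posterior P(doom_soon | our birth rank), using the self-sampling
    likelihood P(rank | N) = 1/N for rank <= N (true here for both N)."""
    priors = {"doom_soon": prior_doom_soon, "doom_late": 1 - prior_doom_soon}
    joint = {h: priors[h] * (1.0 / n) for h, n in HYPOTHESES.items()}
    return joint["doom_soon"] / sum(joint.values())

# A 50/50 prior makes early doom overwhelmingly likely after the update...
print(posterior_doom_soon(0.5))    # ~0.999
# ...but a prior heavily favoring a long future largely absorbs the update.
print(posterior_doom_soon(0.001))  # ~0.5
```

The same evidence (our birth rank) yields near-certain early doom under one prior and roughly even odds under another, which is the sense in which the argument "hinges on the choice of prior."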
Thanks — I agree with this, and should have made clearer that I didn’t see my comment as undermining the thrust of Michael’s argument, which I find quite convincing.