This is a helpful comment—I’ll see if I can reframe some points to make them clearer.
> Human psychology is flawed in such a way that we consistently estimate the probability of existential risk from each cause to be ~10% by default.
I’m actually not assuming human psychology is flawed. The post is meant to be talking about how a rational person (or, at least, a boundedly rational person) should update their views.
On the probabilities: I suppose I’m implicitly invoking both a subjective notion of probability (“What’s a reasonable credence to assign to X happening?” or “If you were betting on X, what betting odds should you be willing to accept?”) and a more objective notion (“How strong is the propensity for X to happen?” or “How likely is X actually?” or “If you replayed the tape a billion times, with slight tweaks to the initial conditions, how often would X happen?”).[1] What it means for something to pose a “major risk,” in the language I’m using, is for the objective probability of doom to be high.
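To make the link between the two notions concrete (the specific numbers here are purely illustrative, not anything argued for in the post): suppose you think there’s a 10% chance that X is the kind of thing with a high objective probability of causing doom, say around 0.5, and a 90% chance that its objective probability is tiny, say around 0.01. Your overall subjective credence in doom would then be

\[
P(\text{doom}) \approx 0.1 \times 0.5 \;+\; 0.9 \times 0.01 \;\approx\; 0.06.
\]

Nearly all of that credence comes from the “it’s a major risk” branch, which is why finding out more about that branch can move your overall credence so much.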
For example, let’s take existential risks from overpopulation. In the 60s and 70s, a lot of serious people were worried about near-term existential risks from overpopulation and environmental depletion. In hindsight, we can see that overpopulation actually wasn’t a major risk. However, this wouldn’t have been clear to someone first encountering the idea and noticing how many experts took it seriously. I think it might have been reasonable for someone first hearing about *The Population Bomb* to assign something on the order of a 10% credence to overpopulation being a major risk.
I think, for a small number of other proposed existential risks, we’re in a similar epistemic position. We don’t yet know enough to say whether any given one is actually a major risk, but we’ve heard enough to justify a significant credence in the hypothesis that it is.[2]
> why is there a 90% chance that more information leads to less worry? Is this assuming that for 90% of risks, they have P(Doom) < 10%, and for the other 10% of risks P(Doom) ≥ 10%?
If you assign a 10% credence to something being a major risk (and so a roughly 90% credence to it not being one), then you should assign roughly a 90% credence to further evidence/arguments helping you see that it’s not a major risk. If you become increasingly confident that it’s not a major risk, then your credence in doom should go down.
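Here’s a toy simulation of that point (the evidence model and the accuracy number are my own assumptions, just to make the arithmetic vivid, not anything from the post). With a 10% prior that X is a major risk, and investigation that almost always points toward the truth, roughly nine runs in ten end with a lower credence in the risk, while the average credence across runs stays pinned at the prior:

```python
import random

# Toy sketch of the updating argument (all numbers are illustrative
# assumptions). The world either contains a major risk or it doesn't;
# the prior credence that it does is 10%. Further investigation yields
# evidence that almost always points toward the true state of the world.

PRIOR = 0.10               # initial credence that X is a major risk
EVIDENCE_ACCURACY = 0.99   # P(evidence says "major" | major risk) and
                           # P(evidence says "not major" | not a major risk)

def posterior(evidence_says_major: bool) -> float:
    """Bayes' rule for the credence that X is a major risk."""
    if evidence_says_major:
        like_major, like_not = EVIDENCE_ACCURACY, 1 - EVIDENCE_ACCURACY
    else:
        like_major, like_not = 1 - EVIDENCE_ACCURACY, EVIDENCE_ACCURACY
    return PRIOR * like_major / (PRIOR * like_major + (1 - PRIOR) * like_not)

def simulate(n_runs: int = 100_000) -> None:
    fell, total = 0, 0.0
    for _ in range(n_runs):
        truly_major = random.random() < PRIOR
        p_says_major = EVIDENCE_ACCURACY if truly_major else 1 - EVIDENCE_ACCURACY
        post = posterior(random.random() < p_says_major)
        fell += post < PRIOR
        total += post
    print(f"share of runs where credence fell: {fell / n_runs:.2f}")   # ~0.89
    print(f"average credence after updating:   {total / n_runs:.3f}")  # ~0.100

if __name__ == "__main__":
    simulate()
```

This is just conservation of expected evidence: the rare runs where the credence jumps up are balanced, in expectation, by the common runs where it drifts down.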
You can also think of the objective probability as, basically, what your subjective credence should become if you gained access to dramatically more complete evidence and arguments.
The ~10% number is a bit arbitrary. I think it’d almost always be unreasonable to be close to 100% confident that something is a major existential risk, after hearing just initial rough arguments and evidence for it. In most cases—like when hearing about possible existential risks from honeybee collapse—it’s in fact reasonable to start out with a credence below 1%. So, when I’m talking about risks that we should assign “something on the order of a 10% credence to,” I’m talking about the absolute most plausible category of risks.