I think I see your reasoning. But I might frame things differently.
In your first scenario, if Alice sees that Bob seems to be saying "Oh, well then I've just got to accept that there's a 5% chance", she can clarify: "No, hold up! That 5% was my unconditional estimate. That's what I think is going to happen. But I don't know how much effort people are going to put in. And you and I can help decide what the truth is on that; we can contribute to changing how much effort is put in. My estimate conditional on humanity taking this somewhat more seriously than I currently expect is a 1% chance of catastrophe, and conditional on us taking it as seriously as I think we should, the chance is <0.1%. And we can help make that happen."
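For concreteness, here is one way Alice's numbers can hang together, via the law of total probability. The 5%, 1%, and <0.1% figures are from the scenario above; the remaining conditional estimate (~6.8%) and the weights on how much effort ends up being made are purely hypothetical, chosen only so the arithmetic works out:

\[
\begin{aligned}
P(\text{catastrophe}) &= \sum_i P(\text{catastrophe} \mid \text{effort level } i)\, P(\text{effort level } i) \\
&\approx 0.068 \times 0.70 \;+\; 0.01 \times 0.25 \;+\; 0.001 \times 0.05 \;\approx\; 0.05,
\end{aligned}
\]

where the three terms correspond to roughly the currently expected level of effort, somewhat more effort than expected, and effort as serious as Alice thinks it should be. The point is just that the unconditional 5% is a weighted average over possible futures, and shifting probability mass toward the high-effort futures pulls it down.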
See also a relevant passage from The Precipice, which I quote here.
Also, I see the difference between the unconditional estimate and the estimate conditional on us doing more than currently expected as more significant than the difference between the unconditional estimate and the estimate conditional on us doing less than currently expected. For me, the unconditional estimate is relevant mainly because it updates me towards beliefs like "There is probably low-hanging fruit for reducing this risk", or "Conditional on us making more efforts to mitigate existential risks from AI than currently expected, the probability of an AI-related existential catastrophe is [notably lower number]."
Sort-of relevant to the above: "If humanity won't take X seriously, there's a 10% chance we'll all die due to X" is consistent with "Even if humanity does take X really, really seriously, there's still a 10% chance we'll all die due to X." So in either case, you may well need to clarify what you mean and what the alternative scenarios are, depending on how people react.
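To spell out that consistency claim in the same notation (the 10% figures are just the ones from the example statements above, not estimates of mine):

\[
P(\text{doom} \mid \text{humanity doesn't take X seriously}) = 0.10
\quad\text{and}\quad
P(\text{doom} \mid \text{humanity takes X really seriously}) = 0.10
\]

is a perfectly coherent assignment; the first claim on its own puts no constraint on the second, which is why the clarification about alternative scenarios can be needed either way.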
"In your first scenario, if Alice sees that Bob seems to be saying 'Oh, well then I've just got to accept that there's a 5% chance'..."
Maybe a crux here is what fraction of people in the role of Bob would instead convince themselves that the unconditional estimate is nonsense (due to motivated reasoning).