Thanks for the feedback! And for your helpful suggestions in the spreadsheet itself.
Could you expand on what you mean by "one can argue that the non-conditional estimates are not directly decision-relevant"?
It does seem that, holding constant the accuracy of different estimates, the most decision-relevant estimates would be things like pairings of "What's the total x-risk this century, conditional on the EA community doing nothing about it" with "What's the total x-risk this century, conditional on a decent fraction of EAs dedicating their careers to x-risk reduction".
But it still seems like I'd want to make different decisions if I had reason to believe the risks from AI are 100 times larger than those from engineered pandemics, compared to if I believed the inverse. Obviously, that's not the whole story; it ignores at least tractability, neglectedness, and comparative advantage. But it does seem like a major part of the story.
But it still seems like I'd want to make different decisions if I had reason to believe the risks from AI are 100 times larger than those from engineered pandemics, compared to if I believed the inverse.
Agreed, but I would argue that in this example, acquiring that belief is consequential because it makes you update towards the estimate: "conditioned on us not making more efforts to mitigate existential risks from AI, the probability of an AI-related existential catastrophe is …".
I might have been nitpicking here, but just to give a sense of why I think this issue might be relevant:
Suppose Alice, who is involved in EA, tells a random person Bob: "there's a 5% chance we'll all die due to X". What is Bob more likely to do next: (1) convince himself that Alice is a crackpot and that her estimate is nonsense; or (2) accept Alice's estimate, i.e. accept that there's a 5% chance that he and everyone he cares about will die due to X, no matter what he and the rest of humanity will do? (Because the 5% estimate already took into account everything that Bob and the rest of humanity will do about X.)
Now suppose instead that Alice tells Bob "If humanity won't take X seriously, there's a 10% chance we'll all die due to X." I suspect that in this scenario Bob is more likely to seriously think about X.
I think I see your reasoning. But I might frame things differently.
In your first scenario, if Alice sees that Bob seems to be saying "Oh, well then I've just got to accept that there's a 5% chance", she can clarify: "No, hold up! That 5% was my unconditional estimate. That's what I think is going to happen. But I don't know how much effort people are going to put in. And you and I can help decide what the truth is on that: we can contribute to changing how much effort is put in. My estimate conditional on humanity taking this somewhat more seriously than I currently expect is a 1% chance of catastrophe, and conditional on us taking it as seriously as I think we should, the chance is <0.1%. And we can help make that happen."
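(To spell out how Alice's numbers can fit together, here's a rough law-of-total-probability sketch. The 20% chance of humanity taking this more seriously than currently expected is a made-up figure purely for illustration, as is the ~6% baseline-effort estimate it implies.)

\[
P(\text{catastrophe}) = P(\text{catastrophe} \mid \text{more effort})\,P(\text{more effort}) + P(\text{catastrophe} \mid \text{baseline effort})\,P(\text{baseline effort})
\]
\[
5\% \approx (1\% \times 0.2) + (6\% \times 0.8)
\]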
See also a relevant passage from The Precipice, which I quote here.
Also, I see the difference between the unconditional estimate and the estimate conditional on us doing more than currently expected as more significant than the difference between the unconditional estimate and the estimate conditional on us doing less than currently expected. For me, the unconditional estimate is relevant mainly because it updates me towards beliefs like "There are probably low-hanging fruit for reducing this risk", or "conditioned on us making more efforts to mitigate existential risks from AI than currently expected, the probability of an AI-related existential catastrophe is [notably lower number]."
Sort-of relevant to the above: "If humanity won't take X seriously, there's a 10% chance we'll all die due to X" is consistent with "Even if humanity does take X really, really seriously, there's still a 10% chance we'll all die due to X." So in either case, you plausibly might need to clarify what you mean and what the alternative scenarios are, depending on how people react.
In your first scenario, if Alice sees that Bob seems to be saying "Oh, well then I've just got to accept that there's a 5% chance"
Maybe a crux here is what fraction of people in the role of Bob would instead convince themselves that the unconditional estimate is nonsense (due to motivated reasoning).