As I once mentioned here, computing basis points is much more compatible with a diverse range of views if we think in terms of relative survival probability (where a 100% increase means the probability of survival doubled) rather than absolute survival probability (where a 100-percentage-point increase means one universe saved).
To illustrate the issue with the latter, suppose a project can decrease some particular x-risk from 1% to 0%, and this x-risk is uncorrelated with others. If there are no other x-risks, this project brings total x-risk from 1% to 0%, so we gain 100 basis points. If other x-risks are 99% likely, this project brings total x-risk from 99.01% to 99%, so we gain 1 basis point. Thus whether a project passes the latter threshold depends on the likelihood of other x-risks. (But the project increases the former, relative survival probability by about 1% in both cases: survival goes from 99% to 100% in the first case and from 0.99% to 1% in the second, a factor of 1/0.99 either way.)
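For concreteness, here is a small Python sketch of that arithmetic (the 1% project-addressable risk and the 0%/99% other-risk figures are just the illustrative numbers above):

```python
# Illustrative numbers only: a project eliminates a 1% x-risk, and other,
# independent x-risks are either absent (0%) or 99% likely.
project_risk = 0.01

for other_risk in (0.0, 0.99):
    # Total x-risk before and after the project, assuming independence.
    risk_before = 1 - (1 - project_risk) * (1 - other_risk)
    risk_after = 1 - (1 - other_risk)

    basis_points_gained = (risk_before - risk_after) * 10_000          # absolute
    relative_survival_gain = (1 - risk_after) / (1 - risk_before) - 1  # relative

    print(f"other risk {other_risk:.0%}: {basis_points_gained:.0f} bp gained, "
          f"survival probability up {relative_survival_gain:.2%}")
```

The absolute gain swings from 100 basis points to 1 basis point depending on the other risks, while the relative gain in survival probability is about 1% either way.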
Yeah, this is a really interesting challenge. I haven’t formed an opinion about whether I prefer relative probability reduction or absolute basis points, and chose the latter partly for simplicity and partly for ease of comparability with places like Open Phil. I can easily imagine this being a fairly important high-level point, and your arguments do seem reasonable.
>If other x-risks are 99% likely, this project brings total x-risk from 99.01% to 99%
Shouldn’t this be “from 100% to 99%”?
By “this x-risk is uncorrelated with others” I meant that the risks are independent and so “from 99.01% to 99%” is correct. Maybe that could be clearer; let me know if you have a suggestion to rephrase...
I’m confused as to why you use a change of 1pp (from 1% to 0%) in the no-other-x-risks case, but a change of 0.01pp (from 99.01% to 99%) in the other-x-risks case.
Suppose for illustration that there is a 1% chance of bio-x-risk in (the single year) 2030 and a 99% chance of AI-x-risk in 2040 (assuming that we survive past 2030). Then we survive both risks with probability (1-.01)*(1-.99) = .0099. Eliminating the bio-x-risk, we survive with probability 1-.99 = .01.
But if there is no AI risk, eliminating biorisk changes our survival probability from .99 to 1.
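A quick Python sketch of that two-period example (the 1% bio and 99% AI figures are just the hypothetical numbers above):

```python
# Hypothetical numbers from the example: a 1% bio-x-risk in 2030, then a 99%
# AI-x-risk in 2040 conditional on surviving 2030. The risks are independent.
p_bio, p_ai = 0.01, 0.99

survive_both = (1 - p_bio) * (1 - p_ai)  # 0.99 * 0.01 = 0.0099
survive_without_bio_risk = 1 - p_ai      # bio risk eliminated: 0.01
print(survive_both, survive_without_bio_risk)

# With no AI risk at all, eliminating the bio risk instead moves the
# survival probability from 0.99 to 1.0.
print(1 - p_bio, 1.0)
```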
I see, thanks!
So in the two-risk case, P(die) = P(die from bio OR die from AI) = P(bio) + P(AI) - P(bio AND AI) = (using independence) 0.01 + 0.99 − 0.01*0.99 = 1 − 0.0099 = 0.9901.
If P(die from bio)=0, then P(die) = P(die from AI) = 0.99.
Exactly.
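As a sanity check, the inclusion-exclusion form above and the product-of-survival-probabilities form give the same answer (a minimal sketch using the same illustrative 1%/99% numbers):

```python
p_bio, p_ai = 0.01, 0.99

# P(die) via inclusion-exclusion, using independence for P(bio AND AI).
p_die_inclusion_exclusion = p_bio + p_ai - p_bio * p_ai

# P(die) as 1 minus the probability of surviving both risks.
p_die_from_survival = 1 - (1 - p_bio) * (1 - p_ai)

assert abs(p_die_inclusion_exclusion - p_die_from_survival) < 1e-12
print(p_die_inclusion_exclusion)  # ≈ 0.9901
```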