Luisa’s post addresses our chance of getting killed ‘within decades’ of a civilisational collapse, but that’s not the same as the chance that a collapse prevents us from ever becoming a happy intergalactic civilisation, which is the end state we’re seeking. If you think there’s a 90% chance we would eventually recover from a collapse, then the effective x-risk of that collapse is 0.1 * <its probability of happening>. One order of magnitude doesn’t seem like that big a deal here, given all the other uncertainties around our future.
That’s right! I just think that the base rate for “civilisation collapse prevents us from ever becoming a happy intergalactic civilisation” is very low. And multiplying any probability by 0.1 does matter here, because when we’re talking about AGI, we’re talking about risks that many people put at >=10% (I put the likelihood higher than that, but Toby Ord’s 10% is sufficient).
So even if you assume biorisks are equivalent to AGI risks in every other respect (which is the point I argue against), you still need biorisks to be >5% likely to lead to a civilisational collapse by the end of the century for my point not to hold, i.e. that 95% of longtermists should work on AI (19 out of 20 people, plus the assumption of linear returns for the first few thousand people).
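A minimal sketch of the arithmetic behind that 5% threshold, using the numbers from this exchange (Toby Ord’s 10% AGI estimate, the 0.1 collapse-to-extinction factor from the quoted comment, and the 19/20 allocation) — the variable names are illustrative, not from the original:

```python
# Back-of-the-envelope check of the >5% collapse-probability threshold.
p_agi_xrisk = 0.10        # AGI x-risk estimate (Toby Ord's 10%)
collapse_to_xrisk = 0.1   # assumed chance a collapse permanently curtails our future
bio_share = 1 / 20        # share of longtermists allocated to biorisk (1 in 20)

# Under proportional allocation, biorisk's effective x-risk must reach
# 1/20 of AGI's for the 95%-on-AI split to be the break-even point:
required_effective_xrisk = p_agi_xrisk * bio_share            # 0.005
required_collapse_prob = required_effective_xrisk / collapse_to_xrisk

print(required_collapse_prob)  # 0.05, i.e. collapse must be >5% likely
```

If biorisks are less than 5% likely to cause a civilisational collapse this century, their effective x-risk stays below 1/20 of AGI’s, and the 95%-on-AI allocation follows under the stated linear-returns assumption.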