One such uncertainty is related to the conditional probability of x-risks and their relative order. Imagine that there is a 90 per cent chance of biological x-risk before 2030, but that if it doesn’t happen, there is a 90 per cent chance of an AI-related x-risk event between 2030 and 2050.
In that case, the total probability of extinction is 99 per cent, of which 90 percentage points come from the biological risk and only 9 from AI. In other words, more remote risks are “reduced” in expected size by earlier risks which “overshadow” them.
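As a sketch of the arithmetic behind those figures (the 90 per cent numbers are, of course, purely illustrative):

$$P(\text{extinction}) = P_{\text{bio}} + (1 - P_{\text{bio}})\,P_{\text{AI} \mid \text{no bio}} = 0.9 + 0.1 \times 0.9 = 0.99,$$

so the AI risk contributes only $0.1 \times 0.9 = 0.09$ to the total, even though its conditional probability is the same 90 per cent as the biological risk.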
Another point is that x-risks are, by definition, one-time events, so frequentist probability is not applicable to them.
Yeah, so the first point is what I’m referring to by timelines. And we should also discount the risk of a particular hazard by the probability of achieving invulnerability first.
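A minimal way to write that discount down (this formalisation, and the symbol $P_{\text{inv}}$, are mine rather than anything stated above):

$$P_{\text{effective}}(\text{hazard}) = P(\text{hazard}) \times \bigl(1 - P_{\text{inv}}\bigr),$$

where $P_{\text{inv}}$ is the probability that invulnerability to that hazard is achieved before the hazard can occur.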