Yeah, I think it would be good to introduce premisses relating to when AI and bio capabilities that could cause an x-catastrophe ("crazy AI" and "crazy bio") will be developed. To elaborate on a (protected) tweet of Daniel's.
Suppose that you have equally long timelines for crazy AI and for crazy bio, but that you are uncertain about them, and that they're uncorrelated, in your view.
Suppose also that we modify 2 into “a non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe, conditional on there existing both crazy AI and crazy bio, and conditional on there being no other x-catastrophe”. (I think that captures the spirit of Ryan’s version of 2.)
Suppose also that you think that, in the world where crazy AI gets developed first, there is a 90% chance of an accidental AI x-catastrophe, and that in 50% of the worlds where there isn't an accidental x-catastrophe, there is a non-accidental AI x-catastrophe, meaning the overall risk is 95% (in line with 3). In the world where crazy bio is instead developed first, there is a 50% chance of an accidental x-catastrophe (by the modified version of 2), plus some chance of a non-accidental x-catastrophe, meaning the overall risk is a bit more than 50%.
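Just as a sanity check on how those overall-risk figures combine (a minimal sketch using the made-up numbers above; the 10% bio non-accidental figure is an arbitrary placeholder for "some chance"):

```python
# Combine an accidental risk with a non-accidental risk that only applies
# in the worlds where the accidental catastrophe doesn't happen.
def overall_risk(p_accidental, p_non_accidental_given_no_accident):
    return p_accidental + (1 - p_accidental) * p_non_accidental_given_no_accident

# AI-first world: 90% accidental, 50% non-accidental in the remaining worlds.
print(overall_risk(0.90, 0.50))  # 0.95

# Bio-first world: 50% accidental, plus "some chance" of non-accidental
# (10% here is made up), giving a bit more than 50%.
print(overall_risk(0.50, 0.10))  # 0.55
```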
Regarding the timelines of the technologies, one way of thinking would be to say that there is a 50/50 chance that we get crazy AI or crazy bio first, meaning there is a 47.5% chance of an AI x-catastrophe and a >25% chance of a bio x-catastrophe (plus additional small probabilities of the slower crazy technology killing us in the worlds where we survive the first one; but let's ignore that for now). That would mean that the ratio of AI x-risk to bio x-risk is more like 2:1. However, one might also think that there is a significant number of worlds where both technologies are developed at the same time, in the relevant sense, and that your original argument could potentially be applied as-is to those worlds. If so, that would increase the ratio of AI x-risk to bio x-risk.
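And the unconditional numbers under that 50/50 split, again with the made-up figures (a sketch; bio's overall risk is taken at its "a bit more than 50%" lower bound, and the slower technology's later contribution is ignored as in the text):

```python
# Unconditional risks under a 50/50 split over which crazy technology comes first.
p_ai_first = 0.5
p_ai_risk_if_first = 0.95    # overall risk in the AI-first world above
p_bio_risk_if_first = 0.50   # "a bit more than 50%"; lower bound used here

p_ai_xcat = p_ai_first * p_ai_risk_if_first           # 0.475
p_bio_xcat = (1 - p_ai_first) * p_bio_risk_if_first   # 0.25
print(p_ai_xcat, p_bio_xcat, p_ai_xcat / p_bio_xcat)  # ratio ~1.9, i.e. roughly 2:1
```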
In any event, this is just to spell out that the time factor is important. These numbers are made up solely for the purpose of showing that, not because I find them plausible. (Potentially my example could be better/isn’t ideal.)