AI Seems a Lot More Risky Than Biotech
We tend to think that AI x-risk mostly comes from accidents because, well, few people are omnicidal, and alignment is hard, so an accident is more likely. In bio, on the other hand, we tend to think that it would be very hard for a natural or accidental event to cause the extinction of all humanity. But the arguments we use for AI ought also to imply that the risks from intentional use of biotech are quite slim.
We can state this argument more formally using three premises:
1. The risk of an accidental bio x-catastrophe is much lower than that of a non-accidental bio x-catastrophe.
2. A non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe.
3. More than 90% of AI x-risk comes from accidents.
It follows from (1)-(3) that x-risk from AI is more than 10x that from biotech. We ought to believe (1) and (3) for the reasons given in the first paragraph. (2) is, in my opinion, a topic too fraught with infohazards to be fit for public debate. That said, it seems plausible, given that AI is generally more powerful than biotech. So I lean toward thinking the conclusion is correct.
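To make the arithmetic behind that inference concrete, here is a minimal Python sketch. All the specific numbers (the 0.20 total AI risk, the 92% accident share, the 5% factor for premise (1)) are illustrative assumptions of mine, not estimates from the post; only the inequalities come from the premises.

```python
# Illustrative check of the inference from premises (1)-(3), with made-up numbers.

ai_total = 0.20                      # hypothetical total AI x-risk
accident_share = 0.92                # premise (3): >90% of AI x-risk is accidental
ai_non_accidental = (1 - accident_share) * ai_total   # 0.016

bio_non_accidental = ai_non_accidental        # premise (2), taking the worst case (equality)
bio_accidental = 0.05 * bio_non_accidental    # premise (1): accidental bio risk is much lower
bio_total = bio_non_accidental + bio_accidental

print(ai_total / bio_total)  # ~11.9, i.e. AI x-risk comes out >10x bio x-risk
```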
In The Precipice, the risk from AI was rated as merely 3x greater (Toby Ord estimates roughly a 1 in 10 chance of existential catastrophe from unaligned AI versus 1 in 30 from engineered pandemics). But if the difference is >10x, then almost all longtermists who are not much more competent in bio than in AI should prefer to work on AI safety.
I like this approach, even though I’m unsure of what to conclude from it. In particular, I like the introduction of the accident vs non-accident distinction. It’s hard to get an intuition of what the relative chances of a bio-x-catastrophe and an AI-x-catastrophe are. It’s easier to have intuitions about the relative chances of:
- Accidental vs non-accidental bio-x-catastrophes
- Non-accidental AI-x-catastrophes vs non-accidental bio-x-catastrophes
- Accidental vs non-accidental AI-x-catastrophes
That’s what you’re making use of in this post. Regardless of what one thinks of the conclusion, the methodology is interesting.
Note that premise 2 strongly depends on the probability of crazy AI being developed in the relevant time period.
Yeah, I think it would be good to introduce premises relating to when the AI and bio capabilities that could cause an x-catastrophe (“crazy AI” and “crazy bio”) will be developed, to elaborate on a (protected) tweet of Daniel’s.
Suppose that you have equally long timelines for crazy AI and for crazy bio, but that you are uncertain about them, and that in your view they’re uncorrelated.
Suppose also that we modify 2 into “a non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe, conditional on there existing both crazy AI and crazy bio, and conditional on there being no other x-catastrophe”. (I think that captures the spirit of Ryan’s version of 2.)
Suppose also that, in the world where crazy AI gets developed first, there is a 90% chance of an accidental AI x-catastrophe, and that in 50% of the worlds where there isn’t an accidental x-catastrophe, there is a non-accidental AI x-catastrophe, meaning the overall risk is 95% (in line with (3)). In the world where crazy bio is instead developed first, there is a 50% chance of a non-accidental x-catastrophe (by the modified version of (2)), plus some small chance of an accidental x-catastrophe, meaning the overall risk is a bit more than 50%.
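As a quick check of those per-world numbers, here is a short sketch. The numbers are made up, as in the comment; the 2% accidental bio figure is my own illustrative stand-in for premise (1)'s "much lower".

```python
# Per-world risks in the example above (all numbers made up).

# World where crazy AI arrives first:
p_accident_ai = 0.90                 # accidental AI x-catastrophe
p_nonaccident_given_safe = 0.50      # non-accidental, among worlds with no accident
risk_ai_world = p_accident_ai + (1 - p_accident_ai) * p_nonaccident_given_safe
print(risk_ai_world)  # 0.95, in line with premise (3)

# World where crazy bio arrives first:
p_nonaccident_bio = 0.50   # modified premise (2), taking equality
p_accident_bio = 0.02      # hypothetical small accidental component, per premise (1)
risk_bio_world = p_accident_bio + (1 - p_accident_bio) * p_nonaccident_bio
print(risk_bio_world)  # 0.51, "a bit more than 50%"
```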
Regarding the timelines of the technologies, one way of thinking would be to say that there is a 50/50 chance that we get crazy AI or crazy bio first, meaning there is a 47.5% chance of an AI x-catastrophe and a >25% chance of a bio x-catastrophe (plus additional small probabilities of the slower crazy technology killing us in the worlds where we survive the first one; but let’s ignore that for now). That would mean that the ratio of AI x-risk to bio x-risk is more like 2:1. However, one might also think that there is a significant number of worlds where both technologies are developed at the same time, in the relevant sense, and your original argument could potentially be used as it is regarding those worlds. If so, that would increase the ratio between AI and bio x-risk.
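And the 50/50 weighting, continuing the sketch above with the same made-up per-world risks:

```python
# 50/50 timeline weighting, ignoring residual risk from the slower technology.
p_ai_first = 0.5
risk_ai_world, risk_bio_world = 0.95, 0.51   # from the per-world sketch above

ai_x_risk = p_ai_first * risk_ai_world            # 0.475
bio_x_risk = (1 - p_ai_first) * risk_bio_world    # 0.255

print(ai_x_risk / bio_x_risk)  # ~1.86, i.e. roughly 2:1
```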
In any event, this is just to spell out that the time factor is important. These numbers are made up solely to illustrate that point, not because I find them plausible. (My example isn’t ideal and could probably be improved.)