“If (toy numbers here) AI risk is 2 orders of magnitude more likely to occur than biorisk, but four orders of magnitude less tractable”. I think that indeed at least 2 or 3 OOMs of difference in tractability would be needed to compensate (especially given that positively shaping biorisks is not extremely positive in itself), and as I argued above, I think that's unlikely.
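To make those toy numbers concrete, here is a rough back-of-the-envelope sketch (my own illustrative figures, not anything from the post), treating the expected impact of marginal work as roughly probability x tractability x value at stake:

```python
# Illustrative toy numbers only, not estimates from the post.
# Expected impact of marginal work ~ P(risk) * tractability * value of averting it.

p_ai, p_bio = 1e-1, 1e-3        # AI risk taken to be 2 OOMs more likely than biorisk
value_ai, value_bio = 1.0, 0.3  # shaping biorisk assumed somewhat less valuable

# How much more tractable biorisk work would need to be to match AI work:
required_tractability_ratio = (p_ai * value_ai) / (p_bio * value_bio)
print(required_tractability_ratio)  # ~333, i.e. between 2 and 3 OOMs
```

With inputs like these, biorisk work would need to be a few hundred times more tractable just to break even, which is where the 2-3 OOM figure comes from.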
“They are of course not, as irrecoverable collapse, s-risks and permanent curtailing of human potential”. I think that irrecoverable collapse is the biggest crux here. What likelihood do you put on it? For the other types of risks, it once again favours working on AI.
Your point below is also about irrecoverable collapse. Personally I put a small weight on it, but I could update pretty quickly because I haven't thought about it very deeply. I just have these few arguments:
- Asymptotically, it would be surprising if we couldn't find other ways to recover. The worlds in which the path our species took was the ONLY way to make progress are a tiny fraction of all possible worlds.
- There are arguments that the huge existing stocks of materials could be used to recover.
- Humans are very adaptable.
I think that biorisks killing >90% of the population are not around the corner and will most likely only become feasible in the second half of the century, which means they don't compete with AGI in terms of timelines. The reasons I think that are:
- Building viruses is still quite hard: doing gain-of-function research well enough to reach very high levels of both lethality and transmissibility is really not trivial.
- The world is still not connected enough for a virus to spread stealthily enough to contaminate everyone.
I actually think our big crux here is the amount of uncertainty. Each of the points I raise, and each new assumption you are putting in, should raise your uncertainty. Given you claim 95% of longtermists should work on AI, high uncertainty does not seem to weigh in favour of your argument. Note that I am not saying, and haven't said, that AI isn't the most important x-risk or that we shouldn't work on it; I'm just arguing against the level of certainty in your post.
I think you would make a good point if the options were close in expected terms, but what matters primarily is the EV, and I expect it to dominate the uncertainty here.
I didn't do the computations, but I feel like if you have something which is OOMs more important than the alternatives, then even with very large error bars you'd probably still put >19/20 of your resources on the highest-EV option (see the rough sketch below).
In the same way, we don't give to another, less cost-effective org to hedge against AMF, even though, because its error bars are so large, that org might have some tail chance of a very significant positive impact on society.
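As a rough illustration of that point (again with made-up numbers), here is a quick Monte Carlo sketch comparing two options whose point estimates are ~2 OOMs apart, each with very wide (roughly factor-of-10) lognormal error bars; the higher-EV option still comes out ahead in the vast majority of draws:

```python
import numpy as np

# Illustrative sketch, not real cost-effectiveness estimates.
rng = np.random.default_rng(0)
n = 100_000

log10_sigma = 1.0  # roughly a factor-of-10 error bar on each option
impact_a = 10 ** rng.normal(2.0, log10_sigma, n)  # point estimate 2 OOMs higher
impact_b = 10 ** rng.normal(0.0, log10_sigma, n)

print((impact_a > impact_b).mean())       # ~0.92: A is better in most draws
print(impact_a.mean() / impact_b.mean())  # ~100x: A's EV still dominates
```

Under assumptions like these, the point-estimate winner is also the better bet in roughly 9 out of 10 draws and keeps a ~100x edge in expectation, which is why I don't think wide uncertainty on its own overturns a multi-OOM EV gap.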