The problem is that, for the strength of the claims made here (that longtermists should work on AI above all else, i.e. something like 95% of longtermists should be working on it), you need a tremendous amount of certainty that each of these assumptions holds. As your uncertainty grows, the strength of the argument made here weakens.
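To make that sensitivity concrete (hypothetical numbers, purely for illustration): if the conclusion rests on, say, five independent assumptions each held with 90% confidence, the probability that all of them hold is already only

$$0.9^5 \approx 0.59,$$

well below the near-certainty that a "95% of longtermists" recommendation seems to require.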
By default, though, you shouldn’t have a prior that biorisk is 100x more tractable than AI. Some (important) people think the EA community has had a net negative impact on biorisk because of infohazards, for instance.
Also, I’ll argue below that timelines matter for ITN, and I’m pretty confident the risk per year is very different for the two risks (which favors AI in my model).
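As a rough sketch of why the risk per year matters so much (hypothetical numbers, purely for illustration): with a constant annual risk $p$, the cumulative risk over an $n$-year horizon is

$$1 - (1 - p)^n,$$

so 1%/year gives roughly $1 - 0.99^{30} \approx 26\%$ over 30 years, while 0.01%/year gives only about 0.3%. If short timelines also compress the relevant horizon, that per-year gap ends up dominating the ITN comparison.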
I would be interested in your uncertainties about all of this. If we are basing our ITN analysis on priors, then given the limitations and biases of those priors, I would again be highly uncertain, once more leaning away from the certainty you present in this post.
Basically, as I said in my post, I’m fairly confident about most things except the MVP (minimum viable population), where I almost completely defer to Luisa Rodriguez. Likewise, for the likelihood of irrecoverable collapse, my prior is that it’s very low for the reasons I gave above, but since I haven’t explored the inside-view arguments in favor of it much, I could quickly update upward, and I think that would be the best way for me to update positively on biorisk actually posing an x-risk in the next 30 years.
My view on the 95% is pretty robust to external perturbations, though, because my beliefs favor short timelines (<2030). So I think you’d also need to change my mind about how easy it is to make, by 2030, a virus that kills >90% of people and spreads so fast (or is stealthy enough) that almost everyone gets infected.
I argued here that orders-of-magnitude differences in tractability are rare.