The authors will have a more-informed answer, but my understanding is that part of the answer is “some ‘disentanglement’ work needed to be done w.r.t. biosecurity for x-risk reduction (as opposed to biosecurity for lower-stakes scenarios).”
I mention this so that I can bemoan the fact that we don’t (I think) have a similar list of large-scale, clearly-net-positive projects for the purpose of AI x-risk reduction, in part because the AI situation is more confusing and requires more and harder disentanglement work (some notes on this here and here). The Open Phil “worldview investigations” team (among others) is working on such disentanglement research for AI x-risk reduction, and I would like to see more people tackle this strategic clarity bottleneck — ideally in close communication both with folks who have experience with relatively deep, thorough investigations of this type (a la Bio Anchors and other Open Phil worldview investigation reports), and with folks who will use greater strategic clarity to take large actions.