[Question] Should Open Philanthropy build detailed quantitative models which estimate global catastrophic risk?
Open Philanthropy has spent $828 million (in 2022 dollars) in its grantmaking portfolio of global catastrophic risks[1] (GCRs). However, it has not yet published any detailed quantitative models which estimate GCRs (relatedly), which I believe would be important to inform both efforts to mitigate them and cause prioritisation. I am thinking about models like Tom Davidson's, which estimates AI takeoff speeds, but with outputs like the probability of a given annual loss of population or drop in real gross domestic product.
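To illustrate the kind of output I have in mind, below is a minimal Monte Carlo sketch which estimates the probability of at least a given annual loss of population. All distributions and parameters are hypothetical placeholders, chosen only to show the structure of such a model, not to represent anyone's actual estimates.

```python
# Minimal sketch (not OP's or Tom Davidson's model): Monte Carlo estimate of the
# probability of at least a given annual loss of population. All distributions
# and parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Hypothetical annual probability that a catastrophe of any size occurs.
p_catastrophe = rng.beta(2, 200, n_samples)
catastrophe_occurs = rng.random(n_samples) < p_catastrophe

# Hypothetical severity distribution: fraction of population lost, given a catastrophe.
severity = np.clip(rng.lognormal(mean=np.log(0.001), sigma=2.0, size=n_samples), 0, 1)

population_loss = np.where(catastrophe_occurs, severity, 0.0)

for threshold in (0.01, 0.10, 0.50):
    p = (population_loss >= threshold).mean()
    print(f"P(annual population loss >= {threshold:.0%}) = {p:.2e}")
```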
According to Open Philanthropy's grants database on 17 February 2024, accounting for the focus areas of "Biosecurity & Pandemic Preparedness", "Forecasting", "Global Catastrophic Risks", "Global Catastrophic Risks Capacity Building", and "Potential Risks from Advanced AI".
What about "Is Power-Seeking AI an Existential Risk?"?
I don't know if you'd count it as quantitative, but it is detailed.
Thanks for the comment, Ryan. I agree that the report by Joseph Carlsmith is quite detailed. However, I do not think it is sufficiently quantitative. In particular, the probabilities which are multiplied to obtain the chance of an existential catastrophe are directly guessed, as opposed to resulting from detailed modelling (in contrast to the AI takeoff speeds calculated in Tom's report). Joseph was mostly aiming to qualitatively describe the arguments, as opposed to quantifying the risk:

My main hope, though, is not to push for a specific number, but rather to lay out the arguments in a way that can facilitate productive debate.
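To illustrate the distinction I am pointing at, here is a minimal sketch contrasting the multiplication of directly guessed point estimates with propagating uncertainty over each factor. The decomposition into six factors mirrors the structure of Joseph's argument, but the numbers and distributions below are hypothetical placeholders, not his estimates.

```python
# Sketch: multiplying guessed point estimates vs. propagating uncertainty over
# each factor. The six factors and all numbers are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Point-estimate version: guess each probability and multiply.
point_estimates = [0.8, 0.4, 0.6, 0.5, 0.4, 0.3]
p_point = np.prod(point_estimates)

# Uncertainty-propagating version: put a (hypothetical) beta distribution on each
# factor, with mean equal to the point estimate, and take the product per sample.
factor_samples = np.column_stack(
    [rng.beta(10 * p, 10 * (1 - p), n_samples) for p in point_estimates]
)
p_samples = factor_samples.prod(axis=1)

print(f"Product of point estimates: {p_point:.3f}")
print(f"Mean of propagated product: {p_samples.mean():.3f}")
print(f"5th to 95th percentile: {np.percentile(p_samples, 5):.4f} to {np.percentile(p_samples, 95):.4f}")
```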
I think it's quite possible that OP has built quantitative models which estimate GCR, but that they haven't published them (e.g. they use them internally).
Hi Saul,
I assume Open Philanthropy (OP) has built quantitative models which estimate GCR, but probably just simple ones, as I would expect a model like Tom's to be published. There may be concerns about information hazards in the context of bio risk, but OP had an approach to quantify it while mitigating them:

A second, less risky approach is to abstract away most biological details and instead consider general "base rates". The aim is to estimate the likelihood of a biological attack or accident using historical data and base rates of analogous scenarios, and of risk factors such as warfare or terrorism.
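As a minimal sketch of that base rate approach, one could estimate the annual rate of analogous events from historical counts, abstracting away all biological details. The event count, observation window and prior below are hypothetical placeholders, not OP's figures.

```python
# Sketch of a base-rate estimate: annual rate of analogous events inferred from
# historical counts. The count, window and prior are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

n_events = 2   # hypothetical number of analogous historical events
n_years = 80   # hypothetical observation window (years)

# Gamma-Poisson model with a Jeffreys prior on the annual rate:
# posterior is Gamma(n_events + 0.5, rate=n_years).
posterior_rate = rng.gamma(shape=n_events + 0.5, scale=1 / n_years, size=1_000_000)

# Probability of at least one event next year, averaged over rate uncertainty.
p_next_year = 1 - np.exp(-posterior_rate)
print(f"P(at least one event next year) = {p_next_year.mean():.2%}")
```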