GCR preparedness: Fund the researchers of this paper to do more research on global food security, à la ALLFED.
Thanks. That paper made the point that larger agricultural disasters tend to be correlated with one another, which contributes to the fat tail of food catastrophes (the quick simulation below illustrates the effect). Funding research on the problem is good, but funding the solutions (e.g. ALLFED) is better, because the solutions are so much more neglected.
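To make that correlation point concrete, here is a minimal Monte Carlo sketch: when regional yield shocks are independent they mostly cancel in the global total, but even moderate pairwise correlation makes extreme global shortfalls far more likely. The model, region count, and correlation value are all my own illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Illustrative sketch (made-up numbers, not from the paper): ten regions
# receive unit-variance yield shocks, once independent and once with
# pairwise correlation 0.5. We compare how often the global average shock
# is as bad as a full one-region standard deviation.

rng = np.random.default_rng(0)
n_regions, n_years = 10, 200_000

def global_anomaly(pairwise_corr):
    # Covariance matrix: 1s on the diagonal, `pairwise_corr` elsewhere.
    cov = np.full((n_regions, n_regions), pairwise_corr)
    np.fill_diagonal(cov, 1.0)
    shocks = rng.multivariate_normal(np.zeros(n_regions), cov, size=n_years)
    return shocks.mean(axis=1)  # average regional shock = global anomaly

for corr, label in [(0.0, "independent"), (0.5, "correlated")]:
    anomaly = global_anomaly(corr)
    # Tail probability: global anomaly worse than -1 (one regional sd).
    p = np.mean(anomaly < -1.0)
    print(f"{label:11s}: P(global anomaly < -1) ~= {p:.4f}")
```

With these made-up parameters, the chance of a global shortfall worse than one regional standard deviation rises from well under a tenth of a percent in the independent case to roughly nine percent in the correlated case, which is the fat-tail effect the paper describes.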
I take your point. I'm inclined to agree with you that ALLFED should be prioritized over this, given that you're the expert. But let's say you're fully funded and we gave you more money to regrant in this cause area: would you give it to these researchers for more research? If not, where?
Good question. Regranting from ALLFED up to around $100 million would go to existing research labs for researching and developing alternate foods, as well as for planning. I mentioned elsewhere on this page that there are catastrophes that could disrupt the global electricity grid, meaning we could not pull fossil fuels out of the ground, which would mean the loss of industrial civilization. These catastrophes include an extreme solar storm, multiple high-altitude detonations of nuclear weapons causing electromagnetic pulses, and a coordinated cyber attack. My preliminary estimate is that $100 million could dramatically increase our resilience to these catastrophes. Beyond that, I think there are a number of very neglected failure modes of AI that fall between mass unemployment and AGI/superintelligence, something I would call global catastrophic AI. An example is that the coordinated cyber attack mentioned above could take the form of a narrow-AI computer virus. But there are a number of other risks, and Alexey Turchin and I are outlining them in a paper we hope to publish soon. Work on preventing these types of risks could be a high priority, not just because they are neglected but also because they could happen sooner than AGI. I also think a lot of meta-EA work is high leverage.