Good question. Regranting from ALLFED of up to around $100 million would go to existing research labs to research and develop alternate foods, as well as to planning. I mentioned elsewhere on this page that there are catastrophes that could disrupt the global electricity grid, meaning we could not pull fossil fuels out of the ground, resulting in the loss of industrial civilization. These catastrophes include an extreme solar storm, multiple high-altitude detonations of nuclear weapons causing electromagnetic pulses, and a coordinated cyber attack. My preliminary estimate is that $100 million could dramatically increase our resilience to these catastrophes.

Beyond that, I think there are a number of very neglected failure modes of AI that fall between mass unemployment and AGI/superintelligence, something I would call global catastrophic AI. An example is that the coordinated cyber attack mentioned above could take the form of a narrow-AI computer virus. There are a number of other such risks, and Alexey Turchin and I are outlining them in a paper we hope to publish soon. Work on preventing these types of risks could be a high priority, not just because they are neglected, but also because they could happen sooner than AGI.

I also think a lot of meta-EA work is high leverage.