I’ve been thinking lately that nuclear non-proliferation is probably a more pressing x-risk than AI at the moment and for the near term. We have nuclear weapons and the American/Russian situation has been slowly deteriorating for years. We are (likely) decades away from needing to solve AI race global coordination problems.
I am not asserting that AI coordination isn’t critically important. I am asserting that if we nuke ourselves first, it probably won’t matter.
You really don’t need to give so many disclaimers for the view that nuclear war is an important global catastrophic risk, and that the instantaneous risk is much higher for existing nuclear arsenals than for future technologies (which have ~0 instantaneous risk); everyone should agree with this. Nor do you need them for thinking that nuclear interventions might have better returns today.
You might be interested in reading OpenPhil’s shallow investigation of nuclear weapons policy. And their preliminary prioritization spreadsheet of GCRs.
OpenPhil doesn’t provide recommendations for individual donors, but you could get started on picking a nuclear charity from their investigation (among other things). If you do look into it, it would be great to post about your research process and findings.