Following Brian Tomasik’s thinking, I believe that one of the big issues for existential risk is international stability and cooperation to deal with AI arms races and similar problems. To answer this question, I asked something along those lines on Reddit and got an interesting (and not particularly optimistic) answer:
https://www.reddit.com/r/IRstudies/comments/3jk0ks/is_the_economic_development_of_the_global_south/
Maybe one could argue that getting through the unsteady period of economic development quickly will hasten the progress of the international community, whereas delaying it would merely postpone the same problems of instability and competition. I don’t know. I wish we had more international relations experts in the EA community.
I’ve been thinking lately that nuclear war is probably a more pressing x-risk than AI, both right now and for the near term. Nuclear weapons already exist, and the American/Russian situation has been slowly deteriorating for years, while we are (likely) decades away from needing to solve the global coordination problems of an AI race.
I am not asserting that AI coordination isn’t critically important. I am asserting that if we nuke ourselves first, it probably won’t matter.
You really don’t need to give so many disclaimers for the view that nuclear war is an important global catastrophic risk, or for the view that the instantaneous risk from existing nuclear arsenals is much higher than that from future technologies (which pose ~0 instantaneous risk); everyone should agree with both. Nor for thinking that nuclear interventions might have better returns today.
You might be interested in reading OpenPhil’s shallow investigation of nuclear weapons policy, and their preliminary prioritization spreadsheet of GCRs.
OpenPhil doesn’t provide recommendations for individual donors, but you could get started on picking a nuclear charity from their investigation (among other things). If you do look into it, it would be great to post about your research process and findings.
That is a great question you posted on Reddit!
There are so many important unanswered questions relevant to EA charitable giving. Maybe an effective meta-EA charity would be a platform where EAs could pose research questions they want answered and offer money based on how much they would be willing to give to have the question answered to a given standard of quality.
I feel inclined to say that we should crowdsource research for those answers and save our money for important causes. For instance, LessWrong did a whole series of interviews with computer scientists about AI risks just by emailing them (http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI). Experts are pretty damn expensive.
That being said, I do think we have a bit of a problem with major conclusions about artificial intelligence, economics, etc. being drawn by people who lack graduate education and recognition in those fields. Being an amateur interdisciplinary thinker is nice, but we have a surplus of such people in EA. Maybe we could do more movement building in targeted intellectual communities.
I wasn’t thinking that the money would go towards hiring experts. Rather, something like: “I’ll donate $X to GiveDirectly if someone changes my view on this important question that will decide whether I want to donate my money to Org 1 or Org 2.”