I realize this is a higher-level discussion, but I am curious: by research into catastrophic risks, do you mean AI specifically? Because I would be disheartened if you suspected that research on asteroid deflection, probabilities of high-energy physics catastrophes, how to prevent global totalitarianism, how to prevent nuclear conflict, how to reduce nuclear stockpiles, how to ramp up conventional or alternative food supplies in a catastrophe, how to make global cooperation in a catastrophe more likely, prioritization within GCR, etc. is about as likely to have a positive effect as a negative one.
I don’t believe that of any of those areas, though the concern is most plausible for AI and war prevention.