I think robustness (or ambiguity aversion) favours reducing extinction risks without increasing s-risks and reducing s-risks without increasing extinction risks, or reducing both overall, perhaps through a portfolio of interventions. This would favour AI safety work, especially work focused on cooperation, possibly other work on governance and conflict, and most other work to reduce s-risks (since it does not increase extinction risks), at least if we believe CRS and/or CLR that these interventions do in fact reduce s-risks. Brian Tomasik comes to an overall positive view of MIRI on his recommendations page, and Raising for Effective Giving, like CLR a project of the Effective Altruism Foundation, recommends MIRI in part because “MIRI’s work has the ability to prevent vast amounts of future suffering.”
Some work to reduce extinction risks, such as biosecurity and nuclear risk reduction, seems reasonably likely to me to increase s-risks on its own. There may be counterarguments related to improving cooperation, but I’m skeptical of them.
For what it’s worth, I’m not personally convinced that any particular AI safety work reduces s-risks overall, because it’s not clear it reduces s-risks directly by more than it increases them through reducing extinction risks, although I would expect CLR and CRS to be better donation opportunities for this purpose given their priorities. I haven’t spent a lot of time thinking about this, though.