Whilst this works for saving individual lives (de Sousa Mendes, starfish), it unfortunately doesn’t work for AI x-risk. Whether or not AI kills everyone is pretty binary. And we probably haven’t got long left. Some donations (e.g. those to orgs pushing for a global moratorium on further AGI development) might incrementally reduce x-risk[1], but I think most won’t (e.g. those funding AI Safety research without a moratorium first[2]). And failing at preventing extinction is not “ok”! We need to be putting much more effort into it.
I guess you are much more optimistic about AI Safety research paying off, if your p(doom) is “only” 10%. But I think the default outcome is doom (p(doom|AGI) ~90%), and we are nowhere near solving alignment/control of ASI (the deep learning paradigm is statistical, and all the doom flows through the cracks of imperfect alignment).
[1] And at least kick the can down the road a few years, if successful.