Things I do:
Research agenda: bit.ly/artificial-moral-progress (Technical research + strategy/governance exploration to prevent value lock-in; keen to discuss collaborations)
Previously co-led "AI Alignment: A Comprehensive Survey"
My causes:
AI risk (s/x-risk, harms to nonhumans, socio-economic, …)
EA epistemic health (looking for collaborators on epistemic health infrastructure: bit.ly/website-EHI)
Animal advocacy
Thanks for the answers! They all make sense, and I've upvoted all of them :)
So, as a brief summary:
The action I described in the question is far from optimal under an expected value (EV) framework (CarlShulman & Brian_Tomasik), and
Even if it were optimal, a utilitarian may still have ethical reasons to reject it, if they:
endorse some kind of non-traditional utilitarianism, most notably suffering-focused ethics (SFE) (TimothyChan); or
consider the uncertainty involved to be moral (rather than factual) uncertainty (Brian_Tomasik).