Actually, my summary of that post initially dropped the obligation frame for exactly these reasons :P (Not intentionally, since I try to write objective summaries, but I basically ignored the obligation point while reading and so forgot to include it in the summary.)
I do think the opportunity frame is much more reasonable in that setting, because “human safety problems” are something you might have been resigned to in the past, and AI design is a surprising option that might let us fix them, so it really does sound like good news. On the other hand, the surprising part about effective altruism is “people are dying for such preventable reasons that we can stop it for thousands of dollars”, which is bad news that’s really hard to be excited by.