(I’m biased, since I’ve mostly donated to animal welfare and digital minds. I’m also very busy right now, so it’s possible I just haven’t thought your argument through sufficiently.)
If you’re a pure EV maximizer, I agree with your implicit claim that it’s probably best to prioritize AI safety and/or helping steer AI for the benefit of neglected groups (animals and digital minds).
If, like most people, you have some risk aversion, such as wanting high confidence that you’ve made a positive difference, or wanting a greater share of EA community resources devoted to interventions that maximally reduce near-term suffering, then I think animal welfare is by far the best-value option, dwarfing global health and especially an option like becoming a doctor.
So perhaps the crux of your discussion with Bob should be whether he’s a pure EV maximizer or whether he has the kinds of risk aversion that make animal welfare look good. There’s also the option of working in AI safety while donating to animal welfare, so there’s no need to fully commit to one or the other! But I don’t think the Alice analogy goes through: becoming a teacher or doctor doesn’t really make sense under any optimizing view, whereas animal welfare makes sense under many such views.