tl;dr: I think your strong argument based on MEC (maximizing expected choiceworthiness) depends on pretty controversial assumptions, and your more modest argument doesn’t imply we shouldn’t support family planning at all in our portfolio, all things considered.
Your original argument does depend on MEC and (roughly) risk-neutral EV maximization for the total view, or else high credence (>50%?) in moral views according to which it’s good to make more happy people. You were multiplying the number of lives prevented by the credence in totalism. The standard argument for strong longtermism does essentially the same.
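To make the structure of that multiplication explicit, here’s a toy sketch (the symbols and numbers are mine, purely illustrative, not from your post): under MEC, the expected choiceworthiness of preventing N births is roughly

$$ EC \approx p_{\text{total}} \cdot (-N\bar{w}) + (1 - p_{\text{total}}) \cdot B $$

where p_total is your credence in totalism, w̄ is the average lifetime welfare per prevented person on the total view, and B is the benefit on person-affecting views (e.g. to the parents). Even with p_total well below 0.5, if N·w̄ is large and B is modest, the first term dominates, which is why the conclusion leans so heavily on intertheoretic comparability and risk-neutral EV maximization.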
Similar arguments don’t go through on most other popular approaches to moral uncertainty without high credence in the good of making more happy people. I’d guess most EA interventions are net negative according to some view that isn’t totally implausible, and this isn’t enough to stop us from pursuing them. Some approaches to moral uncertainty, like the property rights approach, could support family planning even if it’s (very) net negative according to a credal supermajority of views. (Maybe that counts against the approach, though!)
Your more modest (non-MEC) argument about recognizing goods (love, friendship) impersonally, on the other hand, would not be very persuasive to most people who endorse person-affecting views (PAVs). It might be the most basic standard objection to them, and it’s pretty close to a direct assertion that PAVs are false. It also wouldn’t even follow that we shouldn’t support family planning, if we consider moral uncertainty and assign some credence to PAVs. That would depend on the specifics.
If the difference between family planning work and the next-best opportunities is small enough for welfare maximizers with person-affecting views, those without PAVs can pay those with PAVs for the difference to prevent family planning work. Or maybe those with PAVs avoid family planning work to cooperate with other EAs, but I don’t think other EAs are very against family planning, and cooperation with non-EAs might actually support family planning instead (perhaps depending on the kind, maybe less so abortion specifically).
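A toy version of that trade (numbers purely illustrative): suppose a welfare maximizer with PAVs values funding family planning at 100 and their next-best option at 95, in their own units, while a totalist disvalues the family planning work at −500 in theirs. The totalist can offer a side payment worth a bit more than the 5-unit gap (say, 10 units’ worth of funding to the PAV donor’s next-best charity) in exchange for the switch:

$$ 95 + 10 = 105 > 100 $$

for the PAV donor, while the cost of the payment to the totalist is (presumably) far smaller by their lights than the 500 units of disvalue they avoid, so both end up better off by their own lights. The trade only fails when the gap between family planning and the next-best option is too large to compensate.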
I agree that we can distinguish between net negative (compared to doing nothing or some other default) and net positive but worse than something else, but the result is the same under consequentialism and MEC (if the argument succeeds): you shouldn’t support either family planning or neartermist work generally, because there are better options, namely doing nothing (for family planning) or extinction risk reduction (for both). Again, under other approaches to moral uncertainty, it isn’t obvious that family planning shouldn’t be supported at all.
(This is ignoring some movement building effects of neartermist work and probably some other indirect effects. Under strong longtermism, maybe GiveWell did a lot of good by funneling people towards AI safety, building general expertise or buying reputation.)