FWIW, basically the same argument would also undermine almost all global health work and other neartermist work. Why work on saving hundreds or thousands or even millions of lives when you can reduce the probability of extinction and marginally increase the probability of 10^50 (or whatever) happy conscious beings coming into existence?
The difference is mostly a matter of degree: for extinction prevention compared to family planning, we have a much smaller probability of the high-payoff possibility (successfully preventing extinction, combined with the total view) and a much larger payoff conditional on success.
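To make the structure of that comparison concrete, here is a minimal sketch with purely hypothetical numbers (none of the figures below are estimates; they only show how a tiny probability times an astronomical payoff can swamp a large probability times a modest one under risk-neutral EV maximization):

```python
# Illustrative only: all numbers are hypothetical placeholders.

# Family planning: high probability that the intervention works as intended,
# but a comparatively modest payoff (in arbitrary value units).
p_family_planning = 0.9
payoff_family_planning = 1e3

# Extinction risk reduction: tiny probability of actually making the
# difference, astronomically large payoff conditional on success and on
# the total view being correct (e.g. ~10^50 happy future beings).
p_extinction = 1e-10
payoff_extinction = 1e50

ev_family_planning = p_family_planning * payoff_family_planning   # 9.0e+02
ev_extinction = p_extinction * payoff_extinction                  # 1.0e+40

print(f"EV(family planning): {ev_family_planning:.1e}")
print(f"EV(extinction risk): {ev_extinction:.1e}")
```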
I don’t think it makes sense to single out family planning in particular with this kind of argument.
I think there’s a big difference between strong longtermism (the argument you state) and my comment’s argument that FEM’s intervention is net negative.
My comment argues that while FEM is well-meaning, its intervention may be net negative because it prevents people from experiencing lives they would have been glad to have lived. For my comment's argument to be plausible, all one needs to believe is that the loves and friendships future people may have are a positive good. Yes, my comment appeals to longtermism's endorsement of this view, but its claims and requirements are far more modest than those of strong longtermism.
There is no double standard or singling out here. I think global health work is good, and support funding for it on the margin. I believe the same about animal welfare, and about longtermism. Yes, some interventions are more cost-effective than others, and I think broadly similar arguments (e.g. even if you think animals don’t matter, a small chance that they do matter should be enough to prioritize animal welfare over global health due to animal welfare’s scale and neglectedness) do indeed go through.
If you provided me another example of a neartermist intervention which prevents people from experiencing lives they would have been glad to have lived, I would make the same argument against it as in my earlier comment. It could be family planning, or it could be something else (e.g. advocacy of a one-child policy, perhaps for environmentalist purposes).
I’m also quite sympathetic to the pure philosophical case for strong longtermism, though I have some caveats in practice. So yes, I don’t think your statement of strong longtermism is unreasonable.
tl;dr: I think your strong argument based on MEC depends on pretty controversial assumptions, and your more modest argument doesn't imply we shouldn't support family planning at all in our portfolio, all things considered.
Your original argument does depend on MEC and (roughly) risk-neutral EV maximization for the total view, or else high credence (>50%?) in moral views according to which it's good to make more happy people. You were multiplying the number of lives prevented by the credence in totalism. The standard argument for strong longtermism does essentially the same.
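For concreteness, the multiplication I have in mind looks roughly like this, with entirely made-up inputs (the credence, the count of lives prevented, and the value units are all hypothetical):

```python
# A minimal sketch of the MEC-style multiplication described above.
# Every number here is a hypothetical placeholder, not an estimate.

credence_totalism = 0.3     # credence in the total view
lives_prevented = 1_000     # births averted by the intervention
value_per_life = 1.0        # value of one happy life under totalism

# Under the total view, preventing happy lives is a loss. For simplicity,
# assume the other views in the credal portfolio assign zero (dis)value
# to merely adding or preventing people.
expected_harm = credence_totalism * lives_prevented * value_per_life
print(expected_harm)  # 300.0 in these arbitrary units
```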
Similar arguments don’t go through on most other popular approaches to moral uncertainty without high credence in the good of making more happy people. I’d guess most EA interventions are net negative according to some view that isn’t totally implausible, and this isn’t enough to stop us from pursuing them. Some approaches to moral uncertainty, like the property rights approach, could support family planning even if it’s (very) net negative according to a credal supermajority of views. (Maybe that counts against the approach, though!)
Your more modest (non-MEC) argument about recognizing goods (love, friendship) impersonally, on the other hand, would not be very persuasive to most people who endorse person-affecting views. It might be the most basic standard objection to them, and pretty close to just the direct assertion that PAVs are false. It also wouldn’t even follow that we shouldn’t support family planning, if we consider moral uncertainty and assign some credence to person-affecting views. That would depend on the specifics.
If the difference between family planning work and the next best opportunities is small enough for welfare maximizers with person-affecting views, those without person-affecting views can pay those with PAVs the difference to forgo family planning work. Or maybe those with PAVs avoid family planning work to cooperate with other EAs, but I don't think other EAs are very opposed to family planning, and cooperation with non-EAs might actually support family planning instead (perhaps depending on the kind of work, maybe less so abortion specifically).
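Here is a toy version of that trade, with every figure hypothetical:

```python
# Moral trade sketch: all values are made up for illustration.

# A welfare maximizer with a person-affecting view (PAV) slightly prefers
# family planning work to their next best opportunity:
pav_value_family_planning = 105.0
pav_value_next_best = 100.0
difference = pav_value_family_planning - pav_value_next_best  # 5.0

# A donor without PAVs thinks family planning is net negative, with a
# disvalue (in the same welfare-equivalent units) exceeding that difference:
non_pav_disvalue = 50.0

if non_pav_disvalue > difference:
    # Paying the difference leaves the PAV side no worse off while the
    # non-PAV side avoids what it sees as a much larger harm.
    print(f"Trade goes through: pay {difference} units to redirect "
          f"funding to the next best opportunity.")
```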
I agree that we can distinguish between net negative (compared to doing nothing or some other default) and net positive but worse than something else, but the result is the same under consequentialism and MEC (if the argument succeeds): you shouldn't support either family planning or neartermist work generally, because there are better options, namely doing nothing (for family planning) or extinction risk reduction (for both). Again, under other approaches to moral uncertainty, it isn't obvious that family planning shouldn't be supported at all.
(This is ignoring some movement building effects of neartermist work and probably some other indirect effects. Under strong longtermism, maybe GiveWell did a lot of good by funneling people towards AI safety, building general expertise or buying reputation.)