At the risk of repetition, I’d say that by the same reasoning, we could likewise add in our best estimates of the effect of saving a life on (just, say) total human welfare up to 2100.
Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of “sum u_i”, no?
Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we’re not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn’t seem to me like the point of dispute in this case.
I’m not sure where we’re failing to communicate exactly, but I’m a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.
I’m not trying to solve all complex cluelessness cases with my argument. I think population growth is plausibly a case with complex cluelessness, but this depends on your views.
If I were a total utilitarian with symmetric population ethics, and didn’t care much about nonhuman animals (neither of which is actually true for me), then I’d guess the negative externalities of a larger population would be strongly dominated by its benefits, mostly the direct welfare of the extra people themselves. I don’t think the effects of climate change are that important here, and I’m not aware of other important negative externalities. So for people with such views, it’s actually just not a case of complex cluelessness at all. The expectation that more people than just the one you saved will live probably increases the cost-effectiveness for someone with such views.
Similarly, I think Brian Tomasik has supported the Humane Slaughter Association basically because he doesn’t think the effects on animal population sizes and wild animals generally are significant compared to the benefits. It does good with little risk of harm.
So, compared to doing nothing (or some specific default action), some actions do look robustly good in expectation. Compared to some other options, there will be complex cluelessness, but I’m happy to choose something that looks best in expectation compared to doing nothing. I suppose this might privilege a specific default action to compare to in a nonconsequentialist way, although maybe there’s a way that gives similar recommendations without such privileging (I’m only thinking about this now):
You could model this as a partial order, with A strictly dominating B if the expected value of A is robustly greater than the expected value of B. At a minimum, you should never choose dominated actions. You could also require that the action you choose dominates at least one other action whenever there is any domination in the set of actions you’re considering, and maybe this would handle a lot of complex cluelessness, provided actions are decomposed into sufficiently atomic pieces. For example, with complex cluelessness about saving lives compared to doing nothing: saving a life and punching myself in the face is dominated by saving a life and not punching myself in the face, but I can treat “saving a life or not” and, at a separate time, “punching myself in the face or not” as two separate decisions.
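The partial-order idea can be sketched concretely. This is only a toy illustration under my own modeling assumption: I represent “robustly greater” as being greater under every credence function in some set we take seriously, and all the numbers and action names below are made up for the example:

```python
# Toy model of the dominance partial order: each action gets an
# expected value under each of several plausible credence functions.
# A robustly dominates B when A's expected value beats B's under
# every credence function considered.

def dominates(ev_a, ev_b):
    """True if ev_a exceeds ev_b under every credence function."""
    return all(a > b for a, b in zip(ev_a, ev_b))

def undominated(actions):
    """Names of actions not robustly beaten by any alternative."""
    return [name for name, ev in actions.items()
            if not any(dominates(other_ev, ev)
                       for other_name, other_ev in actions.items()
                       if other_name != name)]

# Hypothetical expected values under three credence functions.
# "Save a life" vs "do nothing" flips sign across credences
# (complex cluelessness, so neither dominates the other), but
# punching yourself is robustly bad, so the combined action is
# dominated and gets filtered out.
actions = {
    "do nothing":          [0.0,   0.0,  0.0],
    "save a life":         [50.0, -10.0, 30.0],
    "save a life + punch": [49.0, -11.0, 29.0],
}
print(undominated(actions))  # ['do nothing', 'save a life']
```

Note that `undominated` leaves both “do nothing” and “save a life” standing, which matches the point above: dominance filtering rules out the clearly bad option without forcing a verdict between the incomparable ones.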