Maybe I'm misunderstanding you, but I believe the proposition being defended here is that the distribution of long-term welfare outcomes from a short-termist intervention differs substantially from the status quo distribution of long-term welfare outcomes (and that this distribution-difference is much larger than the intervention's direct benefits). Do you mean that you're not convinced that this is the case for any short-termist intervention?
To be specific (and revising my claim somewhat), I'm not convinced of any net expected long-term effect in any particular direction on my social welfare function/utility function. I think there are many considerations that can go in either direction, the weight we give them is basically arbitrary, and I usually don't have good reason to believe their effects persist very long or are that important, anyway.
Assuming that fertility in saved children isn't dramatically lower than population fertility, this strikes me as a strong reason to think that the indirect welfare effects of saving a young person's life in Africa today – indeed, even a majority of the effects on total human welfare before 2100 – will be larger than the direct welfare effect.
I am arguing from ignorance here, but I don't yet have enough reason to believe the expected effect is good or bad. Unless I expect to be able to weigh opposing considerations against one another in a way that feels robust and satisfactory to me and be confident that I'm not missing crucial considerations, I'm inclined to not account for them until I can (but also try to learn more about them in hope of having more robust predictions). A sensitivity analysis might help, too, but only so much. The two studies you cite are worth looking into, but there are also effects of different population sizes that you need to weigh. How do you weigh them against each other?
this strikes me as a strong reason to think that the indirect welfare effects of saving a young person's life in Africa today – indeed, even a majority of the effects on total human welfare before 2100 – will be larger than the direct welfare effect.
What's the expected value (on net) of the indirect effects to you? Is its absolute value much greater than the direct effects' expected value? How robust do you think the sign of the expected value of the indirect effects is to your subjective weighting of different considerations and missed considerations?
Also, what do you think the expected change in population size is from saving one life through AMF?
Hold on – now it seems like you might be talking past the OP on the issue of complex cluelessness. I 1000% agree that changing population size has many effects beyond those I listed, and that we can't weigh them; but that's the whole problem!
The claim is that CC arises when (a) there are both predictably positive and predictably negative indirect effects of (say) saving lives which are larger in magnitude than the direct effects, and (b) you can't weigh them all against each other so as to arrive at an all-things-considered judgment of the sign of the value of the intervention.
A common response to the phenomenon of CC is to say, "I know that the direct effects are good, and I struggle to weigh all of the indirect effects, so the latter are zero for me in expectation, and the intervention is appealing". But (unless there's a strong counterargument to Hilary's observation about this in "Cluelessness" which I'm unaware of), this response is invalid. We know this because if this response were valid, we could by identical reasoning pick out any category of effect whose effects we can estimate – the effect on farmed chicken welfare next year from saving a chicken-eater's life, say – and say "I know that the next-year-chicken effects are bad, and I struggle to weigh all of the non-next-year-chicken effects, so the latter are zero for me in expectation, and the intervention is unappealing".
The above reasoning doesn't invalidate that kind of response to simple cluelessness, because there the indirect effects have a feature – symmetry – which breaks when you cut up the space of consequences differently. But this means that, unless one can demonstrate that the distribution of non-direct effects has a sort of evidential symmetry that the distribution of non-next-year-chicken effects does not, one is not yet in a position to put a sign to the value of saving a life.
So, the response to
What's the expected value (on net) of the indirect effects to you? Is its absolute value much greater than the direct effects' expected value?
is that, given an inability to weigh all the effects, and an absence of evidential symmetry, I simply don't have an expected value (or even a sign) of the indirect effects, or the total effects, of saving a life.
Does that clarify things at all, or am I the one doing the talking-past?
Sorry, I misunderstood your comment on my first reading, so I retracted my first reply.
No worries, sorry if I didn't write it as clearly as I could have!
BTW, I've had this conversation enough times now that last summer I wrote down my thoughts on cluelessness in a document that I've been told is pretty accessible – this is the doc I link to from the words "don't have an expected value". I know it can be annoying just to be pointed off the page, but just letting you know in case you find it helpful or interesting.
I'm not sure if what I'm defending is quite the same as what's in your example. It's not really about direct or indirect effects or how to group effects to try to cancel them; it's just skepticism about effects.
The claim is that CC arises when (a) there are both predictably positive and predictably negative indirect effects of (say) saving lives which are larger in magnitude than the direct effects, and (b) you can't weigh them all against each other so as to arrive at an all-things-considered judgment of the sign of the value of the intervention.
I'll exclude whichever effects (possibly multiple) I don't have a good effect size estimate on my social welfare function for, since I'll assume the expected effect size is small. If I have effect sizes for both, then I can just estimate the net effect. As a first approximation, I'd just add the two effects. If I have reason to believe they should interact in certain ways and I can model this, I might.
If you're saying I know the two opposite-sign indirect effects are larger in magnitude than the direct ones, it sounds like I have estimates I can just sum (as a first approximation). Is the point that I'm confident they're larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely?
Maybe I have a good idea of the impacts over each possible future, but I'm very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved, and the other not, but I'm not confident enough to form distributions over the two sets of counterfactuals to be able to determine the sign of the expected value.
I think I'm basically treating each effect without an estimate attached independently, like simple cluelessness. I'm not looking at a group of positive and negative effects and assuming they cancel; I'm doubting the signs of the effects that don't come with estimates. If I have a plausible argument that doing X affects Y and Y affects Z, which I value directly and the effect should be good, but I don't have an estimate for the effect through this causal path, I'm not actually convinced that the effect through this path isn't bad.
Now, I'm not relying on a nice symmetry argument to justify this treatment like simple cluelessness, but I'm also not cutting up the space of consequences and ignoring subsets; I'm just ignoring each effect I'm skeptical of.
This does push the problem to which effects I should try to estimate, though.
Is the point that I'm confident they're larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely?
Yes, exactly – that's the point of the African population growth example.
Maybe I have a good idea of the impacts over each possible future, but I'm very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved, and the other not, but I'm not confident enough to form distributions over the two sets of counterfactuals to be able to determine the sign of the expected value.
I don't understand this paragraph. Could you clarify?
I don't think I understand this either:
I'm doubting the signs of the effects that don't come with estimates. If I have a plausible argument that doing X affects Y and Y affects Z, which I value directly and the effect should be good, but I don't have an estimate for the effect through this causal path, I'm not actually convinced that the effect through this path isn't bad.
Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don't have estimates for them. At face value you seem to be saying you're not convinced that the effect of pushing the switch isn't bad, but that can't be right!
I don't understand this paragraph. Could you clarify?
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust. E.g. I might think it's bad in cases like X and good in cases like not-X and have conditional expectations for both, but I'm basically just guessing the probability of X, and which is better depends on the probability of X (under each action).
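To make that concrete with made-up numbers (just an illustration of how the verdict hinges on the guessed probability, not an estimate of anything):

```latex
\mathbb{E}[V] \;=\; p\,\mathbb{E}[V \mid X] \;+\; (1-p)\,\mathbb{E}[V \mid \neg X]
```

If, say, E[V|X] = −10 and E[V|¬X] = +4, then E[V] = 4 − 14p, which is positive for p < 2/7 and negative for p > 2/7, so the sign of the expected value turns entirely on a probability I'm basically just guessing.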
Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don't have estimates for them. At face value you seem to be saying you're not convinced that the effect of pushing the switch isn't bad, but that can't be right!
So the assumption here is that I think the effect is nonnegative with probability 1. I don't think mere plausibility arguments or considerations give me that kind of credence. As a specific example, is population growth actually bad for climate change? The argument is "More people, more consumption, more emissions", but with no numbers attached. In this case, I think there's some probability that population growth is good for climate change, and without estimates for the argument, I'd assume the amount of climate change would be identically distributed with and without population growth. Of course, in this case, I think we have enough data and models to actually estimate some of the effects.
Even with estimates, I still think there's a chance population growth is good for climate change, although my expected value would be that it's bad. It could depend on what the extra people are like, and what kinds of effects they have on society.
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust.
Suppose for simplicity that we can split the effects of saving a life into
1) benefits accruing to the beneficiary;
2) benefits accruing to future generations up to 2100, through increased population size (following from (1)); and
3) further effects (following from (2)).
It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.
If that's right, what I'm struggling to see is why we can't likewise say that there's some proposition Y such that (2 & 3) is overall good if Y and bad if not-Y, where we can only guess at the probability of Y, and that the overall effect of (1 & 2 & 3) is therefore ~zero in expectation.
It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.
I wasn't saying we should cancel them this way; I'm just trying to understand exactly what the CC problem is here.
What I have been proposing is that I'm independently skeptical of each causal effect that doesn't come with effect size estimates (and especially ones that can't), as in my other comments, and Saulius' here. If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.
However, I'm thinking that I could be pretty confident about effect sizes conditional on X and not-X, but have little idea about the probability of X. In this case, I shouldn't just apply the same skepticism, and I'm stuck trying to figure out the probability of X, which would allow me to weigh the different effects against each other, but I don't know how to do it. Is this an example of CC?
If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.
What I'm saying is, "Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."
Is this an example of CC?
Yes, you have CC in that circumstance if you don't have evidential symmetry with respect to X.
"Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."
The value to the universe is the sum of values to possible beneficiaries, including the direct ones C, so there is a direct and known causal effect of C on B. u_1 has a causal effect on ∑_i u_i, under any reasonable definition of causal effect, and it's the obvious one: any change in u_1 directly causes an equal change in the sum, without affecting the other terms. The value in my life (or some moment of it), u_1, doesn't affect yours, u_2, although my life itself or your judgment about my u_1 might affect your life and your u_2. Similarly, any subset of the u_i (including C) has a causal effect on the sum.
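In symbols (this just restates the point above, with B written out as the sum of individual values; nothing new is assumed):

```latex
B \;=\; \sum_i u_i \;=\; u_1 + u_2 + u_3 + \dots
\qquad\Rightarrow\qquad
\frac{\partial B}{\partial u_1} \;=\; 1,
```

so increasing u_1 by some amount, holding the other terms fixed, increases B by exactly that amount.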
If you think A has no effect on B (in expectation), this is a claim that the effects through C are exactly negated by other effects from A (in expectation), but this is the kind of causal claim that I've been saying I'm skeptical of, since it doesn't come with a (justified) effect size estimate (or even a plausible argument for how this happens, in this case).
This is pretty different from the skepticism I have about long-term effects: in this case, people are claiming that A affects a particular set of beneficiaries C where C is in the future, but they haven't justified an effect size of A on C in the first place; many things could happen before C, completely drowning out the effect. Since I'm not convinced C is affected in any particular way, I'm not convinced B is either, through this proposed causal chain.
With short-term effects, when there's good feedback, I actually have proxy observations that tell me that in fact A affects C in certain ways (although there are still generalization error and the reference class problem to worry about).
At the risk of repetition, I'd say that by the same reasoning, we could likewise add in our best estimates of the effect of saving a life on (just, say) total human welfare up to 2100.
Your response here was that "[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust". But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of "∑_i u_i", no?
Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we're not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn't seem to me like the point of dispute in this case.
I'm not sure where we're failing to communicate exactly, but I'm a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.
I'm not trying to solve all complex cluelessness cases with my argument. I think population growth is plausibly a case with complex cluelessness, but this depends on your views.
If I were a total utilitarian with symmetric population ethics, and didn't care much about nonhuman animals (neither of which is actually true for me), then I'd guess the negative externalities of a larger population would be strongly dominated by the benefits of a larger population, mostly just the direct benefits of the welfare of the extra people. I don't think the effects of climate change are that important here, and I'm not aware of other important negative externalities. So for people with such views, it's actually just not a case of complex cluelessness at all. The expectation that more people than just the one you saved will live probably increases the cost-effectiveness to someone with such views.
Similarly, I think Brian Tomasik has supported the Humane Slaughter Association basically because he doesn't think the effects on animal population sizes and wild animals generally are significant compared to the benefits. It does good with little risk of harm.
So, compared to doing nothing (or some specific default action), some actions do look robustly good in expectation. Compared to some other options, there will be complex cluelessness, but I'm happy to choose something that looks best in expectation compared to doing nothing. I suppose this might privilege a specific default action to compare to in a nonconsequentialist way, although maybe there's a way that gives similar recommendations without such privileging (I'm only thinking about this now):
You could model this as a partial order with A strictly dominating B if the expected value of A is robustly greater than the expected value of B. At least, you should never choose dominated actions. You could also require that the action you choose dominates at least one action when there is any domination in the set of actions you're considering, and maybe this would handle a lot of complex cluelessness, if actions are decomposed enough into pretty atomic actions. For example, with complex cluelessness about saving lives compared to nothing, saving a life and punching myself in the face is dominated by saving a life and not punching myself in the face, but I can treat saving a life or not and, at a separate time, punching myself in the face or not as two separate decisions.
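Here's a minimal sketch in code of the dominance rule I have in mind. Everything in it (the candidate weightings, the numbers, the action names) is an illustrative assumption rather than an estimate of anything above; it just shows "never choose an action that is worse under every plausible weighting".

```python
# Minimal sketch: "never choose robustly dominated actions".
# Each candidate weighting resolves the hard-to-weigh considerations one way
# and assigns an expected value to every action. All numbers are made up.

weightings = ["w1", "w2", "w3"]

expected_value = {
    "do nothing":               {"w1": 0.0,  "w2": 0.0,   "w3": 0.0},
    "save a life":              {"w1": 30.0, "w2": -10.0, "w3": 5.0},
    "save a life + punch self": {"w1": 29.0, "w2": -11.0, "w3": 4.0},
}

def robustly_dominates(a: str, b: str) -> bool:
    """A robustly dominates B if A has higher expected value under every weighting."""
    return all(expected_value[a][w] > expected_value[b][w] for w in weightings)

undominated = [
    a for a in expected_value
    if not any(robustly_dominates(b, a) for b in expected_value if b != a)
]

print(undominated)
# ['do nothing', 'save a life']: the punch-self option is dominated and dropped,
# while "do nothing" vs "save a life" stays unresolved, because the sign of the
# difference flips across weightings (the complex cluelessness case).
```

Treating roughly independent choices as separate decisions, as in the punching example, would then just mean running this comparison within each decision rather than over bundled actions.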