I don't understand this paragraph. Could you clarify?
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust. E.g. I might think it's bad in cases like X and good in cases like not-X and have conditional expectations for both, but I'm basically just guessing the probability of X, and which is better depends on the probability of X (under each action).
Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don't have estimates for them. At face value you seem to be saying you're not convinced that the effect of pushing the switch isn't bad, but that can't be right!
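(To put numbers on it, here's a minimal sketch; the priors over n and m are pure assumptions for illustration, and the point is that any priors supported on positive values give a positive expected effect.)

```python
import random

# Toy model of the switch example. We know n, m > 0 but have no point estimates,
# so put arbitrary, purely illustrative priors over them.
def sampled_effect_on_Z():
    n = random.randint(1, 100)  # strings pulled by the switch (illustrative prior)
    m = random.randint(1, 50)   # hungry mice fed per string (illustrative prior)
    return n * m                # change in Z = #fed_mice from pushing the switch

samples = [sampled_effect_on_Z() for _ in range(100_000)]
print(min(samples) > 0)             # True: the effect is positive in every sampled world
print(sum(samples) / len(samples))  # the expected effect is positive despite no estimates
```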
So the assumption here is that I think the effect is nonnegative with probability 1. I don't think mere plausibility arguments or considerations give me that kind of credence. As a specific example, is population growth actually bad for climate change? The argument is "More people, more consumption, more emissions", but with no numbers attached. In this case, I think there's some probability that population growth is good for climate change, and without estimates for the argument, I'd assume the amount of climate change would be identically distributed with and without population growth. Of course, in this case, I think we have enough data and models to actually estimate some of the effects.
Even with estimates, I still think there's a chance population growth is good for climate change, although my expected value would be that it's bad. It could depend on what the extra people are like, and what kinds of effects they have on society.
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust.
Suppose for simplicity that we can split the effects of saving a life into
1) benefits accruing to the beneficiary;
2) benefits accruing to future generations up to 2100, through increased size (following from (1)); and
3) further effects (following from (2)).
It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.
If that's right, what I'm struggling to see is why we can't likewise say that there's some proposition Y such that (2 & 3) is overall good if Y and bad if not-Y, where we can only guess at the probability of Y, and that the overall effect of (1 & 2 & 3) is therefore ~zero in expectation.
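(To exhibit the structure with made-up numbers: all of the conditional expectations and guessed probabilities below are assumptions purely for illustration.)

```python
# Step one: (3) is good if X, bad if not-X, and P(X) is basically a guess,
# so the overall effect of (2 & 3) gets treated as ~zero in expectation.
E1 = 40.0                                # (1) benefit to the beneficiary: well estimated
E3_if_X, E3_if_notX = 1e6, -1e6          # (3) dwarfs (2); its sign hinges on X
p_X = 0.5                                # a pure guess
E_23 = p_X * E3_if_X + (1 - p_X) * E3_if_notX
print(E_23)                              # ~0 under the guessed p_X

# Step two, the parallel move: (2 & 3) is good if Y, bad if not-Y, with P(Y) also guessed,
# so the same reasoning would seem to make (1 & 2 & 3) ~zero in expectation too.
E23_if_Y, E23_if_notY = 1e6, -1e6
p_Y = 0.5
E_123 = E1 + p_Y * E23_if_Y + (1 - p_Y) * E23_if_notY
print(E_123)                             # ~E1 only if the guess is exactly 0.5; otherwise (1) is swamped
```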
It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.
I wasn't saying we should cancel them this way; I'm just trying to understand exactly what the CC problem is here.
What I have been proposing is that I'm independently skeptical of each causal effect that doesn't come with effect size estimates (and especially of those that can't), as in my other comments, and Saulius' here. If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.
However, I'm thinking that I could be pretty confident about effect sizes conditional on X and not-X, but have little idea about the probability of X. In this case, I shouldn't just apply the same skepticism, and I'm stuck trying to figure out the probability of X, which would allow me to weigh the different effects against each other, but I don't know how to do it. Is this an example of CC?
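(Concretely, with invented numbers: I can estimate the effect conditional on X and conditional on not-X, but the sign of the overall expectation turns entirely on a probability I can only guess.)

```python
# Invented numbers: conditional effect estimates I'm fairly confident in,
# with P(X) the only thing I'm guessing.
effect_given_X = -3.0     # e.g. the effect is bad in worlds like X
effect_given_notX = 5.0   # and good in worlds like not-X

def expected_effect(p_X):
    return p_X * effect_given_X + (1 - p_X) * effect_given_notX

for p in (0.3, 0.625, 0.9):       # guesses I have no principled way to choose between
    print(p, expected_effect(p))  # positive, zero, then negative: the guess does all the work
```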
If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.
What I'm saying is, "Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."
Is this an example of CC?
Yes, you have CC in that circumstance if you don't have evidential symmetry with respect to X.
"Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."
The value to the universe is the sum of values to possible beneficiaries, including the direct ones C, so there is a direct and known causal effect of C on B. u_1 has a causal effect on the sum ∑_i u_i, under any reasonable definition of causal effect, and it's the obvious one: any change in u_1 directly causes an equal change in the sum, without affecting the other terms. The value in my life (or some moment of it), u_1, doesn't affect yours, u_2, although my life itself or your judgment about my u_1 might affect your life and your u_2. Similarly, any subset of the u_i (including C) has a causal effect on the sum.
If you think A has no effect on B (in expectation), this is a claim that the effects through C are exactly negated by other effects from A (in expectation), but this is the kind of causal claim that I've been saying I'm skeptical of, since it doesn't come with a (justified) effect size estimate (or even a plausible argument for how this happens, in this case).
This is pretty different from the skepticism I have about long-term effects: in that case, people are claiming that A affects a particular set of beneficiaries C, where C is in the future, but they haven't justified an effect size of A on C in the first place; many things could happen before C, completely drowning out the effect. Since I'm not convinced C is affected in any particular way, I'm not convinced B is either, through this proposed causal chain.
With short-term effects, when there's good feedback, I actually have proxy observations that tell me that in fact A affects C in certain ways (although there are still generalization error and the reference class problem to worry about).
At the risk of repetition, I'd say that by the same reasoning, we could likewise add in our best estimates of the effect of saving a life on (just, say) total human welfare up to 2100.
Your response here was that "[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust". But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of "sum u_i", no?
Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we're not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn't seem to me like the point of dispute in this case.
I'm not sure where we're failing to communicate exactly, but I'm a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.
I'm not trying to solve all complex cluelessness cases with my argument. I think population growth is plausibly a case with complex cluelessness, but this depends on your views.
If I were a total utilitarian with symmetric population ethics, and didn't care much about nonhuman animals (neither of which is actually true for me), then I'd guess the negative externalities of a larger population would be strongly dominated by the benefits of a larger population, mostly just the direct benefits of the welfare of the extra people. I don't think the effects of climate change are that important here, and I'm not aware of other important negative externalities. So for people with such views, it's actually just not a case of complex cluelessness at all. The expectation that more people than just the one you saved will live probably increases the cost-effectiveness to someone with such views.
Similarly, I think Brian Tomasik has supported the Humane Slaughter Association basically because he doesn't think the effects on animal population sizes and wild animals generally are significant compared to the benefits. It does good with little risk of harm.
So, compared to doing nothing (or some specific default action), some actions do look robustly good in expectation. Compared to some other options, there will be complex cluelessness, but I'm happy to choose something that looks best in expectation compared to doing nothing. I suppose this might privilege a specific default action to compare to in a nonconsequentialist way, although maybe there's a way that gives similar recommendations without such privileging (I'm only thinking about this now):
You could model this as a partial order, with A strictly dominating B if the expected value of A is robustly greater than the expected value of B. At the least, you should never choose dominated actions. You could also require that the action you choose dominates at least one action whenever there is any domination in the set of actions you're considering, and maybe this would handle a lot of complex cluelessness, if actions are decomposed enough into fairly atomic actions. For example, with complex cluelessness about saving lives compared to doing nothing, saving a life and punching myself in the face is dominated by saving a life and not punching myself in the face, but I can treat saving a life (or not) and, at a separate time, punching myself in the face (or not) as two separate decisions.
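(A rough sketch of that rule; representing each action by its expected value under a handful of credence functions, and all the numbers, are assumptions purely for illustration.)

```python
# Expected value of each action under a few credence functions I take seriously
# (numbers invented). "A robustly dominates B" = A beats B under every credence function.
actions = {
    "do nothing":                       [0.0,   0.0,  0.0],
    "save a life":                      [50.0, -5.0, 20.0],  # sign flips across credences
    "save a life + punch self in face": [49.0, -6.0, 19.0],  # strictly worse under each one
}

def dominates(a, b):
    return all(ev_a > ev_b for ev_a, ev_b in zip(actions[a], actions[b]))

dominated = {b for b in actions for a in actions if a != b and dominates(a, b)}
admissible = [a for a in actions if a not in dominated]
print(dominated)   # {'save a life + punch self in face'}
print(admissible)  # both 'do nothing' and 'save a life' survive: the complex cluelessness remains
```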