Is the point that I’m confident they’re larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely?
Yes, exactly—that’s the point of the African population growth example.
Maybe I have a good idea of the impacts in each possible future, but I’m very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved and the other not, but I’m not confident enough to form distributions over the two sets of counterfactuals to determine the sign of the expected value.
I don’t understand this paragraph. Could you clarify?
I don’t think I understand this either:
I’m doubting the signs of the effects that don’t come with estimates. If I have a plausible argument that doing X affects Y and Y affects Z, which I value directly and the effect should be good, but I don’t have an estimate for the effect through this causal path, I’m not actually convinced that the effect through this path isn’t bad.
Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don’t have estimates for them. At face value you seem to be saying you’re not convinced that the effect of pushing the switch isn’t bad, but that can’t be right!
I don’t understand this paragraph. Could you clarify?
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust. E.g. I might think it’s bad in cases like X and good in cases like not-X and have conditional expectations for both, but I’m basically just guessing the probability of X, and which is better depends on the probability of X (under each action).
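(To make the structure of this point explicit, here is a minimal sketch in my own notation, not the commenter’s, writing ΔV for the net effect of the action; the conditional expectations are treated as known and P(X) is only a guess.)

```latex
% Sketch in my own notation: \Delta V is the net effect of the action,
% the conditional expectations are treated as known, and P(X) is only guessed.
\[
  E[\Delta V] = P(X)\, E[\Delta V \mid X] + \bigl(1 - P(X)\bigr)\, E[\Delta V \mid \lnot X].
\]
% If, say, E[\Delta V \mid X] < 0 < E[\Delta V \mid \lnot X], then the sign of
% E[\Delta V] flips as the guessed P(X) crosses the threshold
% P^* = E[\Delta V \mid \lnot X] \,/\, \bigl(E[\Delta V \mid \lnot X] - E[\Delta V \mid X]\bigr).
```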
Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don’t have estimates for them. At face value you seem to be saying you’re not convinced that the effect of pushing the switch isn’t bad, but that can’t be right!
So the assumption here is that I think the effect is nonnegative with probability 1. I don’t think mere plausibility arguments or considerations give me that kind of credence. As a specific example, is population growth actually bad for climate change? The argument is “More people, more consumption, more emissions”, but with no numbers attached. In this case, I think there’s some probability that population growth is good for climate change, and without estimates for the argument, I’d assume the amount of climate change would be identically distributed with and without population growth. Of course, in this case, I think we have enough data and models to actually estimate some of the effects.
Even with estimates, I still think there’s a chance population growth is good for climate change, although my expectation would be that it’s bad. It could depend on what the extra people are like, and what kinds of effects they have on society.
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust.
Suppose for simplicity that we can split the effects of saving a life into
1) benefits accruing to the beneficiary;
2) benefits accruing to future generations up to 2100, through increased population size (following from (1)); and
3) further effects (following from (2)).
It seems like you’re saying that there’s some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.
If that’s right, what I’m struggling to see is why we can’t likewise say that there’s some proposition Y such that (2 & 3) is overall good if Y and bad if not-Y, where we can only guess at the probability of Y, and that the overall effect of (1 & 2 & 3) is therefore ~zero in expectation.
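(To make the parallel explicit, here is a minimal sketch in my own notation, writing W_1, W_2, W_3 for the values of components (1), (2) and (3).)

```latex
% My rendering of the two steps being compared, with W_1, W_2, W_3 the values
% of components (1), (2), (3) above.
% The claimed cancellation:
\[
  E[W_2 + W_3] = P(X)\, E[W_2 + W_3 \mid X]
               + \bigl(1 - P(X)\bigr)\, E[W_2 + W_3 \mid \lnot X] \approx 0.
\]
% The parallel step being questioned:
\[
  E[W_1 + W_2 + W_3] = P(Y)\, E[W_1 + W_2 + W_3 \mid Y]
                     + \bigl(1 - P(Y)\bigr)\, E[W_1 + W_2 + W_3 \mid \lnot Y] \approx 0.
\]
```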
It seems like you’re saying that there’s some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.
I wasn’t saying we should cancel them this way; I’m just trying to understand exactly what the complex cluelessness (CC) problem is here.
What I have been proposing is that I’m independently skeptical of each causal effect that doesn’t come with effect size estimates (and especially of ones that can’t), as in my other comments, and Saulius’ here. If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.
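(A minimal way to write this default assumption, using do-notation; the formalization is mine, not the commenter’s.)

```latex
% My formalization of "treat that like simple cluelessness": absent a justified
% effect size, the default is that intervening on A leaves B's distribution unchanged.
\[
  P\bigl(B \mid \mathrm{do}(A = a)\bigr) = P\bigl(B \mid \mathrm{do}(A = a')\bigr)
  \quad \forall\, a, a'.
\]
```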
However, I’m thinking that I could be pretty confident about effect sizes conditional on X and not-X, but have little idea about the probability of X. In this case, I shouldn’t just apply the same skepticism, and I’m stuck trying to figure out the probability of X, which would allow me to weigh the different effects against each other, but I don’t know how to do it. Is this an example of CC?
If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.
What I’m saying is, “Michael: you’ve given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that’s not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one.”
Is this an example of CC?
Yes, you have CC in that circumstance if you don’t have evidential symmetry with respect to X.
“Michael: you’ve given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that’s not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one.”
The value to the universe is the sum of the values to possible beneficiaries, including the direct ones C, so there is a direct and known causal effect of C on B. u_1 has a causal effect on ∑_i u_i, under any reasonable definition of causal effect, and it’s the obvious one: any change in u_1 directly causes an equal change in the sum, without affecting the other terms. The value in my life (or some moment of it), u_1, doesn’t affect the value in yours, u_2, although my life itself or your judgment about my u_1 might affect your life and your u_2. Similarly, any subset of the u_i (including C) has a causal effect on the sum.
If you think A has no effect on B (in expectation), this is a claim that the effects through C are exactly negated by other effects from A (in expectation), but this is the kind of causal claim that I’ve been saying I’m skeptical of, since it doesn’t come with a (justified) effect size estimate (or, in this case, even a plausible argument for how this happens).
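(In symbols, again in my own notation: splitting the change in total value into the part going to the direct beneficiaries C and the part going to everyone else.)

```latex
% My notation: B = \sum_i u_i, with C the set of direct beneficiaries.
\[
  E[\Delta B] = E\Bigl[\sum_{i \in C} \Delta u_i\Bigr] + E\Bigl[\sum_{i \notin C} \Delta u_i\Bigr].
\]
% So E[\Delta B] = 0 together with E[\sum_{i \in C} \Delta u_i] > 0 requires
% E[\sum_{i \notin C} \Delta u_i] = -E[\sum_{i \in C} \Delta u_i],
% i.e. an exactly offsetting (in expectation) effect elsewhere.
```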
This is pretty different from the skepticism I have about long-term effects: in this case, people are claiming that A affects a particular set of beneficiaries C where C is in the future, but they haven’t justified an effect size of A on C in the first place; many things could happen before C, completely drowning out the effect. Since I’m not convinced C is affected in any particular way, I’m not convinced B is either, through this proposed causal chain.
With short-term effects, when there’s good feedback, I actually have proxy observations that tell me that A does in fact affect C in certain ways (although generalization error and the reference class problem remain worries).
At the risk of repetition, I’d say that by the same reasoning, we could likewise add in our best estimates of the effect of saving a life on (just, say) total human welfare up to 2100.
Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of the sum ∑_i u_i, no?
Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we’re not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn’t seem to me like the point of dispute in this case.
I’m not sure where we’re failing to communicate exactly, but I’m a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.
I’m not trying to solve all complex cluelessness cases with my argument. I think population growth is plausibly a case with complex cluelessness, but this depends on your views.
If I were a total utilitarian with symmetric population ethics, and didn’t care much about nonhuman animals (neither of which is actually true for me), then I’d guess the negative externalities of a larger population would be strongly dominated by the benefits of a larger population, mostly just the direct benefits of the welfare of the extra people. I don’t think the effects of climate change are that important here, and I’m not aware of other important negative externalities. So for people with such views, it’s actually just not a case of complex cluelessness at all. The expectation that more people than just the one you saved will live probably increases the cost-effectiveness to someone with such views.
Similarly, I think Brian Tomasik has supported the Humane Slaughter Association basically because he doesn’t think the effects on animal population sizes and wild animals generally are significant compared to the benefits. It does good with little risk of harm.
So, compared to doing nothing (or some specific default action), some actions do look robustly good in expectation. Compared to some other options, there will be complex cluelessness, but I’m happy to choose something that looks best in expectation compared to doing nothing. I suppose this might privilege a specific default action to compare to in a nonconsequentialist way, although maybe there’s a way that gives similar recommendations without such privileging (I’m only thinking about this now):
You could model this as a partial order, with A strictly dominating B if the expected value of A is robustly greater than the expected value of B. At a minimum, you should never choose dominated actions. You could also require that the action you choose dominates at least one action whenever there is any domination in the set of actions you’re considering, and maybe this would handle a lot of complex cluelessness, if actions are decomposed enough into fairly atomic actions. For example, with complex cluelessness about saving lives compared to doing nothing, saving a life and punching myself in the face is dominated by saving a life and not punching myself in the face; but I can treat “saving a life or not” and, at a separate time, “punching myself in the face or not” as two separate decisions.
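(A minimal sketch of how this dominance rule could be coded up. The set of “plausible” credences for X, the payoff numbers, and the function names are all hypothetical choices of mine, not anything proposed above.)

```python
from itertools import product

# Illustrative sketch only. Each action is represented by its expected value
# under every "plausible" credence p = P(X) for the ambiguous proposition X;
# the credences and payoffs below are hypothetical.

def expected_values(ev_if_x, ev_if_not_x, plausible_p):
    """Expected value of one action under each plausible probability of X."""
    return {p: p * ev_if_x + (1 - p) * ev_if_not_x for p in plausible_p}

def robustly_dominates(a_evs, b_evs):
    """A dominates B if A's expected value beats B's under every plausible credence."""
    return all(a_evs[p] > b_evs[p] for p in a_evs)

def admissible(actions):
    """Drop dominated actions; if any domination exists at all, also require
    the remaining candidates to dominate at least one action."""
    dominated = {
        name for name, evs in actions.items()
        if any(robustly_dominates(other, evs)
               for other_name, other in actions.items() if other_name != name)
    }
    undominated = {name: evs for name, evs in actions.items() if name not in dominated}
    any_domination = any(
        robustly_dominates(a, b)
        for (na, a), (nb, b) in product(actions.items(), repeat=2) if na != nb
    )
    if not any_domination:
        return list(undominated)
    return [
        name for name, evs in undominated.items()
        if any(robustly_dominates(evs, other)
               for other_name, other in actions.items() if other_name != name)
    ]

plausible_p = [0.2, 0.5, 0.8]  # hypothetical range of guesses for P(X)
actions = {
    "do nothing":          expected_values(0.0, 0.0, plausible_p),
    "save a life":         expected_values(10.0, -4.0, plausible_p),
    "save a life + punch": expected_values(9.0, -5.0, plausible_p),
}
print(admissible(actions))  # ['save a life']: it dominates "save a life + punch",
                            # and neither it nor "do nothing" dominates the other.
```

Note that with these hypothetical numbers, “do nothing” is excluded only by the requirement to dominate at least one action, not by being dominated itself.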