If you give me a causal model and claim A has a certain effect on B, without justifying a rough effect size, I am by default skeptical of that claim and treat it like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.
What I’m saying is, “Michael: you’ve given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that’s not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one.”
Is this an example of CC?
Yes, you have CC in that circumstance if you don’t have evidential symmetry with respect to X.
“Michael: you’ve given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that’s not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one.”
The value to the universe is the sum of the values to all possible beneficiaries, including the direct ones C, so there is a direct and known causal effect of C on B. u_1 has a causal effect on ∑_i u_i under any reasonable definition of causal effect, and it's the obvious one: any change in u_1 directly causes an equal change in the sum, without affecting the other terms. The value in my life (or some moment of it), u_1, doesn't affect yours, u_2, although my life itself or your judgment about my u_1 might affect your life and your u_2. Similarly, any subset of the u_i (including C) has a causal effect on the sum.
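To make the arithmetic explicit (writing V for the total, which is just my shorthand for B here):

$$
V = \sum_i u_i \quad\Rightarrow\quad \frac{\partial V}{\partial u_1} = 1,
$$

so an intervention that changes u_1 by some amount Δ, holding the other terms fixed, changes V by exactly Δ.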
If you think A has no effect on B (in expectation), this is a claim that the effects through C are exactly negated by other effects of A (in expectation), but this is the kind of causal claim I've been saying I'm skeptical of, since it doesn't come with a (justified) effect size estimate (or, in this case, even a plausible argument for how this would happen).
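In symbols (a rough sketch of the point, with Δ denoting the change caused by A; the decomposition is just an illustration):

$$
E[\Delta B] = E[\Delta(\text{value to } C)] + E[\Delta(\text{value to everyone else})],
$$

so if the first term is positive, the claim that E[ΔB] = 0 requires the second term to equal its negative exactly, which is itself a precise quantitative claim in need of justification.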
This is pretty different from the skepticism I have about long-term effects: in that case, people claim that A affects a particular set of beneficiaries C that lies in the future, but they haven't justified an effect size of A on C in the first place; many things could happen before C, completely drowning out the effect. Since I'm not convinced C is affected in any particular way, I'm not convinced B is either, through this proposed causal chain.
With short-term effects, when there's good feedback, I actually have proxy observations telling me that A does in fact affect C in certain ways (although there's still generalization error and the reference class problem to worry about).
At the risk of repetition, I'd say that by the same reasoning, we could likewise add in our best estimates of the effect of saving a life on (just, say) total human welfare up to 2100.
Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of “sum u_i”, no?
Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we’re not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn’t seem to me like the point of dispute in this case.
I’m not sure where we’re failing to communicate exactly, but I’m a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.
I’m not trying to solve all complex cluelessness cases with my argument. I think population growth is plausibly a case with complex cluelessness, but this depends on your views.
If I were a total utilitarian with symmetric population ethics and didn't care much about nonhuman animals (neither of which is actually true for me), then I'd guess the negative externalities of a larger population would be strongly dominated by its benefits, mostly just the direct welfare of the extra people. I don't think the effects of climate change are that important here, and I'm not aware of other important negative externalities. So for people with such views, it's actually just not a case of complex cluelessness at all. The expectation that more people will live than just the one you saved probably increases the cost-effectiveness for someone with such views.
Similarly, I think Brian Tomasik has supported the Humane Slaughter Association basically because he doesn’t think the effects on animal population sizes and wild animals generally are significant compared to the benefits. It does good with little risk of harm.
So, compared to doing nothing (or some specific default action), some actions do look robustly good in expectation. Compared to some other options, there will be complex cluelessness, but I’m happy to choose something that looks best in expectation compared to doing nothing. I suppose this might privilege a specific default action to compare to in a nonconsequentialist way, although maybe there’s a way that gives similar recommendations without such privileging (I’m only thinking about this now):
You could model this as a partial order, with A strictly dominating B if the expected value of A is robustly greater than the expected value of B. At a minimum, you should never choose dominated actions. You could also require that the action you choose dominates at least one other action whenever there is any domination in the set of actions you're considering, and maybe this would handle a lot of complex cluelessness, if actions are decomposed enough into fairly atomic ones. For example, with complex cluelessness about saving lives compared to doing nothing, saving a life and punching myself in the face is dominated by saving a life and not punching myself in the face; but I can treat 'saving a life or not' and, at a separate time, 'punching myself in the face or not' as two separate decisions.
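If it helps to see that rule spelled out, here's a minimal sketch in Python. It's only an illustration under assumptions I'm adding: each action's expected value is given under a handful of "reasonable worldviews" (sets of credences), "robustly greater" is read as "greater under every worldview", and all of the names and numbers are made up.

```python
from typing import Dict

# Expected value of an action under each "reasonable worldview" (set of credences).
# Worldviews, actions, and numbers below are made up purely for illustration.
EVs = Dict[str, float]  # worldview name -> expected value

def dominates(a: EVs, b: EVs) -> bool:
    """A robustly beats B: strictly higher expected value under every worldview considered."""
    return all(a[w] > b[w] for w in a)

def admissible_actions(options: Dict[str, EVs]) -> Dict[str, EVs]:
    """Apply the two conditions from the text:
    (1) never choose a dominated action;
    (2) if any domination exists among the options, the chosen action must dominate something."""
    names = list(options)
    undominated = {
        n: options[n] for n in names
        if not any(dominates(options[m], options[n]) for m in names if m != n)
    }
    any_domination = any(
        dominates(options[m], options[n]) for m in names for n in names if m != n
    )
    if not any_domination:
        return undominated
    return {
        n: ev for n, ev in undominated.items()
        if any(dominates(ev, options[m]) for m in names if m != n)
    }

# Toy numbers: the indirect effects of saving a life are sign-uncertain across worldviews,
# but punching myself in the face is robustly (slightly) bad under every worldview.
options = {
    "do nothing":                 {"worldview 1": 0.0,  "worldview 2": 0.0},
    "save a life":                {"worldview 1": 50.0, "worldview 2": -5.0},
    "save a life + punch myself": {"worldview 1": 49.5, "worldview 2": -5.5},
}
print(admissible_actions(options))
# Keeps only "save a life": it dominates the punching option, the punching option is
# dominated, and "do nothing" dominates nothing, so condition (2) screens it out.
```

Representing each option by its expected value under each worldview, rather than by a single interval, is what lets the correlated part (punching myself in the face) come out as robustly bad even though the overall sign of saving a life is uncertain across worldviews.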