Really like this post!
I think one important crux here is differing theories of value.
My preferred theory is the (in my view, commonsensical) view that for something to be good or bad, it has to be good or bad for someone. (This is essentially Christine Korsgaard's argument; she calls it "tethered value".) That is, value is conditional on some valuer. So where a utilitarian might say that happiness/well-being/whatever is the good and that we therefore ought to maximise it, I say that the good is always dependent on some creature who values things. If all the creatures in the world valued totally different things than what they do in our dimension, then that would be the good instead.
(I should mention that, though I'm not very confident about moral philosophy, to me the most plausible view is a version of Kantianism. Maybe I give 70% weight to that, 20% to some form of utilitarianism and the rest to Schopenhauerian ethics/norms/intuitions. I can recommend being a Kantian effective altruist: it keeps you on your toes. Anyway, I'm closer to non-utilitarian Holden in the post, but with some differences.)
This view has two important implications:
It no longer makes sense to aggregate value. As Korsgaard puts it, "If Jack would get more pleasure from owning Jill's convertible than Jill does, the utilitarian thinks you should take the car away from Jill and give it to Jack. I don't think that makes things better for everyone. I think it makes it better for Jack and worse for Jill, and that's all. It doesn't make it better on the whole."
It no longer makes sense to talk about the value of potential people. Their non-existence is neither good nor bad because there is no one for it to be good or bad for. (Exception: They can still be valued by people who are alive. But let's ignore that.)
I haven't spent tons of time thinking about how this shakes out in longtermism, so there's quite a lot of uncertainty here. But here's roughly how I think this view would apply to your thought experiments:
Challenge 1A (climate change). If we decide to ignore climate change, then we wrong future people (because climate change is bad for them). If we don't ignore it, then we don't wrong those particular people (because they won't exist); nor do we wrong the future people who will exist, because we did our best to mitigate the problem. In a sense, we have a duty to future generations, whoever they may be.
Challenge 1B (worlds A/B/C). It doesn't make sense to compare different worlds in this way, because that would necessarily involve aggregation. Instead, we have to evaluate every action based on whether it wrongs (or benefits, or neither) people in the world it produces.
Challenge 2 (asymmetry). I think this objection doesn't apply here. The relevant question is still: does our action wrong the person who does come into existence? If we have good reason to believe that a new life will be full of suffering, and we choose to bring it into existence, plausibly we do wrong that person. If we have good reason to believe that the life will be great, and we choose to bring it into existence, obviously we don't wrong the person. (If we do not bring it into existence, we don't wrong anyone, because there's no one to wrong.)
Additional thoughts:
I want to mention a harder problem than the "should we have as many children as possible?" one you mention. It is that it seems ok to abort a fetus that would have a happy life, but it seems really wrong not to abort a fetus we know would have a terrible life full of pain and suffering. (This is apparently called the asymmetry problem in philosophy.) These intuitions make perfect sense if we take the view that value is tethered. But they don't really make sense in total utilitarianism.
Extinction would still be very bad, but it would be bad for the people who are alive when it happens, and for all the people in history whose work to improve things in the far future would be thwarted.
(I recognise that my view gets weirder when we bring probability into the picture (as we have to). That's something I want to think more about. I also totally recognise that my view is pretty complicated, and simplicity is one of the things I admire in utilitarianism.)
I think one important difference between me and non-utilitarian Holden is that I am not a consequentialist, but I kind of suspect that he is? Otherwise I would say that he is ceding too much ground to his evil twin. ;)
I share a number of your intuitions as a starting point, but this dialogue (and previous ones) is intended to pose challenges to those intuitions. To follow up on those:
On Challenge 1A (and as a more general point) - if we take action against climate change, that presumably means making some sort of sacrifice today for the sake of future generations. Does your position imply that this is "simply better for some and worse for others, and not better or worse on the whole"? Does that imply that it is not particularly good or bad to take action on climate change, such that we may as well do what's best for our own generation?
Also on Challenge 1A: under your model, who specifically are the people it is "better for" to take action on climate change, if we presume that the set of people that exists conditional on taking action is completely distinct from the set of people that exists conditional on not taking action (due to chaotic effects as discussed in the dialogue)?
On Challenge 1B, are you saying there is no answer to how to ethically choose between those two worlds, if one is simply presented with a choice?
On Challenge 2, does your position imply that it is wrong to bring someone into existence, because there is a risk that they will suffer greatly (which will mean they've been wronged), and no way to "offset" this potential wrong?
Non-utilitarian Holden has a lot of consequentialist intuitions that he ideally would like to accommodate, but is not all-in on consequentialism.
As you noticed, I limited the scope of the original comment to axiology (partly because moral theory is messier and more confusing to me), hence the handwaviness. Generally speaking, I trust my intuitions about axiology more than my intuitions about moral theory, because I feel like my intuition is more likely to "overfit" on more complicated and specific moral dilemmas than on more basic questions of value, or something in that vein.
Anyway, I'll just preface the rest of this comment with this: I'm not very confident about all this and at any rate not sure whether deontology is the most plausible view. (I know that there are consequentialists who take person-affecting views too, but I haven't really read much about it. It seems weird to me because the view of value as tethered seems to resist aggregation, and it seems like you need to aggregate to evaluate and compare different consequences?)
On Challenge 1A (and as a more general point) - if we take action against climate change, that presumably means making some sort of sacrifice today for the sake of future generations. Does your position imply that this is "simply better for some and worse for others, and not better or worse on the whole"? Does that imply that it is not particularly good or bad to take action on climate change, such that we may as well do what's best for our own generation?
Since in deontology we can't compare two consequences and say which one is better, the answer depends on the action used to get there. I guess what matters is whether the action that brings about world X involves us doing or neglecting (or neither) the duties we have towards people in world X (and people alive now). Whether world X is good/bad for the population of world X (or for people alive today) only matters to the extent that it tells us something about our duties to those people.
Example: Say we can do something about climate change either (1) by becoming benevolent dictators and implementing a carbon tax that way, or (2) by inventing a new travel simulation device, which reduces carbon emissions from flights but is also really addictive. (Assume the consequences of these two scenarios have equivalent expected utility, though I know the example is unfair since "dictatorship" sounds really bad; I just couldn't think of a better one off the top of my head.) Here, I think the Kantian should reject (1) and permit or even recommend (2), roughly speaking because (2) respects people's autonomy (though the "addictive" part may complicate this a bit) in a way that (1) does not.
Also on Challenge 1A: under your model, who specifically are the people it is "better for" to take action on climate change, if we presume that the set of people that exists conditional on taking action is completely distinct from the set of people that exists conditional on not taking action (due to chaotic effects as discussed in the dialogue)?
I don't mean to say that a certain action is better or worse for the people who will exist if we take it. I mean more that what is good or bad for those people matters when deciding what duties we have to them, and this matters when deciding whether the action we take wrongs them. But of course the action can't be said to be "better" for them, as they wouldn't have existed otherwise.
On Challenge 1B, are you saying there is no answer to how to ethically choose between those two worlds, if one is simply presented with a choice?
I am imagining this scenario as a choice between two actions, one involving waving a magic wand that brings world X into existence, and the other waving it to bring world Y into existence.
I guess deontology has less to say about this thought experiment than consequentialism does, given that the latter is concerned with the values of states of affairs and the former more with the values of actions. What this thought experiment does is almost eliminate the action, reducing it to a choice of value. (Of course choosing is still an action, but it seems qualitatively different to me in a way that I can't really explain.) Most actions we're faced with in practice probably aren't like that, so it seems like ambivalence in the face of pure value choices isn't too problematic?
I realise that I'm kind of dodging the question here, but in my defense you are, in a way, asking me to make a decision about consequences, and not actions. :)
On Challenge 2, does your position imply that it is wrong to bring someone into existence, because there is a risk that they will suffer greatly (which will mean they've been wronged), and no way to "offset" this potential wrong?
One of the weaknesses of deontology is its awkwardness with uncertainty. I think one ok approach is to put values on outcomes (by "outcome" I mean e.g. "violating duty X" or "carrying out duty Y", not a state of affairs as in consequentialism) and multiply them by probability. So I could put a value on "wronging someone by bringing them into a life of terrible suffering" and on "carrying out my duty to bring a flourishing person into the world" (if we have such a duty) and calculate expected value that way. Then whether or not the action is wrong would depend on the level of risk. But that is very tentative …
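To make that "values on duty-outcomes, weighted by probability" idea a bit more concrete, here is a minimal sketch in Python. Everything in it (the outcome labels, the probabilities, the assigned values) is a made-up assumption purely for illustration, not a claim about the right weighting.

```python
# Toy sketch of the tentative procedure: assign (hypothetical) moral values to
# duty-related outcomes, weight each by its probability, and sum.

outcomes = [
    # (description, probability, assigned moral value) -- all numbers illustrative
    ("wrong someone by bringing them into a life of terrible suffering", 0.05, -100.0),
    ("carry out a duty to bring a flourishing person into the world",    0.95,   10.0),
]

expected_value = sum(p * v for _, p, v in outcomes)
print(f"Expected moral value of the action: {expected_value:+.1f}")  # +4.5 with these numbers

# With these made-up numbers the action comes out positive, but raising the probability
# of the bad outcome above roughly 0.09 flips the sign, which is the sense in which
# wrongness would depend on the level of risk.
```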