Comments on Debating myself on whether “extra lives lived” are as good as “deaths prevented” will go here.
Dear Holden and all Karnofskyites,
Thanks for this great post and discussion—I really enjoyed the audio too.
I began to compose a comment here but then it rambled on and on, and dived into various weird rabbit holes, and then I realised I needed to do more reading.
I ended up writing a full-length essay over Easter and have just posted it on my new blog ‘Path findings’. I launched this a few weeks ago, inspired by reading your post ‘Learning by Writing’, and yay, it seems that really works!
Anyway, here’s the post, fresh off the slab:
Rabbits, robots and resurrection
Riffing with Karnofsky on the value of present and future lives, to celebrate the 50th anniversaries of ‘Watership Down’, ‘Limits to Growth’ and the Alcor foundation…
I’d be thrilled if you could take a few moments to read or at least skim it, and would welcome any and all feedback, however brutal!
Up front I confess not all the arguments are consistent, and the puns are consistently terrible, but I hope it makes some kind of sense. It will appeal particularly to people who like philosophy, ecology and rabbits, and features a lovely illustration by Lyndsey Green.
As a taster, here are some of the section headers (and most of the terrible puns):
Warren peace: a brief history of British rabbits
Too many bunnies? Malthus bites back
Abundant lives: valuing people now and in future
Staying alive: trolling the trolley problems
Of bunnies and bugs: who qualifies as people?
Back to life, back to reality… being human
You have been warned!
Best regards,
Patrick
Really like this post!
I think one important crux here is differing theories of value.
My preferred theory is the (in my view, commonsensical) view that for something to be good or bad, it has to be good or bad for someone. (This is essentially Christine Korsgaard’s argument; she calls it “tethered value”.) That is, value is conditional on some valuer. So where a utilitarian might say that happiness/well-being/whatever is the good and that we therefore ought to maximise it, I say that the good is always dependent on some creature who values things. If all the creatures in the world valued totally different things than they actually do, then those things would be the good instead.
(I should mention that, though I’m not very confident about moral philosophy, to me the most plausible view is a version of Kantianism. Maybe I give 70% weight to that, 20% to some form of utilitarianism and the rest to Schopenhauerian ethics/norms/intuitions. I can recommend being a Kantian effective altruist: it keeps you on your toes. Anyway, I’m closer to non-utilitarian Holden in the post, but with some differences.)
This view has two important implications:
It no longer makes sense to aggregate value. As Korsgaard puts it, “If Jack would get more pleasure from owning Jill’s convertible than Jill does, the utilitarian thinks you should take the car away from Jill and give it to Jack. I don’t think that makes things better for everyone. I think it makes it better for Jack and worse for Jill, and that’s all. It doesn’t make it better on the whole.”
It no longer makes sense to talk about the value of potential people. Their non-existence is neither good nor bad because there is no one for it to be good or bad for. (Exception: They can still be valued by people who are alive. But let’s ignore that.)
I haven’t spent tons of time thinking about how this shakes out in longtermism, so quite a lot of uncertainty here. But here’s roughly how I think this view would apply to your thought experiments:
Challenge 1A—climate change. If we decide to ignore climate change, then we wrong future people (because climate change is bad for them). If instead we tackle it, then we don’t wrong those particular people (because they will never exist); nor do we wrong the future people who will exist, because we did our best to mitigate the problem. In a sense, we have a duty to future generations, whoever they may be.
Challenge 1B—worlds A/B/C. It doesn’t make sense to compare different worlds in this way, because that would necessarily involve aggregation. Instead, we have to evaluate each action based on whether it wrongs (or benefits, or neither) the people in the world it produces.
Challenge 2—asymmetry. I don’t think this objection applies here. The relevant question is still: does our action wrong the person who does come into existence? If we have good reason to believe that a new life will be full of suffering, and we choose to bring it into existence, plausibly we do wrong that person. If we have good reason to believe that the life will be great, and we choose to bring it into existence, obviously we don’t wrong the person. (If we do not bring it into existence, we don’t wrong anyone, because there’s no one to wrong.)
Additional thoughts:
I want to mention a harder problem than the “should we have as many children as possible?” one you mention: it seems OK to abort a fetus that would have a happy life, but it seems really wrong not to abort a fetus we know would have a terrible life full of pain and suffering. (This is apparently called the asymmetry problem in philosophy.) These intuitions make perfect sense if we take the view that value is tethered, but they don’t really make sense in total utilitarianism.
Extinction would still be very bad, but it would be bad for the people who are alive when it happens, and for all the people in history whose work to improve things in the far future would be thwarted.
(I recognise that my view gets weirder when we bring probability into the picture (as we have to). That’s something I want to think more about. I also totally recognise that my view is pretty complicated, and simplicity is one of the things I admire in utilitarianism.)
I think one important difference between me and non-utilitarian Holden is that I am not a consequentialist, but I kind of suspect that he is? Otherwise I would say that he is ceding too much ground to his evil twin. ;)
I share a number of your intuitions as a starting point, but this dialogue (and previous ones) is intended to pose challenges to those intuitions. To follow up on those:
On Challenge 1A (and as a more general point) - if we take action against climate change, that presumably means making some sort of sacrifice today for the sake of future generations. Does your position imply that this is “simply better for some and worse for others, and not better or worse on the whole?” Does that imply that it is not particularly good or bad to take action on climate change, such that we may as well do what’s best for our own generation?
Also on Challenge 1A—under your model, who specifically are the people it is “better for” to take action on climate change, if we presume that the set of people that exists conditional on taking action is completely distinct from the set of people that exists conditional on not taking action (due to chaotic effects as discussed in the dialogue)?
On Challenge 1B, are you saying there is no answer to how to ethically choose between those two worlds, if one is simply presented with a choice?
On Challenge 2, does your position imply that it is wrong to bring someone into existence, because there is a risk that they will suffer greatly (which will mean they’ve been wronged), and no way to “offset” this potential wrong?
Non-utilitarian Holden has a lot of consequentialist intuitions that he ideally would like to accommodate, but is not all-in on consequentialism.
As you noticed, I limited the scope of the original comment to axiology (partly because moral theory is messier and more confusing to me), hence the handwaviness. Generally speaking, I trust my intuitions about axiology more than my intuitions about moral theory, because I feel like my intuition is more likely to “overfit” on more complicated and specific moral dilemmas than on more basic questions of value, or something in that vein.
Anyway, I’ll just preface the rest of this comment with this: I’m not very confident about all this, and at any rate I’m not sure whether deontology is the most plausible view. (I know that there are consequentialists who take person-affecting views too, but I haven’t really read much about them. The combination seems weird to me because tethered value seems to resist aggregation, and it seems like you need to aggregate to evaluate and compare different consequences?)
On the first Challenge 1A question (and the more general point about sacrifice): since in deontology we can’t compare two consequences and say which one is better, the answer depends on the action used to get there. I guess what matters is whether the action that brings about world X involves us doing or neglecting (or neither) the duties we have towards people in world X (and towards people alive now). Whether world X is good/bad for the population of world X (or for people alive today) only matters to the extent that it tells us something about our duties to those people.
Example: Say we can do something about climate change either (1) by becoming benevolent dictators and implementing a carbon tax that way, or (2) by inventing a new travel simulation device, which reduces carbon emissions from flights but is also really addictive. (Assume the consequences of these two scenarios have equivalent expected utility, though I know the example is unfair since “dictatorship” sounds really bad—I just couldn’t think of a better one off the top of my head.) Here, I think the Kantian should reject (1) and permit or even recommend (2), roughly speaking because (2) respects people’s autonomy (though the “addictive” part may complicate this a bit) in a way that (1) does not.
On the second Challenge 1A question (who exactly climate action is “better for”): I don’t mean to say that a certain action is better or worse for the people who will exist if we take it. I mean more that what is good or bad for those people matters when deciding what duties we have to them, and this matters when deciding whether the action we take wrongs them. But of course the action can’t be said to be “better” for them, as they wouldn’t have existed otherwise.
On Challenge 1B: I am imagining this scenario as a choice between two actions, one involving waving a magic wand that brings world X into existence, and the other waving it to bring world Y into existence.
I guess deontology has less to say about this thought experiment than consequentialism does, given that the latter is concerned with the values of states of affairs and the former more with the values of actions. What this thought experiment does is almost eliminate the action, reducing it to a choice of value. (Of course choosing is still an action, but it seems qualitatively different to me in a way that I can’t really explain.) Most actions we’re faced with in practice probably aren’t like that, so it seems like ambivalence in the face of pure value choices isn’t too problematic?
I realise that I’m kind of dodging the question here, but in my defense you are, in a way, asking me to make a decision about consequences, and not actions. :)
On Challenge 2: one of the weaknesses of deontology is its awkwardness with uncertainty. I think one OK approach is to put values on outcomes (by “outcome” I mean e.g. “violating duty X” or “carrying out duty Y”, not a state of affairs as in consequentialism) and multiply them by probabilities. So I could put a value on “wronging someone by bringing them into a life of terrible suffering” and on “carrying out my duty to bring a flourishing person into the world” (if we have such a duty) and calculate expected value that way. Then whether or not the action is wrong would depend on the level of risk. But that is very tentative …
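To make that arithmetic concrete, here is a rough sketch in Python (purely illustrative: the numbers, the duty labels, and the break-even rule are all invented for the example, not taken from the dialogue):

```python
# Expected value over duty-outcomes, not over states of affairs.
# All values and probabilities below are made up for illustration.

WRONG_BY_CREATING_A_SUFFERING_LIFE = -100.0      # hypothetical disvalue of violating the duty not to harm
FULFIL_DUTY_TO_CREATE_A_FLOURISHING_LIFE = 10.0  # hypothetical value, if such a duty exists at all

def expected_duty_value(p_terrible_life: float) -> float:
    """Expected moral value of bringing a person into existence,
    given the probability that their life is full of suffering."""
    p_flourishing = 1.0 - p_terrible_life
    return (p_terrible_life * WRONG_BY_CREATING_A_SUFFERING_LIFE
            + p_flourishing * FULFIL_DUTY_TO_CREATE_A_FLOURISHING_LIFE)

# Whether the action counts as wrong then depends on the level of risk:
for risk in (0.01, 0.05, 0.10, 0.50):
    ev = expected_duty_value(risk)
    verdict = "permissible" if ev >= 0 else "wrong"
    print(f"risk of a terrible life = {risk:.0%}: expected duty value = {ev:+.1f} -> {verdict}")
```

On these made-up numbers the break-even risk is about 9%; the point is only that the verdict can flip with the level of risk, not that any particular threshold is the right one.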
Great dialogue! As an additional ‘further reading’ suggestion, I just want to plug the ‘Population Ethics’ chapter at utilitarianism.net. It summarizes some less well-known possibilities (such as “value blur” in the context of a critical range view) that might avoid some of the problems of the (blur-free) total view.
FYI, the audio on the recording is slightly weird. :)
Thanks for this post! I found the inner dialogue very relatable and it was helpful in thinking about my own uncertainties.
The link to Chapter 2 of On the Overwhelming Importance of Shaping the Far Future at the end links to a non-public Google Drive file.
The link works for me in incognito mode (it is a Google Drive file).
Huh, maybe someone else wants to weigh in? When I view it in an incognito window, it prompts me to log in. When I view it logged in, it says “You need access. Ask for access, or switch to an account with access.” I’m not sure if you are the owner, but if so, you likely just need to click on “Share”, then “Restricted” in the Get Link dialog (it doesn’t really look like you can click there, but you can), then change the setting to “Anyone with the link”.
Hm. I contacted Nick and replaced it with another link—does that work?
Yup, works for me now.
I think the title of this post doesn’t quite match the dialogue. Most of the dialogue is about whether additional good lives are at least somewhat good. But that’s different from whether each additional good life is morally equivalent to a prevented death. The former seems more plausible than the latter, to me.
Separating the two will lead to some situations where a life is bad to create but also good to save, once started. That seems more like a feature than a bug. If you ask people in surveys, my impression is that some small fraction of people say that they’d prefer to not have been born and that some larger fraction of people say that they’d not want to relive their life again — without this necessarily implying that they currently want to die.
I think that’s a fair point. These positions just pretty much end up in the same place when it comes to valuing existential risk.