By what standard are you judging it to be crazy? I don’t think the view that there are no good states is crazy, and I’m pretty sympathetic to it myself. The view that it’s good to create beings for their own sake is totally unintuitive to me (although I wouldn’t call it or really any other view crazy).
How I would personally deal with your hypothetical under the kind of person-affecting views to which I’m sympathetic is this:
We don’t have reason to press the first button if we’d expect to later undo the welfare improvement of the original person with the second button. This sequence of pressing both isn’t better on person-affecting intuitions than doing nothing. When you reason about what to do, you should, in general, use backwards induction and consider what options you’ll have later and what you’d do later.
If you don’t use backwards induction, you will tend to do worse than otherwise and can be exploited, e.g. money pumped. This is true even for total utilitarians.
I address that in the article. First of all, so long as we buy the transitivity of the better-than relation, that won’t work. Second, it’s highly counterintuitive that the addition of extra good options makes an action worse.
//First of all, so long as we buy the transitivity of the better-than relation, that won’t work.//
This isn’t true. I can just deny the independence of irrelevant alternatives instead.
//Second, it’s highly counterintuitive that the addition of extra good options makes an action worse.//
It’s highly counterintuitive to you. It’s intuitive to me because I’m sympathetic to the reasons that would justify it in some cases, and I outlined how this would work on my intuitions. The kinds of arguments you give probably aren’t very persuasive to people with strong enough person-affecting intuitions, because those intuitions justify to them what you find counterintuitive.
I find it crazy and I think nearly all people do.
This doesn’t seem like a reason that should really change anyone’s mind about the issue. Or, at least not the mind of any moral antirealist like me.
I suppose a moral realist could be persuaded via epistemic modesty, but if you are epistemically modest, then this will undermine your own personal views that aren’t (near-)consensus (among the informed). For example, you should give more weight to nonconsequentialist views.
//This isn’t true. I can just deny the independence of irrelevant alternatives instead.//
That doesn’t help. The world where only button 1 is pressed is better than the world where neither is pressed, and the world where both are pressed is better than the world where only button 1 is pressed, so, by transitivity, an extra happy person is good.
You can always deny any intuition, but I’d hope this would convince people without fairly extreme views.
Your argument is implicitly assuming IIA.

On a person-affecting view violating IIA but not transitivity, we could have the following:
button 1 >₁ neither, when exactly these two options are available
both buttons >₂ button 1, when exactly these two options are available
both buttons ≃₃ neither, when exactly these two options are available
button 1 >₄ both buttons ≃₄ neither, when exactly these three options are available
There’s no issue for transitivity, because the 4 cases involve 4 distinct relations (distinguished by their subscripts), each of which is transitive. The 4 relations don’t have to agree.
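To make this concrete, here is a minimal Python sketch (the option labels and the tier representation are my own) that encodes each menu-relative relation as a ranking of indifference tiers, checks that each relation is transitive on its own, and confirms that IIA fails across menus:

```python
from itertools import product

# Each menu (set of available options) gets its own betterness relation,
# written as tiers from best to worst; options sharing a tier are
# equally good. These encode the four relations above.
rankings = {
    frozenset({"button 1", "neither"}): [["button 1"], ["neither"]],
    frozenset({"both", "button 1"}): [["both"], ["button 1"]],
    frozenset({"both", "neither"}): [["both", "neither"]],
    frozenset({"button 1", "both", "neither"}):
        [["button 1"], ["both", "neither"]],
}

def at_least_as_good(menu, x, y):
    """x is at least as good as y, relative to the menu's own relation."""
    rank = {o: i for i, tier in enumerate(rankings[menu]) for o in tier}
    return rank[x] <= rank[y]

def is_transitive(menu):
    opts = list(menu)
    return all(
        at_least_as_good(menu, x, z)
        for x, y, z in product(opts, repeat=3)
        if at_least_as_good(menu, x, y) and at_least_as_good(menu, y, z)
    )

# Each menu-relative relation is transitive on its own...
assert all(is_transitive(menu) for menu in rankings)

# ...but IIA fails: adding "neither" to {both, button 1} reverses
# the comparison between "both" and "button 1".
pair = frozenset({"both", "button 1"})
triple = frozenset({"button 1", "both", "neither"})
assert not at_least_as_good(pair, "button 1", "both")
assert at_least_as_good(triple, "button 1", "both")
```

Because each relation is derived from a single numeric rank per option, transitivity within a menu holds automatically; only the cross-menu comparisons disagree.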
I was assuming both buttons are available. Specifically, suppose Bob exists:

1. Bob getting an extra 1 util and Todd being created with a util is better than that not happening.
2. Todd being created with 3 utils is better than the scenario in 1.

I’m guessing there isn’t much more we can gain by discussing further, and we’ll have to agree to disagree. I’ll just report my own intuitions here, largely reframing things I’ve already said in this thread.
It’s useful to separate the outcomes from the actions here. Let’s label the outcomes:
Nothing: the result of pressing neither button.
A: Bob getting an extra 1 util and Todd being created with a util, the result of only button 1 being pressed.
B: Todd being created with 3 utils, the result of both buttons being pressed.
On my person-affecting intuitions, I’d rank the outcomes as follows (using a different betterness relation for each set of outcomes, violating the independence of irrelevant alternatives but not transitivity):
When only Nothing and A are available, A > Nothing.
When only A and B are available, B > A.
When only Nothing and B are available, Nothing ~ B.
When all three outcomes are available, Nothing ~ B. I’m undecided on how to compare A to Nothing and B, other than that its comparison with Nothing and its comparison with B are the same. I have some sympathy for different ways of comparing A to the other two.
Now, I can say how I’d act, given the above.
If I already pressed button 1 and Nothing is no longer attainable, then we’re in case 2, so pressing button 2 (and hence pressing both buttons) is better than only pressing button 1, because it means choosing B over A.
If, starting with all three options still available, I expect with certainty that if I press button 1, I will then press button 2 (say, because I know I will follow the rankings in the previous paragraph at that point), then the outcome of pressing button 1 is B, by backward induction. I would then be indifferent between pressing button 1 and getting outcome B, and not pressing it and getting Nothing, because B ~ Nothing.
If, starting with all three options still available, I think for whatever reason that there’s a chance I won’t press button 2 after pressing button 1, then:
If and because A > Nothing (and because B ~ Nothing) at this point, pressing button 1 would be better than not pressing either button.
If and because A < Nothing (and because B ~ Nothing) at this point, pressing button 1 would be worse than not pressing either button.
If and because A ~ Nothing (and because B ~ Nothing) at this point, I’d be indifferent.
Similarly if my credence that button 2 will actually be available after pressing button 1 is between 0 and 100%.
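The certainty case above can be sketched as a two-stage backward-induction pass (a toy Python sketch; the labels and tier representation are my own, and I assume B ~ Nothing as in the rankings above):

```python
def best_of(menu, tiers):
    """Return the top-ranked available options; tiers go best to worst,
    with options in the same tier treated as equally good."""
    for tier in tiers:
        top = sorted(o for o in tier if o in menu)
        if top:
            return top
    raise ValueError("empty menu")

# Stage 2: once button 1 is pressed, Nothing is gone; between A and B
# alone, B > A, so button 2 gets pressed and the final outcome is B.
assert best_of({"A", "B"}, [["B"], ["A"]]) == ["B"]

# Stage 1 (backward induction): pressing button 1 therefore leads to B,
# so the real choice is between B and Nothing, and B ~ Nothing -- both
# are top-ranked, i.e. I'm indifferent about pressing button 1 at all.
assert best_of({"Nothing", "B"}, [["B", "Nothing"]]) == ["B", "Nothing"]
```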
My intuitions are guided mostly by the (actualist) object interpretation and participant model of Rabinowicz and Österberg (1996)[1] and backward induction.
To the satisfaction and the object interpretations of the preference-based conception of value correspond, we believe, two different ways of viewing utilitarianism: the spectator and the participant models.
According to the former, the utilitarian attitude is embodied in an impartial benevolent spectator, who evaluates the situation objectively and from the ‘outside’. An ordinary person can approximate this attitude by detaching himself from his personal engagement in the situation. (...)
The participant model, on the other hand, puts forward as a utilitarian ideal an attitude of emotional participation in other people’s projects: the situation is to be viewed from ‘within’, not just from my own perspective, but also from the others’ points of view. The participant model assumes that, instead of distancing myself from my particular position in the world, I identify with other subjects: what it recommends is not a detached objectivity but a universalized subjectivity.
And
the object interpretation presupposes a subjectivist (or ‘projectivist’) theory of value. Values are not part of the mind-independent world but something that we project upon the world, or — more precisely — upon the whole set of possible worlds. In this sense, our intrinsic value claims, while not world-bound in their range of application, constitute an expression of a particular world-bound perspective: the perspective determined by the preferences we actually have.