As you noticed, I limited the scope of the original comment to axiology (partly because moral theory is messier and more confusing to me), hence the handwaviness. Generally speaking, I trust my intuitions about axiology more than my intuitions about moral theory, because I feel like my intuition is more likely to "overfit" on more complicated and specific moral dilemmas than on more basic questions of value, or something in that vein.
Anyway, I'll just preface the rest of this comment with this: I'm not very confident about all this and at any rate not sure whether deontology is the most plausible view. (I know that there are consequentialists who take person-affecting views too, but I haven't really read much about it. It seems weird to me because the view of value as tethered seems to resist aggregation, and it seems like you need to aggregate to evaluate and compare different consequences?)
On Challenge 1A (and as a more general point) - if we take action against climate change, that presumably means making some sort of sacrifice today for the sake of future generations. Does your position imply that this is "simply better for some and worse for others, and not better or worse on the whole"? Does that imply that it is not particularly good or bad to take action on climate change, such that we may as well do what's best for our own generation?
Since in deontology we can't compare two consequences and say which one is better, the answer depends on the action used to get there. I guess what matters is whether the action that brings about world X involves us doing or neglecting (or neither) the duties we have towards people in world X (and people alive now). Whether world X is good/bad for the population of world X (or for people alive today) only matters to the extent that it tells us something about our duties to those people.
Example: Say we can do something about climate change either (1) by becoming benevolent dictators and implementing a carbon tax that way, or (2) by inventing a new travel simulation device, which reduces carbon emissions from flights but is also really addictive. (Assume the consequences of these two scenarios have equivalent expected utility, though I know the example is unfair since "dictatorship" sounds really bad - I just couldn't think of a better one off the top of my head.) Here, I think the Kantian should reject (1) and permit or even recommend (2), roughly speaking because (2) respects people's autonomy (though the "addictive" part may complicate this a bit) in a way that (1) does not.
Also on Challenge 1A - under your model, who specifically are the people it is "better for" to take action on climate change, if we presume that the set of people that exists conditional on taking action is completely distinct from the set of people that exists conditional on not taking action (due to chaotic effects as discussed in the dialogue)?
I don't mean to say that a certain action is better or worse for the people that will exist if we take it. I mean more that what is good or bad for those people matters when deciding what duties we have to them, which in turn matters when deciding whether the action wrongs them. But of course the action can't be said to be "better" for them, as they wouldn't have existed otherwise.
On Challenge 1B, are you saying there is no answer to how to ethically choose between those two worlds, if one is simply presented with a choice?
I am imagining this scenario as a choice between two actions: one waving a magic wand to bring world X into existence, the other waving it to bring world Y into existence.
I guess deontology has less to say about this thought experiment than consequentialism does, given that the latter is concerned with the value of states of affairs and the former more with the value of actions. What this thought experiment does is almost eliminate the action, reducing it to a pure choice of value. (Of course choosing is still an action, but it seems qualitatively different to me in a way that I can't really explain.) Most actions we're faced with in practice probably aren't like that, so it seems like ambivalence in the face of pure value choices isn't too problematic?
I realise that I'm kind of dodging the question here, but in my defense you are, in a way, asking me to make a decision about consequences, and not actions. :)
On Challenge 2, does your position imply that it is wrong to bring someone into existence, because there is a risk that they will suffer greatly (which will mean they've been wronged), and no way to "offset" this potential wrong?
One of the weaknesses in deontology is its awkwardness with uncertainty. I think one OK approach is to put values on outcomes (by "outcome" I mean e.g. "violating duty X" or "carrying out duty Y", not a state of affairs as in consequentialism) and multiply by probability. So I could put a value on "wronging someone by bringing them into a life of terrible suffering" and on "carrying out my duty to bring a flourishing person into the world" (if we have such a duty) and calculate expected value that way. Then whether or not the action is wrong would depend on the level of risk. But that is very tentative ...
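To illustrate the mechanics with made-up numbers (these are purely for the sake of the example, not values I would actually defend): suppose "wronging someone by bringing them into a life of terrible suffering" is worth -100, "carrying out my duty to bring a flourishing person into the world" is worth +10, and the probability of the terrible-suffering outcome is p. Then the expected value of having the child is

p × (-100) + (1 - p) × 10 = 10 - 110p,

which is non-negative only when p ≤ 10/110, i.e. a risk of roughly 9% or less. On this approach the permissibility threshold just falls out of whatever duty-values and probabilities you plug in, which is both its appeal and, I suspect, its weakness.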