A couple of thoughts, probably none of them very helpful, I’m afraid!
One effort to move the conversation on is to think about how to act under moral uncertainty, which is what Will MacAskill did his doctorate on. As far as I understand it, you try to work out the Expected Moral Value (EMV) of an action by multiplying the credence you attach to an ethical view by how good that view says the outcome is. Long story short, according to MacAskill, we all end up doing what total utilitarianism says, because Total Util says there’s so much value to keeping the species alive.
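As a rough formula (my gloss, not necessarily MacAskill’s exact formalism): if $C(T_i)$ is the credence you put in moral theory $T_i$ and $V_i(a)$ is how good $T_i$ says action $a$ is, then

$$\mathrm{EMV}(a) = \sum_i C(T_i)\, V_i(a)$$

and you pick whichever action has the highest EMV.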
Second, I think you’ll find people have been trying to find convergence on moral views since the dawn of moral argument: people try to decide what to do and disagree because they have different values. Given that we’ve been trying to do this for the whole of human history, I’d also doubt there is a tractable solution. Persuading each other doesn’t seem to work. Maybe moral uncertainty theorising will help, but that remains to be seen.
Third, saying “it would be good if we agreed what was good” is rather question-begging. Would it be good if we agreed what was good? Well, only if you think agreement is good. But why would anyone think agreement has intrinsic value, rather than, say, happiness? Also, what happens if I think what you want to do is stupid: why should I agree to that? This happens all the time in politics: it would be nice if people agreed, but they often don’t, because they think there are things that are more important than agreement!
As far as I understand it, you try to work out the Expected Moral Value (EMV) of an action by multiplying the credence you attach to an ethical view by how good that view says the outcome is.
Small correction: he talks about choiceworthiness, and he seems to handle it a little differently from moral value. For one thing, not all moral systems have a clear quantification or cardinality of moral value, which would make it impossible to do this calculation directly. For another, he seems to consider all-things-considered choiceworthiness as part of the decision-making process. So under some moral theories, your personal desires, legal obligations, or pragmatic interests may not provide any ‘moral value’, but they can still be a source of choiceworthiness.
Long story short, according to MacAskill, we all end up doing what total utilitarianism says, because Total Util says there’s so much value to keeping the species alive.
No no no no no, he never says this. IIRC, he does say that keeping the species alive is better, but just because almost every common moral theory says so. There are other theories besides utilitarianism which also assign huge weight to planetary/species-wide concerns.
MacAskill also says that there are cases where demanding views like utilitarianism dominate, such as eating meat vs. not eating meat: eating meat isn’t particularly valuable even if you happen to be right that it’s permissible. But not all cases will turn out this way.
(Unless you mean to refer to the specific scenario in the OP, in which case moral uncertainty seems likely to tell us to keep people alive, though if you’re really confident in negative utilitarianism, or just pessimistic about the balance of happiness and suffering, then maybe it wouldn’t.)
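To put toy numbers on that meat-eating dominance point, here is a minimal sketch of an expected-choiceworthiness calculation. The credences and scores are made up purely for illustration, not MacAskill’s figures:

```python
# Toy expected-choiceworthiness calculation with made-up numbers,
# just to show the shape of the dominance argument.

credences = {
    "demanding view (eating meat is seriously wrong)": 0.3,
    "permissive view (eating meat is fine)": 0.7,
}

# Each theory's choiceworthiness score for each action (illustrative only).
choiceworthiness = {
    "eat meat": {
        "demanding view (eating meat is seriously wrong)": -100,
        "permissive view (eating meat is fine)": 1,
    },
    "don't eat meat": {
        "demanding view (eating meat is seriously wrong)": 0,
        "permissive view (eating meat is fine)": 0,
    },
}

def expected_choiceworthiness(action):
    # Weight each theory's score by the credence placed in that theory.
    return sum(credences[theory] * score
               for theory, score in choiceworthiness[action].items())

for action in choiceworthiness:
    print(action, expected_choiceworthiness(action))

# "eat meat" comes out around -29.3 and "don't eat meat" at 0.0, so not
# eating meat wins even though the permissive view gets most of the credence,
# because the possible downside under the demanding view dwarfs the upside
# under the permissive one.
```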
Thanks Michael, some good points. I had forgotten about EMV, which is certainly applicable here. The trick would be convincing people to think in that way!
Your third point is well taken—I would hope that we converge on the best moral theory. Converging on the worst would be pretty bad.