As far as I understand it, you try to work out the Expected Moral Value (EMV) of an action by multiplying the credence you attach to each ethical view by how good that view says the outcome is, then summing those products across the views.
Small correction: he talks about choiceworthiness, and he seems to handle it a little differently from moral value. For one thing, not all moral systems have a clear quantification or cardinality of moral value, which would make it impossible to do this calculation directly. For another, he seems to consider all-things-considered choiceworthiness as part of the decision-making process. So under some moral theories, maybe your personal desires, legal obligations, or pragmatic interests don’t provide any ‘moral value’, but they can still be a source of choiceworthiness.
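To make that contrast concrete, here is a minimal sketch of what the “multiply credence by value and sum” procedure looks like in the cases where it does apply, i.e. when every theory under consideration gives cardinal, intertheoretically comparable choiceworthiness scores. The theory names, credences, and numbers below are all made up for illustration, not taken from MacAskill:

```python
# Minimal sketch of maximizing expected choiceworthiness, assuming every
# theory assigns cardinal, intertheoretically comparable scores.
# All names and numbers are hypothetical.

# Credence in each moral theory (should sum to 1).
credences = {
    "total_utilitarianism": 0.5,
    "deontology": 0.3,
    "virtue_ethics": 0.2,
}

# Choiceworthiness each theory assigns to each action (made-up values).
choiceworthiness = {
    "total_utilitarianism": {"act_A": 10, "act_B": 2},
    "deontology":           {"act_A": -5, "act_B": 4},
    "virtue_ethics":        {"act_A": 1,  "act_B": 3},
}

def expected_choiceworthiness(action):
    """Credence-weighted sum of choiceworthiness across theories."""
    return sum(credences[t] * choiceworthiness[t][action] for t in credences)

for act in ("act_A", "act_B"):
    print(act, expected_choiceworthiness(act))
# You then pick whichever action has the highest expected choiceworthiness.
```

The point of the correction above is that this calculation only goes through when the theories involved actually supply numbers like these; theories without cardinal value (or without a way to compare their units to other theories’ units) have to be handled some other way.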
Long story short, according to MacAskill, we all end up doing what total utilitarianism says, because total utilitarianism says there’s so much value in keeping the species alive.
No no no no no, he never says this. IIRC, he does say that keeping the species alive is better, but only because almost every common moral theory says so. There are other theories besides utilitarianism that also assign huge weight to planetary/species-wide concerns.
MacAskill also says that there are cases where demanding views like utilitarianism dominate, such as the choice between eating meat and not eating meat: eating meat isn’t particularly valuable even if the view that permits it happens to be right. But not all cases will turn out this way.
(Unless you mean to refer to the specific scenario in the OP, in which case moral uncertainty seems likely to tell us to keep people alive; but if you’re really confident in negative utilitarianism (NU), or just pessimistic about the balance of happiness and suffering, then maybe it wouldn’t.)