This is the kind of scenario where something that would typically be welfare maximising (and right according to commonsense morality) is, by stipulation, not welfare maximising, and the thing that is welfare maximising is wrong according to commonsense morality. That is: typically, the people who are greatly in need of pain medication are the people who would benefit most from it; typically, you shouldn’t give strong pain medication to people with no medical need of it; typically, there are flow-through effects to consider, like addiction, upholding norms, social relations and moral character (because the world isn’t ending); and typically, you don’t have futuristic super-computers giving you extremely high confidence that the typically-wrong thing is actually welfare maximising.
In this kind of scenario, I think it makes sense that one would intuitively judge it right to do the typically-right-but-by-stipulation-not-welfare-maximising thing, but also that one has reasonable (though not conclusive) grounds for biting the bullet and saying that you should do the highly unusual welfare-maximising thing.
It’s also not clear that one couldn’t, in principle, account for the choice to give the medicine to Alice as a value monist, e.g. if you only care about weighted welfare (weighting more negative states more heavily).
I agree with your general thrust. The thought experiment is a little contrived, but it is deliberately designed to make both options look somewhat plausible. A value monist negative utilitarian could also give the medicine to Alice, so it’s not even clear which option such a person would go for.
However, what I really wonder is whether “welfare” is the only thing we care about at the end of times. Or is there maybe also the question of how we got there? How we handled ourselves in difficult situations? What values we embodied when we were alive? Are we not at risk of losing our humanity if we subordinate all of our behavior to a “principled” but “acontextual” value monist algorithm (e.g., always maximize “expected welfare”)? These are the kinds of questions I want the thought experiment to trigger reflection on.