Thank you for the clarification. I think I understand your position now.
This is not something that bothers me at all. If the thought experiment ever becomes relevant, I’ll worry about how to patch up the theory then. In the meantime, I’ll carry on trying to live by my moral theory.
Why doesn’t it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn’t this lower your confidence in the theory? After all, our justification for believing a moral theory seems to turn on (1) the theory’s simplicity and (2) the degree to which it fits our intuitions. When you learn that your theory has counterintuitive implications, you must either restrict the scope of the theory, making it more complex, or concede that it doesn’t fit the data as well as you previously thought. In either case, it seems you should lower your credence in the theory.
I think my disagreement is mostly with (1) -- I expect that a correct moral theory would be horrendously complicated. I certainly can’t reduce my own moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus the greater good, say, or the many legal battles where it’s not clear what the moral decision would be), and I don’t know of any simple rules for deciding which principles take precedence in which situations. Certainly there are many realistic problems that I think could go either way.
But I agree that, all other things equal, simplicity is a good feature to have, and enough simplicity might sometimes outweigh intuition. Perhaps, once future-me carefully considers enormous aggregative ethics problems, I will have an insight that allows a drastically simplified moral theory. The new theory would resolve the repugnant conclusion (whatever I think ‘repugnant’ means in this future!). Applied to present-me’s day-to-day problems, such a simplified theory would likely give slightly different answers from the ones I accept today: maybe the uncertainty I have today about certain court cases would be resolved by one of the principles that future-me thinks of.
But I don’t think the answers would change much. My current moral theory gives basically appropriate answers (sometimes uncertain ones) to my problems today. There’s wiggle-room in places, but there are also some really solid intuitions that I don’t expect future-me to sacrifice. Rescuing the drowning child (at least while I live in a world without the ability to create large numbers of sentient beings!) would be one of these.