One response to these objections to rounding down is that similar objections could be raised against treating consciousness, pleasure, unpleasantness and desires as sharp if it turns out to be vague whether some systems are capable of them. We wouldn’t stop caring about consciousness, pleasure, unpleasantness or desires just because they turn out to be vague.
And one potential “fix” to avoid these objections is to put a probability distribution over the threshold and then use a (non-fanatical) method for handling normative uncertainty, such as a moral parliament, over the resulting views. Maybe the threshold is distributed uniformly over an interval [a, b], with 0 ≤ a < b ≤ 1.
Now, you might say that this is just a probability distribution over views to which the objections apply, so we can still object to each view separately as before. However, one could instead consider the single normative view that is (extensionally) equivalent to a moral parliament over the views with different thresholds. It’s one view. If we take the interval to be [0, 1], then the view doesn’t ignore important outcomes, doesn’t neglect decisions below any threshold, and its normative laws don’t change sharply at some arbitrary point.
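To make the equivalence concrete, here is a minimal sketch under some assumptions that aren’t in the argument above: that the rounding-down rule gives a being with probability p of moral patienthood weight p when p meets the threshold and weight 0 otherwise, and that the parliament can be approximated by a plain average over the threshold views (a real moral parliament would involve bargaining, but the continuity point is the same). All function names are hypothetical.

```python
import numpy as np

def rounded_down_weight(p, threshold):
    """Weight under a single sharp-threshold view: count the being at its
    probability p of moral patienthood iff p meets the threshold, else
    round down to 0. (This particular rounding-down rule is an assumption
    made here just for illustration.)"""
    return p if p >= threshold else 0.0

def aggregated_weight(p, a=0.0, b=1.0, n_views=1001):
    """Weight under the single view extensionally equivalent to aggregating
    the sharp-threshold views, with the threshold uniform over [a, b].
    The parliament is approximated as a plain average over the views,
    ignoring bargaining, since only continuity matters here."""
    thresholds = np.linspace(a, b, n_views)
    return float(np.mean([rounded_down_weight(p, t) for t in thresholds]))

# A single sharp-threshold view jumps discontinuously at its threshold:
print(rounded_down_weight(0.299, 0.3), rounded_down_weight(0.301, 0.3))  # 0.0 vs 0.301

# The aggregated view changes smoothly in p (roughly p**2 when [a, b] = [0, 1]):
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(p, round(aggregated_weight(p), 3))
```

With the threshold uniform over [0, 1], the aggregated weight comes out to roughly p², which increases smoothly in p, whereas any single sharp-threshold view jumps from 0 to its full weight at one arbitrary point.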
The specific choice of distribution for the threshold may still seem arbitrary. But this seems like a much weaker objection, because this kind of arbitrariness is much harder to avoid in general: precise cardinal tradeoffs between pleasures, between displeasures, between desires, and between different kinds of interests could be similarly arbitrary.
This view may seem somewhat ad hoc. However, I do think treating vagueness/imprecision like normative uncertainty is independently plausible. At any rate, if some of the things we care about turn out to be vague but we want to keep caring about them anyway, we’ll need some way to deal with vagueness, and whatever that is could be applied here. Treating vagueness like normative uncertainty is just one possibility, which I happen to like.