Ya, bracketing on its own wouldn't tell you to ignore a potential group of moral patients just because their probability of sentience is very small; the sheer numbers could compensate. It's more that, for bracketing to set them aside, we'd have to be clueless, conditional on their sentience, about whether they're made better or worse off. And we may often be in this position in practice.
I think you could still want some kind of difference-making view or bounded utility function combined with bracketing, so that you can discount extreme overall downsides (and extreme upsides) more than proportionally to their probability. Or you could do something like Nicolausian discounting, i.e. just ignoring sufficiently small probabilities.
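To make that concrete, here's a minimal sketch of the two moves together. Everything in it is my own illustrative assumption (the tanh transform, the probability floor, the function names), not a method from the bracketing literature:

```python
import math

def bounded_utility(value: float, scale: float = 1e6) -> float:
    """Map a raw payoff into a bounded range via tanh, so extreme
    downsides and upsides count less than proportionally to their size."""
    return math.tanh(value / scale)

def expected_bounded_value(outcomes, prob_floor: float = 1e-6) -> float:
    """Expected bounded utility over (probability, value) pairs,
    dropping any outcome whose probability falls below the floor
    (a crude stand-in for Nicolausian discounting)."""
    return sum(p * bounded_utility(v) for p, v in outcomes if p >= prob_floor)

# Example: the tiny-probability extreme downside is ignored entirely by
# the floor, and the large downside is compressed by the bounded utility.
outcomes = [(0.5, 1_000.0), (0.4, -2_000.0), (1e-9, -1e12)]
print(expected_bounded_value(outcomes))
```

The two knobs play different roles: the floor zeroes out contributions from sufficiently improbable outcomes, while the bounded transform caps how much any single extreme payoff can dominate the sum even at moderate probabilities.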