I suppose my main objection to the collective decision-making response to the no difference argument is that there doesn’t seem to be any sufficiently well-motivated and theoretically satisfying way of taking into account the collective probability of making a difference, especially as something like a welfarist consequentialist, e.g. a utilitarian. (It could still be the case that probability difference discounting is wrong, but it wouldn’t be because we should take collective probabilities into account.)
Why should I care about this collective probability, rather than the probability differences between outcomes of the choices actually available to me? As a welfarist consequentialist, I would compare probability distributions over population welfare values[1] for each of my options, and use rules that rank them pairwise, rank them within subsets of options, or select maximal options from subsets of options. Collective probabilities don’t seem to be intrinsically important here. They wouldn’t be part of the descriptions of the probability distributions of population welfare values for the options actually available to me, or calculable from them, if and because I don’t have enough actual influence over what others will do. I’d need to consider the probability distributions of unavailable options to calculate them. Taking those distributions into account in a way that would properly address collective action problems seems to require non-welfarist reasons, and that seems incompatible with act utilitarianism.
The differences in probabilities between options, on the other hand, do seem potentially important, if the probabilities matter at all. They’re calculable from the probability measures over welfare for the options available to me.
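To make this concrete, here’s a minimal sketch (the option names and numbers are purely illustrative, my own) of how pairwise probability differences fall out of the options’ own distributions, while the collective probability doesn’t:

```python
# Each of my options induces a probability distribution over population
# welfare, here a {welfare_value: probability} dict (illustrative numbers).
act     = {100.0: 0.501, 0.0: 0.499}  # welfare distribution if I act
abstain = {100.0: 0.500, 0.0: 0.500}  # welfare distribution if I abstain

def prob_difference(p, q, outcome):
    """Difference in the probability of `outcome` between two options --
    computable from the two options' own distributions and nothing else."""
    return p.get(outcome, 0.0) - q.get(outcome, 0.0)

print(prob_difference(act, abstain, 100.0))  # ~0.001

# The *collective* probability of making a difference, by contrast,
# depends on the distributions of options held by other agents, which
# appear nowhere in `act` or `abstain`.
```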
Two possibilities Kosonen discusses that seem compatible with utilitarianism are rule utilitarianism and evidential decision theory. Here are my takes on them:
For rule utilitarianism, you could hold that you should follow universalizable procedures that lead to the best outcomes (or best distributions of outcomes) if everyone (had the same beliefs and) followed them (e.g. Greaves et al., 2022, section 4.5; Kant’s first formulation of the categorical imperative; Parfit and others). Pairwise probability difference discounting without collective thresholds leads to collective defeat, so it would be ruled out.
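To illustrate the collective defeat claim, here’s a toy sketch (my own construction and numbers, not Kosonen’s): each agent’s probability difference falls below their discounting threshold, so every agent following the discounting procedure abstains, even though universal contribution would have been far better.

```python
N = 1_000_000           # number of agents
BENEFIT = 10_000_000.0  # total welfare if the good outcome obtains
COST = 1.0              # welfare cost to a contributor
EPSILON = 1 / N         # probability difference one contribution makes
THRESHOLD = 1e-4        # discount probability differences below this

def discounted_value_of_contributing():
    """One agent's expected welfare difference from contributing, after
    discounting sub-threshold probability differences to zero."""
    p_diff = EPSILON if EPSILON >= THRESHOLD else 0.0
    return p_diff * BENEFIT - COST

print(discounted_value_of_contributing())  # -1.0, so each agent abstains
print(EPSILON * BENEFIT - COST)            # 9.0 undiscounted: worth contributing
print(BENEFIT - N * COST)                  # 9,000,000.0 if everyone had contributed
```

Note that without the discounting, each agent’s contribution is already positive in expectation here; the point is just that sub-threshold discounting, universalized, produces the collectively defeated outcome.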
I’m not really convinced that collective defeat under these unrealistic universalizations is a big deal at all. I don’t get to choose what procedures everyone else follows (or what they believe), so why should I care about that?
I expect that changing my preferences or procedures to satisfy this constraint will lead to worse outcomes according to my current preferences. That seems self-defeating in a way, too.
Mendola (1986, p. 158, with responses to objections to his argument in the rest of the paper) argues that this is too strong as a formal constraint, because even consequentialists couldn’t meet it: imagine aliens or machines that harshly punish you for following your theory, ensuring your actions are net negative. That being said, I’m not entirely convinced by this. Maybe you just need to follow the right decision theory. Or we could still think avoiding collective defeat under universalization counts in favour of some procedures over others, and that we should try to (more approximately) satisfy it in more situations rather than fewer.
You’d probably need to be only boundedly sensitive to stakes, at least in many conceivable decision situations (see e.g. my post and Pruss, 2022). Depending on how exactly you’re sensitive to the stakes, this could undermine longtermism.
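One illustrative way to cash out bounded stakes-sensitivity (my example, not taken from the posts cited above) is to evaluate options with a bounded transform of total welfare:

$$V(A) = \mathbb{E}_{w \sim p_A}\big[f(w)\big], \qquad f(w) = \frac{w}{1 + |w|/M},$$

where $f$ is bounded by $M$, so arbitrarily large stakes can no longer swamp arbitrarily small probabilities. If $M$ is modest, the astronomical stakes driving longtermist arguments get heavily compressed, which is the worry in the last sentence above.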
Evidential decision theory and other acausal decision theories seem pretty plausible to me, but I’m very skeptical that they make enough of a difference unless you take into account correlations with agents across a large (e.g. infinite) multiverse. That said, I do in fact think it’s pretty likely we live in the right kind of multiverse, in which case longtermism seems pretty plausible (or at least worthy of being a big part of my portfolio) even with probability difference discounting!
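A back-of-the-envelope version of the multiverse point (my framing, assuming each correlated agent’s decisiveness is roughly independent): if my choice is evidence about the choices of $N$ sufficiently correlated agents, each with an individual probability difference $\Delta p$ of being decisive, then the probability difference conditional on my choice is roughly

$$\Delta p_{\text{EDT}} \;\approx\; 1 - (1 - \Delta p)^N \;\approx\; N \Delta p \quad \text{for small } \Delta p,$$

so with enough correlated agents (e.g. across a large or infinite multiverse), the evidential probability difference can clear a discounting threshold that my causal contribution alone would not.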
[1] Or aggregate welfare, or pairwise differences in individual welfare between outcomes.