Thanks for the exploration of this.
I’m concerned that this approach is structurally very vulnerable to fanaticism / muggings. This matters for insect experience, and for the possible moral relevance of single-cell organisms (ok, before getting to this case we’d likely want to revisit your section on subsystems of the brain and consider the possibility of individual neurons having morally relevant experience that our consciousness doesn’t get proper access to). It could matter especially for how much we chase after the possibility of artificial minds with far, far greater capacity for morally relevant experience than humans.
I guess I see this as the central issue with normalizing this way, and was sort of hoping you’d say more about it. It gets discussed a little when you talk about the possibility of overlapping conscious subsystems of the brain, but I’m unclear what your stance towards it is in general, or what you would say to someone who objected to this approach because it seemed to give a fanatical weight to chickens in the human/chicken comparison (perhaps because they have somewhat different probabilities than you do on the likelihood of different levels of chicken moral relevance).
I agree that this approach, if you’re something like a (risk-neutral) expectational utilitarian, is very vulnerable to fanaticism / muggings, but to me that’s a problem for expectational utilitarianism. To you, and “to someone who objected to this approach because it seemed to give a fanatical weight to chickens in the human/chicken comparison”, I’d say: put more weight on normative stances that are less fanatical than expectational utilitarianism.
I’m personally quite skeptical of expected value maximization in general (both within moral stances and for handling moral uncertainty between them), of expected value maximization with unbounded value in particular, of aggregation in general, and of aggregation by summation specifically. I’d probably end up with “worldview buckets” based on different attitudes towards risk/uncertainty, different approaches to aggregation, and different grounds for moral value (types of welfare, non-welfarist values, as in the problem of multiple (human) reference points). RP’s CURVE sequence goes over attitudes to risk and their implications for intervention and cause prioritization. Then, I doubt these stances would be intertheoretically comparable, so for uncertainty between them, I’d use an approach to moral uncertainty that doesn’t depend on intertheoretic comparisons, like a moral parliament, a bargain-theoretic approach, variance voting, or just sizing worldview buckets proportionally to credences.
In practice, within a neartermist focus (and ignoring artificial consciousness), this could conceivably end up looking roughly like a set of resource buckets: a human-centric bucket, a bucket for mammals and birds, a bucket for all vertebrates, a bucket for all vertebrates + sufficiently sophisticated invertebrates, a bucket for all animals, and a ~panpsychist bucket.[1] However, the boundaries between these buckets would be soft (some softer than others), because the actual buckets don’t specifically track a human-centric view, a vertebrate view, etc. My approach would also inform how to size the buckets and limit risky interventions within them.
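To make “sizing worldview buckets proportionally to credences” concrete, here’s a minimal sketch. The credences and budget are entirely hypothetical, chosen only for illustration; nothing here comes from the original comment.

```python
# Hypothetical credences in the worldviews the buckets roughly track
# (illustrative numbers only, not anyone's actual credences).
credences = {
    "human-centric": 0.30,
    "mammals and birds": 0.25,
    "all vertebrates": 0.20,
    "vertebrates + sophisticated invertebrates": 0.15,
    "all animals": 0.07,
    "~panpsychist": 0.03,
}

budget = 1_000_000  # total resources, in arbitrary units
total_credence = sum(credences.values())

# Size each bucket in proportion to credence in the worldview it tracks.
allocation = {w: budget * c / total_credence for w, c in credences.items()}
for worldview, amount in allocation.items():
    print(f"{worldview}: {amount:,.0f}")
```

In practice the boundaries would be softer than this: a single intervention can draw on several buckets at once, and bucket sizes would also reflect attitudes to risk and aggregation, not just credences.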
For example, on limiting risky interventions: fix some normative stance, and suppose that within it:
you thought a typical chicken had a 1% chance of having roughly the same moral weight (per year) as a typical human (according to specific moral grounds), and didn’t matter at all otherwise.
you aggregated via summation.
you thought helping chickens much (or at all) would be too fanatical.
Then that view would also recommend against human-helping interventions with at most a 1% probability of success.[2] Or, you could include some chicken interventions alongside many more roughly statistically independent, similarly risky human-helping interventions, because many independent risky (positive expected value) bets together don’t look as risky. Still, this stance shouldn’t bet everything on an intervention helping humans with only a 1% chance of success, because otherwise it could just as well bet everything on chickens with a similar payoff distribution. This stance would limit risky bets. Every stance could limit risky bets, but the ones that end up human-centric in practice would tend to do so more than others.
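A small Monte Carlo sketch of the point that many independent positive-expected-value bets together look less risky than any one of them alone. The 1% success probability echoes the example above; the payoff size and number of bets are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000
p, payoff = 0.01, 100.0  # each bet succeeds with probability 1%; same expected value

# (a) Put the whole budget on a single 1%-chance bet.
single = payoff * rng.binomial(1, p, size=n_sims)

# (b) Split the budget across 100 roughly independent 1%-chance bets.
n_bets = 100
portfolio = (payoff / n_bets) * rng.binomial(n_bets, p, size=n_sims)

for name, outcome in [("single bet", single), ("portfolio of 100", portfolio)]:
    print(f"{name}: mean = {outcome.mean():.2f}, "
          f"P(no impact at all) = {np.mean(outcome == 0):.2f}")
# Both have the same expected value (~1), but the single bet achieves nothing
# ~99% of the time, while the portfolio achieves nothing only ~37% of the time.
```

This is the sense in which a risk-limiting stance can tolerate a few risky bets inside a larger portfolio while still refusing to bet everything on any one of them.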
[1] Or, maybe some of the later buckets are just replaced with longtermist buckets, if and because longtermist bets could have similar probabilities of making a difference, but better payoffs when they succeed.
[2] Depending on the nature of your attitudes to risk. This could follow from difference-making risk aversion or probability difference discounting of some kind. On the other hand, if you maximized the expected utility of the arctan of total welfare, a bounded function, then you’d prioritize marginal local improvements to worlds with small populations and switches between big and small populations, while ignoring marginal local improvements to worlds with large populations. This could also mean ignoring chickens but not marginal local improvements for humans, because if chickens don’t count and we go extinct soon (or future people don’t count), then the population is much smaller.
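A quick numerical illustration of why a bounded utility function like arctan of total welfare behaves this way. The welfare totals below are arbitrary numbers chosen only for illustration.

```python
import math

small_world = 10.0   # hypothetical total welfare when only a small population counts
large_world = 1e9    # hypothetical total welfare when a vast population counts
improvement = 5.0    # a fixed local welfare improvement

def utility_gain(total, delta):
    """Gain in u = arctan(total welfare) from adding delta welfare."""
    return math.atan(total + delta) - math.atan(total)

print(utility_gain(small_world, improvement))           # ~0.03: still registers
print(utility_gain(large_world, improvement))           # ~5e-18: effectively ignored
print(math.atan(large_world) - math.atan(small_world))  # ~0.10: switching between a
# big and a small population still moves utility, because arctan has not yet
# saturated near pi/2 at small totals.
```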
Is the two-envelope problem, as you understand it, a problem for anything except expectational utilitarianism?
I’m asking because it feels to me like you’re saying roughly “yes, yes, although I proposed a solution to the two-envelope problem, I agree it’s very much still a problem, so you also need an entirely different type of solution to address it”. I think this is a bit of a caricature of what you’re saying, and I suspect that it’s an unfair one, but I can’t immediately see how it’s unfair, so I’m asking this way to try to get quickly to the heart of what’s going on.
I think it is or would have been a problem for basically any normative stance (moral theory + attitudes towards risk, etc.) that is at all sensitive to risk/uncertainty and stakes roughly according to expected value.[1]
I think I’ve given a general solution here to the two envelopes problem for moral weights (between moral patients) when you fix your normative stance but have remaining empirical/descriptive uncertainty about the moral weights of beings conditional on that stance. It can be adapted to different normative stances, but I illustrated it with versions of expectational utilitarianism. (EDIT: And I’m arguing that a lot of the relevant uncertainty actually is just empirical, not normative, more than some have assumed.)
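For concreteness, here’s a toy numerical version of the kind of inconsistency at stake, and of one way to read the fixed-stance move: express both parties’ moral weights in a single unit fixed by the stance and only then take expectations, rather than taking expectations of ratios in each party’s own units. The numbers and the two-point distribution are mine, not the post’s worked example.

```python
import numpy as np

# Toy empirical uncertainty, conditional on one fixed normative stance:
# a chicken's moral weight is either 0.01x or 2x a human's, with equal probability.
probs = np.array([0.5, 0.5])
chicken_over_human = np.array([0.01, 2.0])

# Taking expectations of ratios in both directions gives the classic
# two-envelopes-style inconsistency: each side looks better in its own units.
print(probs @ chicken_over_human)        # E[chicken/human] ~ 1.005 > 1
print(probs @ (1 / chicken_over_human))  # E[human/chicken] ~ 50.25, also > 1

# Instead, fix one common unit given by the stance (say, a human's welfare
# range = 1 in every scenario), express both weights in it, then compare.
human_weight = np.array([1.0, 1.0])
chicken_weight = chicken_over_human * human_weight
print(probs @ human_weight, probs @ chicken_weight)  # 1.0 vs ~1.005, same unit
# Comparisons in a single fixed unit can't flip depending on whose units you use.
```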
For two envelopes problems between normative stances, I’m usually skeptical of intertheoretic comparisons, so would mostly recommend approaches that don’t depend on them.
[1] For example, I think there’s no two envelopes problem for someone who maximizes the median value, because the reciprocal of the median is the median of the reciprocal. But I’d take it to be a problem for anyone who roughly maximizes an expected value or counts higher expected value in favour of an act, e.g. does so with constraints, or after discounting small probabilities. They don’t have to be utilitarian or aggregate welfare at all, either. (Footnote added in an edit of this comment.)
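A quick numerical check of the footnote’s claim, using toy ratios of the same kind as above (again my own illustrative numbers): medians commute with taking reciprocals for positive values, so a median-maximizer gets the same verdict in either unit, whereas expectations of the ratio can exceed 1 in both directions.

```python
import numpy as np

# Toy samples of the chicken:human moral-weight ratio (odd count, so the
# median is an actual sample point).
ratios = np.array([0.01, 0.01, 2.0, 2.0, 2.0])

print(np.median(ratios), 1 / np.median(1 / ratios))  # 2.0 and 2.0: consistent
print(ratios.mean(), (1 / ratios).mean())            # ~1.20 and ~40.3: both > 1
```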
OK thanks. I’m going to attempt a summary of where I think things are:
In trying to assess moral weights, you can get two-envelope problems for both empirical uncertainty and normative uncertainty
Re. empirical uncertainty, you argue that there isn’t a two-envelope problem, and you can just treat it like any other empirical uncertainty
In my other comment thread I argue that just like the classic money-based two-envelope problem, there’s still a problem to be addressed, and it probably needs to involve priors
Re. normative uncertainty, you would tend to advise approaches which avoid facing two-envelope problems in the first place, alongside dodging a bunch of other issues
I’m sympathetic to this, although I don’t think it’s uncontroversial
You argue that a lot of the uncertainty should be understood to be empirical rather than normative — but you also think quite a bit of it is normative (insofar as you recommend people allocating resources into buckets associated with different worldviews)
I kind of get where you’re coming from here, although I feel that the lines between what’s empirical and what’s normative uncertainty are often confusing, and so I kind of want action-guiding advice to be available for actors who haven’t yet worked out how to disentangle them. (I’m also not certain that the “different buckets for different worldviews” is the best approach to normative uncertainty, although as a pragmatic matter I certainly don’t hate it, and it has some theoretical appeal.)
Does that seem wrong anywhere to you?
This all seems right to me.
(I wouldn’t pick out the worldview bucket approach as the solution everyone should necessarily find most satisfying, given their own intuitions/preferences, but it is one I tend to prefer now.)
Ok great. In that case one view I have is that it would be clearer to summarize your position (e.g. in the post title) as “there isn’t a two envelope problem for moral weights”, rather than as presenting a solution.