Is the two-envelope problem, as you understand it, a problem for anything except expectational utilitarianism?
I think it is or would have been a problem for basically any normative stance (moral theory + attitudes towards risk, etc.) that is at all sensitive to risk/uncertainty and stakes roughly according to expected value.[1]

I think I've given a general solution here to the two envelopes problem for moral weights (between moral patients) when you fix your normative stance but have remaining empirical/descriptive uncertainty about the moral weights of beings conditional on that stance. It can be adapted to different normative stances, but I illustrated it with versions of expectational utilitarianism. (EDIT: And I'm arguing that a lot of the relevant uncertainty actually is just empirical, not normative, more than some have assumed.)
For two envelopes problems between normative stances, I'm usually skeptical of intertheoretic comparisons, so would mostly recommend approaches that don't depend on them.
(Footnote added in an edit of this comment.)
For example, I think there's no two envelopes problem for someone who maximizes the median value, because the reciprocal of the median is the median of the reciprocal.

But I'd take it to be a problem for anyone who roughly maximizes an expected value or counts higher expected value in favour of an act, e.g. does so with constraints, or after discounting small probabilities. They don't have to be utilitarian or aggregate welfare at all, either.
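To make this contrast concrete, here is a toy numerical sketch (the numbers and the elephant/human framing are my own illustration, not from the comment). With a log-symmetric credence distribution over a moral-weight ratio, the expected value exceeds 1 in both units at once, while the median gives the same answer either way:

```python
# Toy sketch with made-up numbers: suppose we're equally unsure whether an
# elephant's moral weight is 0.1x, 1x, or 10x a human's (a hypothetical,
# log-symmetric credence distribution over the ratio).
import statistics

weights_in_human_units = [0.1, 1.0, 10.0]  # elephant welfare per human welfare
weights_in_elephant_units = [1 / w for w in weights_in_human_units]

# The expectation exceeds 1 in *both* units: in human units the elephant
# "matters more in expectation", and in elephant units so does the human.
# This unit-sensitivity is the two-envelopes inconsistency.
assert statistics.mean(weights_in_human_units) > 1
assert statistics.mean(weights_in_elephant_units) > 1

# The median commutes with the strictly decreasing map w -> 1/w on positive
# values (exactly so for an odd-length sample), so a median-maximizer ranks
# options the same way under either choice of unit.
assert statistics.median(weights_in_elephant_units) == 1 / statistics.median(weights_in_human_units)
```

The median escapes because taking reciprocals of positive values just reverses their order, so the middle value maps to the middle value; the mean has no such invariance.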
OK thanks. I'm going to attempt a summary of where I think things are:
In trying to assess moral weights, you can get two-envelope problems for both empirical uncertainty and normative uncertainty
Re. empirical uncertainty, you argue that there isn't a two-envelope problem, and you can just treat it like any other empirical uncertainty

In my other comment thread I argue that, just like the classic money-based two-envelope problem, there's still a problem to be addressed, and it probably needs to involve priors
Re. normative uncertainty, you would tend to advise approaches that dodge two-envelope problems in the first place, along with a bunch of other issues
I'm sympathetic to this, although I don't think it's uncontroversial
You argue that a lot of the uncertainty should be understood as empirical rather than normative, but you also think quite a bit of it is normative (insofar as you recommend that people allocate resources into buckets associated with different worldviews)
I kind of get where you're coming from here, although I feel that the lines between what's empirical and what's normative uncertainty are often blurry, and so I kind of want action-guiding advice to be available for actors who haven't yet worked out how to disentangle them. (I'm also not certain that "different buckets for different worldviews" is the best approach to normative uncertainty, although as a pragmatic matter I certainly don't hate it, and it has some theoretical appeal.)
Does that seem wrong anywhere to you?
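For what it's worth, the "probably needs to involve priors" point about the money version can be sketched numerically (the prior below is a toy choice of mine, not from the thread): once a proper prior over the envelope contents is fixed, the conditional expected gain from switching is no longer positive for every observed amount, which dissolves the "always switch" reasoning.

```python
# Toy sketch: a hypothetical prior where the smaller envelope holds 1 or 2
# (each with prior probability 1/2) and the other envelope holds double.
from fractions import Fraction

half = Fraction(1, 2)
# (smaller, larger) amount pairs and their prior probabilities
pairs = {(1, 2): half, (2, 4): half}

def expected_other_given_observed(x):
    """Posterior expected value of the unopened envelope, given we see x."""
    # Each pair (s, l) is equally likely to be opened at s or at l.
    weights, values = [], []
    for (s, l), p in pairs.items():
        if x == s:
            weights.append(p * half); values.append(l)
        if x == l:
            weights.append(p * half); values.append(s)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

print(expected_other_given_observed(1))  # 2: switch
print(expected_other_given_observed(2))  # 5/2: switch
print(expected_other_given_observed(4))  # 2: keep -- switching is not always favoured
```

With the prior in hand, "the other envelope is worth 1.25x whatever I see" fails at the top of the support (seeing 4 guarantees the other holds 2), so the paradoxical symmetric argument for switching breaks.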
This all seems right to me.
(I wouldn't pick out the worldview bucket approach as the solution everyone should necessarily find most satisfying, given their own intuitions/preferences, but it is one I tend to prefer now.)
OK great. In that case, one view I have is that it would be clearer to summarize your position (e.g. in the post title) as "there isn't a two-envelope problem for moral weights", rather than as presenting a solution.