Is the two-envelope problem, as you understand it, a problem for anything except expectational utilitarianism?
I think it is or would have been a problem for basically any normative stance (moral theory + attitudes towards risk, etc.) that is at all sensitive to risk/uncertainty and stakes, roughly according to expected value.
I think I’ve given a general solution here to the two-envelope problem for moral weights (between moral patients) when you fix your normative stance but have remaining empirical/descriptive uncertainty about the moral weights of beings conditional on that stance. It can be adapted to different normative stances, but I illustrated it with versions of expectational utilitarianism. (EDIT: And I’m arguing that more of the relevant uncertainty actually is just empirical, not normative, than some have assumed.)
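To illustrate how that kind of solution works, as I understand it (this sketch and its toy numbers are my own, not from the post): conditional on a normative stance, welfare sits on one common cardinal scale, so you compare expected values on that scale directly, rather than taking expectations of unit-relative ratios, which is where the envelope-style flipping comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy model (my assumptions): conditional on the stance, welfare is on one
# common cardinal scale, and the empirical uncertainty is over how much
# welfare each intervention produces on that scale.
human_gain = np.full(n, 1.0)  # helping humans: 1 unit, known for simplicity
chicken_gain = rng.lognormal(mean=0.0, sigma=1.5, size=n)  # uncertain

# Envelope-style reasoning takes expectations of ratios, and the verdict
# flips depending on which being's units you normalize to:
ratio = chicken_gain / human_gain
print(ratio.mean())        # E[chicken/human] ~ 3.1 > 1: "favors chickens"
print((1 / ratio).mean())  # E[human/chicken] ~ 3.1 > 1: "favors humans"

# Comparing expected values on the common scale gives one stable verdict:
print(human_gain.mean(), chicken_gain.mean())  # 1.0 vs ~3.1
```

The point of the sketch is just that once the scale is fixed by the stance, the remaining uncertainty behaves like ordinary empirical uncertainty.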
For two-envelope problems between normative stances, I’m usually skeptical of intertheoretic comparisons, so I would mostly recommend approaches that don’t depend on them.
For example, I think there’s no two-envelope problem for someone who maximizes the median value, because the reciprocal of the median is the median of the reciprocal, so the verdict doesn’t depend on which being’s units you work in.
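A quick numerical check of that claim (my sketch; the distribution is an arbitrary assumption): for a positive random variable, the reciprocal is monotone, so it maps the 50th percentile to the 50th percentile, and a median-maximizer reaches the same verdict in either being’s units.

```python
import numpy as np

rng = np.random.default_rng(0)

# R: the moral weight of one being in another's units (arbitrary toy prior).
R = rng.lognormal(mean=0.5, sigma=1.5, size=1_000_000)

# The reciprocal of the median is the median of the reciprocal:
print(np.median(R))      # ~ e^0.5  ~ 1.65
print(np.median(1 / R))  # ~ e^-0.5 ~ 0.61
print(1 / np.median(R))  # matches np.median(1 / R)
```

Expectations have no such invariance, which is what goes wrong in the next case.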
But I’d take it to be a problem for anyone who roughly maximizes an expected value, or who counts higher expected value in favour of an act, e.g. who does so with constraints or after discounting small probabilities. They don’t have to be utilitarian, or to aggregate welfare at all, either.
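To make the contrast concrete (again with my own toy numbers): with a 50/50 credence split between “A counts 2x B” and “A counts 0.5x B”, expectations taken in each being’s units both favour the other side, which is exactly the envelope structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equally likely empirical hypotheses about the relative weight R
# of being A in being B's units (toy numbers, my assumption).
R = rng.choice([2.0, 0.5], size=1_000_000)

print(R.mean())        # E[R]   = 1.25 > 1: in B's units, A looks better
print((1 / R).mean())  # E[1/R] = 1.25 > 1: in A's units, B looks better

# Jensen's inequality gives E[1/R] >= 1/E[R], so both expectations can
# exceed 1 at once; the expectational verdict tracks the choice of units.
```

Anything that ranks options by (roughly) these expectations inherits the flip, utilitarian or not.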
OK, thanks. I’m going to attempt a summary of where I think things are:
In trying to assess moral weights, you can get two-envelope problems for both empirical uncertainty and normative uncertainty
Re. empirical uncertainty, you argue that there isn’t a two-envelope problem, and you can just treat it like any other empirical uncertainty
In my other comment thread I argue that, just as with the classic money-based two-envelope problem, there’s still a problem to be addressed, and it probably needs to involve priors (see the sketch just after this summary)
Re. normative uncertainty, you would tend to advise approaches which avoid facing two-envelope problems in the first place, alongside dodging a bunch of other issues
I’m sympathetic to this, although I don’t think it’s uncontroversial
You argue that a lot of the uncertainty should be understood as empirical rather than normative, but you also think quite a bit of it is normative (insofar as you recommend that people allocate resources into buckets associated with different worldviews)
I kind of get where you’re coming from here, although I feel that the line between what’s empirical and what’s normative uncertainty is often blurry, and so I want action-guiding advice to be available for actors who haven’t yet worked out how to disentangle them. (I’m also not certain that “different buckets for different worldviews” is the best approach to normative uncertainty, although as a pragmatic matter I certainly don’t hate it, and it has some theoretical appeal.)
Does that seem wrong anywhere to you?
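Here is the sketch referenced in the summary above (mine, with an arbitrary assumed prior): in the money version, once you put a proper prior on the amounts, the conditional expected gain from switching is no longer 1.25x for every observed amount, and unconditionally the two envelopes are worth the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Proper prior (my assumption): the smaller amount s is uniform on 1..100,
# and the two envelopes contain s and 2s.
s = rng.integers(1, 101, size=n)
hold_smaller = rng.random(n) < 0.5
held = np.where(hold_smaller, s, 2 * s)    # the amount you observe
other = np.where(hold_smaller, 2 * s, s)   # what switching would get you

# The naive argument claims E[other | held = x] = 1.25 * x for every x.
for x in (50, 150):
    print(x, other[held == x].mean())
# x = 50: ~62.5, so switching looks good here.
# x = 150 must be the larger envelope: switching yields 75, not 187.5.

# Unconditionally the envelopes are symmetric: no free gain from switching.
print(held.mean(), other.mean())  # both ~ 75.75
```

The moral-weights analogue, as I read this part of the thread, is that a prior over the weights would play the same role in blocking the “always switch” inference.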
This all seems right to me.
(I wouldn’t pick out the worldview bucket approach as the solution everyone should necessarily find most satisfying, given their own intuitions/preferences, but it is one I tend to prefer now.)
OK, great. In that case, one view I have is that it would be clearer to summarize your position (e.g. in the post title) as “there isn’t a two-envelope problem for moral weights”, rather than as presenting a solution.