I must confess my understanding of this is only partial. I wonder if you could explain the argument for the case where there are different moral theories in each envelope? You might well have done this in the article, but I missed it or struggled to understand it. For example, here is an interesting scenario shared on the other envelopes post:
“To re-emphasise the above, down-prioritising Animal Welfare on these grounds does not require me to have overwhelming confidence that hedonism is false. For example a toy comparison could look like:
In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1 respectively.
Despite a 50%-likely ‘hedonism is true’ scenario where Animal Welfare dominates by 500x, Global Health wins on EV here.”
NB (side note, not the biggest deal): I would personally appreciate it if this kind of post could somehow be written in a way that is slightly easier to understand for those of us who are not moral philosophers, using less jargon and more straightforward sentences. Maybe this isn't possible, though, and I appreciate it might not be worth the effort simplifying things for the plebs at times ;).
Noted; I will keep this in mind going forward.
When there are different moral theories at play, it gets challenging. I agree with Tomasik that there may sometimes be no way to make a comparison or extract anything like an expected utility.
What matters in this case, I think, is whether the units are fixed across scenarios. Suppose that we think one unit of value corresponds to a specific amount of human pain, and that our non-hedonist theory cares about pain just as much as our hedonistic theory does, but also cares about other things in addition. Suppose it assigns value to personal flourishing, such that it sees roughly 1000x as much value from the personal flourishing produced by the global health intervention as from its pain mitigation, and it holds that non-human animals are completely incapable of flourishing. Then we might represent the possibilities as follows:
                          Animal    Global Health
Hedonism                     500                1
Hedonism + Flourishing       500             1000
If we are 50/50 between the two views, then we should slightly favor the global health intervention, given its expected value of 500.5 (versus 500 for the animal intervention). This presentation requires that the hedonism + flourishing view count suffering just as much as the hedonist view does. So unlike in the quote, it doesn't down-weight the pain suffered by animals in the non-hedonist case; the units can be assumed to be held fixed across contexts.
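To make the contrast concrete, here is a minimal sketch in Python (mine, not from either post) of the two expected-value calculations: the quoted toy comparison, where the non-hedonist scenario down-weights the animal intervention to 1 unit, and the fixed-unit version above, where the animal column stays at 500. The payoffs and the 50/50 credence are just the illustrative numbers from the toy examples.

```python
# A minimal sketch (mine, not from either post) of the two expected-value
# calculations, assuming a 50/50 credence split between the two theories.
# The payoff numbers are the illustrative ones from the toy examples.

def expected_values(payoffs, credence=0.5):
    """Average each intervention's payoff across the two theories."""
    first, second = payoffs.values()
    return {
        option: credence * first[option] + (1 - credence) * second[option]
        for option in first
    }

# Quoted toy comparison: the non-hedonist scenario down-weights the
# animal intervention to 1 unit.
quoted = {
    "hedonism":     {"animal": 500, "global_health": 1},
    "non_hedonism": {"animal": 1,   "global_health": 1000},
}

# Fixed-unit version: animal suffering counts the same under both views,
# so the animal column stays at 500.
fixed_units = {
    "hedonism":               {"animal": 500, "global_health": 1},
    "hedonism_plus_flourish": {"animal": 500, "global_health": 1000},
}

print(expected_values(quoted))       # {'animal': 250.5, 'global_health': 500.5}
print(expected_values(fixed_units))  # {'animal': 500.0, 'global_health': 500.5}
```

In the quoted framing, Global Health wins decisively (500.5 vs. 250.5); with units held fixed, it wins only barely (500.5 vs. 500).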
If we didn’t want to make that assumption, we could try to find a third unit that was held fixed that we could use as a common currency. Maybe we could bring in other views to act as an intermediary. Absent such a common currency, I think extracting an expected value gets very difficult and I’m not sure what to say.
Requiring a fixed unit for comparisons isn’t so much of a drawback as it might seem. I think that most of the views people actually hold care about human suffering for approximately the same reasons, and that is enough license to treat it as having approximately the same value. To make the kind of case sketched above concrete, you’d have to come to grips with how much more valuable you think flourishing is than freedom from suffering. One of the assumptions that motivated the reductive presuppositions of the Moral Weight Project was that suffering is one of the principal components of value for most people, so that it is unlikely to be vastly outweighed by the other things people care about.