I also think it misses the worldview bucket that’s the main reason why many people fund global health and (some aspects of) development: intrinsic value attached to saving [human] lives. Potential positive flowthrough effects are a bonus on top of that, in most cases.
From an EA-ish hedonic utilitarian perspective this dates right back to Singer’s essay about saving a drowning child. Taking that thought experiment in a different direction, I don’t think many people—EA or otherwise—would conclude that the decision on whether to save the child should primarily be a tradeoff between the child’s future capacity and the amount of aquatic suffering that could be alleviated by a corpse to feed on.
I think they’d say the imperative to save the child’s life wasn’t in danger of being swamped by the welfare impact on a very large number of aquatic animals or contingent on that child’s future impact, and I suspect as prominent an anti-speciesist as Singer would agree.
(Placing a significantly lower or zero weight on the estimated suffering experienced by a battery chicken or farmed shrimp is a sufficient but not necessary condition for favouring lifesaving over animal suffering reduction campaigns. Though personally I do place a lower weight, and actually think the more compelling ethical arguments for prioritising farm animal welfare are deontological ones about human obligations to stop causing suffering.)
Yeah, I don’t think most people’s motivating reasons correspond to anything very coherent. E.g. most will say it’s wrong to let the child before your eyes drown even if saving them prevents you from donating enough to save two other children from drowning. They’d say the imperative to save one child’s life isn’t in danger of being swamped by the welfare impact on other children, even. If anyone can make a coherent view out of that, I’ll be interested to see the results. But I’m skeptical; so here I restricted myself to views that I think are genuinely well-justified. (Others may, of course, judge matters differently!)
Coherence may not even matter that much: I presume that one of Open Philanthropy’s goals in the worldview framework is to have neat buckets for potential donors to back depending on their own feelings. I also reckon that even if they don’t personally hold incoherent beliefs, attracting the donations of those who do is probably more advantageous than rejecting them.
It’s fine to offer recommendations within suboptimal cause areas for ineffective donors. But I’m talking about worldview diversification for the purpose of allocating one’s own (or OpenPhil’s own) resources genuinely wisely, given one’s (or: OP’s) warranted uncertainty.
2min coherent view there: the likely flowthrough effects of not saving a child right in front of you (on your psychological wellbeing, community, and future social functioning) are, especially compared to the counterfactual, drastically worse than those of not donating enough to save two children on average. And the powerful intuition one could expect to feel in such a situation, saying that you should save the child, is so strong that numbing or ignoring it is likely to damage that moral intuition or compass, which could be wildly imprudent. In essence:
-psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem
-community flow-through effects in developed countries of altruistic social acts in general may be undervalued, especially if they uniquely foster one’s own well-being or moral character through exercise of a “moral muscle”
-it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to Make a Habit of not ignoring strong intuition (unless further reflection leads to the natural modification/dissipation of that intuition)
To me, naive application of utilitarianism often leads to underestimating these considerations.
There was meant to be an “all else equal” clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn’t necessarily indicate underlying non-utilitarian concerns at all.
Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, “moral muscles”, etc.) will be “reset” after making the decision. I’m talking about those who would insist that you still ought to save the one over the two even then—no matter how the purely utilitarian considerations play out.
Yeah honestly I don’t think there is a single true deontologist on Earth. To say anything is good or addresses the good, including deontology, one must define the “good” aimed at.
I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. As a response to that uncertainty, it is often rational to lean on intuition. And, thus, it is bad to undermine that intuition habitually.
“Directness” inherently means higher level of physical/emotional involvement, different (likely closer to home) social landscape and stakes, etc. So constructing an “all else being equal” scenario is impossible.
Related to initial deontologist point: when your average person expresses a “directness matters” view, it is very likely they are expressing concern for these considerations, rather than actually having a diehard deontologist view (even if they use language that suggests that).
I agree that a lot of people’s motivating reasons don’t correspond to anything particularly coherent, but that’s why I highlighted that even the philosopher who conceived the original thought experiment specifically to argue that the “being in front of you” component didn’t matter (who happens to be an outspoken anti-speciesist hedonic utilitarian) appears to have concluded that [human] lifesaving is intrinsically valuable, to the point that the approximate equivalence of the value of lives saved swamped considerations about relative suffering or capabilities.
Ultimately the point was less about the quirks of thought experiments and more that “saving lives” is for many people a different bucket with different weights from “ending suffering”, with only marginal overlap with “capacity growth”. A corollary is that they can attach a reasonably high value to the suffering of an individual chicken and still think saving a [human] life is equal to or more valuable than equivalent spend on activism that might reduce the suffering of a relatively large number of chickens—it’s a different ‘bucket’ altogether.
(FWIW I think most people find a scenario in which it’s necessary to allow the child to drown in order to raise enough money to save two children implausible, and perhaps substitute a more plausible equivalent in which the person makes a one-off donation to an effective medical charity as a form of moral licensing for letting the one child drown…)
I’m curious why you think Singer would agree that “the imperative to save the child’s life wasn’t in danger of being swamped by the welfare impact on a very large number of aquatic animals.” The original thought-experiment didn’t introduce the possibility of any such trade-off. But if you were to introduce this, Singer is clearly committed to thinking that the reason to save the child (however strong it is in isolation) could be outweighed.
Maybe I’m misunderstanding what you have in mind, but I’m not really seeing any principled basis for treating “saving lives” as in a completely separate bucket from improving quality of life. (Indeed, the whole point of QALYs as a metric is to put the two on a common scale.)
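For concreteness, the common-scale idea behind QALYs can be illustrated with a toy calculation (a minimal sketch; the function and all figures are invented for illustration, not real cost-effectiveness estimates):

```python
# Toy illustration of how QALYs put "saving lives" and "improving
# quality of life" on one scale. All figures are hypothetical.

def qalys(years_gained, quality_weight):
    """QALYs = life-years gained x quality weight (0 = dead, 1 = full health)."""
    return years_gained * quality_weight

# Intervention A: averts a child's death, adding ~50 years at 0.9 quality.
lifesaving = qalys(50, 0.9)  # 45.0 QALYs

# Intervention B: relieves a chronic condition, raising quality of life
# by 0.3 (from 0.6 to 0.9) for 30 years -- about 9 QALYs.
quality_improvement = qalys(30, 0.9 - 0.6)

# On this metric the two interventions are directly comparable:
# there is no separate "lifesaving" bucket, just life-years weighted
# by quality.
print(lifesaving, quality_improvement)
```

On this framing, the debate above reduces to whether a single scalar like this captures everything people value about averting a death, which is precisely what the “separate buckets” view denies.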
(As I argue in this paper, it’s a philosophical mistake to treat “saving lives” as having fixed and constant value, independently of how much and how good of a life extension it actually constitutes. There’s really not any sensible way to value “saving lives” over and above the welfare benefit provided to the beneficiary.)
Because as soon as you start thinking the value of saving or not saving a life is [solely] instrumental in terms of suffering/output tradeoffs, the basic premise of his argument (children’s lives are approximately equal, no matter where they are) collapses. And the rest of Singer’s actions also seem to indicate that he didn’t and doesn’t believe that saving sentient lives is in danger of being swamped by cost-effective modest suffering reduction for much larger numbers of creatures whose degree of sentience he also values.
The other reason I picked up on there being no quantification of any value to human lives is that you’ve called your bucket “pure suffering reduction”, not “improving quality of life”, so it’s explicitly not framed as a comprehensive measure of welfare benefit to the beneficiary (whose death would cease their suffering). The individual welfare upside to survival is absent from your framing, even if it wasn’t absent from your thinking.
If we look at broader measures like hedonic enjoyment or preference satisfaction, I think it’s much easier for humans to dominate. The relative similarity of how humans and animals experience pain isn’t necessarily matched by how they experience satisfaction.
So any conservative framing for the purpose of worldview diversification and interspecies tradeoffs involves separate “buckets” for positive and negative valences (which people are free to combine if they actually are happy with the assumption of hedonic utility and valence symmetry). And yes, I’d also have a separate bucket for “saving lives”, which again people are free to attach no additional weight to, and to selectively include and exclude different sets of creatures from.
This means that somebody can prioritise pain relief for 1000 chickens over pain relief for 1 elderly human, but still pick the human when it comes down to whose live(s) to save, which seems well within the bounds of reasonable belief, and similar to what a number of people who’ve thought very carefully about these issues are actually doing.
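That chicken/human pattern can be sketched as a toy decision model (the species weights and the bucket membership are invented purely for illustration, not anyone’s actual moral weights):

```python
# Toy sketch of the "separate buckets" view: suffering-reduction
# comparisons use cross-species moral weights, while lifesaving sits
# in its own bucket that (for this hypothetical donor) only humans
# enter. All weights are invented for illustration.

SUFFERING_WEIGHT = {"human": 1.0, "chicken": 0.01}  # per unit of pain relieved
LIFESAVING_SET = {"human"}  # species whose deaths this donor counts

def suffering_value(species, count):
    """Weighted value of relieving pain for `count` individuals."""
    return SUFFERING_WEIGHT[species] * count

def lifesaving_value(species, count):
    """Lives saved count only if the species is in the lifesaving bucket."""
    return count if species in LIFESAVING_SET else 0

# Pain relief: 1000 chickens (10.0) beat 1 elderly human (1.0) ...
assert suffering_value("chicken", 1000) > suffering_value("human", 1)

# ... but lifesaving: the chickens never enter this bucket, so the
# human wins regardless of the suffering weights above.
assert lifesaving_value("human", 1) > lifesaving_value("chicken", 1000)
```

The point of the sketch is only that the two verdicts are consistent: because the buckets are scored separately, no single exchange rate between chicken suffering and human lives is ever implied.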
You’re obviously perfectly entitled to argue otherwise, but there being some sort of value to saving lives other than “suffering reduction” or “the output they produce” is a commonly held view, and the whole point of “worldview diversification” is not to defer to a single philosopher’s framing. For the record, I agree that one could make a case for saving human lives being cost-effective purely on future outputs and moonshot potential given a long enough time frame (which I think was the core of your original argument), but I don’t think that’s a “conservative” framing, I think it’s quite a niche one. I’d strongly agree with an argument that flowthrough effects mean GHD isn’t only “nearterm”.