Yeah, I don't think most people's motivating reasons correspond to anything very coherent. E.g. most will say it's wrong to let the child before your eyes drown even if saving them prevents you from donating enough to save two other children from drowning. They'd say the imperative to save one child's life isn't in danger of being swamped by the welfare impact on other children, even. If anyone can make a coherent view out of that, I'll be interested to see the results. But I'm skeptical; so here I restricted myself to views that I think are genuinely well-justified. (Others may, of course, judge matters differently!)
Coherence may not even matter that much. I presume that one of Open Philanthropy's goals in the worldview framework is to have neat buckets for potential donors to back depending on their own feelings. I also reckon that even if they don't personally have incoherent beliefs, attracting the donations of those who do is probably more advantageous than rejecting them.
It's fine to offer recommendations within suboptimal cause areas for ineffective donors. But I'm talking about worldview diversification for the purpose of allocating one's own (or OpenPhil's own) resources genuinely wisely, given one's (or: OP's) warranted uncertainty.
2min coherent view there: the likely flow-through effects of not saving a child right in front of you (on your psychological wellbeing, community, and future social functioning) are, especially compared to the counterfactual, drastically worse than those of not donating enough to save two children on average. And the powerful intuition one could expect to feel in such a situation, saying that you should save the child, is so strong that numbing or ignoring it is likely to damage the strength of that moral intuition or compass, which could be wildly imprudent. In essence:
- psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem
- community flow-through effects of altruistic social acts in general, in developed countries, may be undervalued, especially if they uniquely foster one's own well-being or moral character through exercise of a "moral muscle"
- it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to make a habit of not ignoring strong intuition (unless further reflection leads to the natural modification/dissipation of that intuition)
To me, naive application of utilitarianism often leads to underestimating these considerations.
There was meant to be an "all else equal" clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn't necessarily indicate underlying non-utilitarian concerns at all.
Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, "moral muscles", etc.) will be "reset" after making the decision. I'm talking about those who would insist that you still ought to save the one over the two even then, no matter how the purely utilitarian considerations play out.
Yeah, honestly, I don't think there is a single true deontologist on Earth. To say anything is good or addresses the good, including deontology, one must define the "good" aimed at.
I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. As a response to that uncertainty, it is often rational to lean on intuition. And, thus, it is bad to undermine that intuition habitually.
"Directness" inherently means a higher level of physical/emotional involvement, a different (likely closer-to-home) social landscape and stakes, etc. So constructing an "all else being equal" scenario is impossible.
Related to the initial deontologist point: when your average person expresses a "directness matters" view, it is very likely they are expressing concern for these considerations, rather than actually holding a diehard deontologist view (even if they use language that suggests that).
I agree that a lot of people's motivating reasons don't correspond to anything particularly coherent, but that's why I highlighted that even the philosopher who conceived the original thought experiment specifically to argue that the "being in front of you" component didn't matter (who happens to be an outspoken anti-speciesist hedonic utilitarian) appears to have concluded that [human] lifesaving is intrinsically valuable, to the point that the approximate equivalence of the value of lives saved swamped considerations about relative suffering or capabilities.
Ultimately the point was less about the quirks of thought experiments and more that "saving lives" is, for many people, a different bucket with different weights from "ending suffering", with only marginal overlap with "capacity growth". And a corollary of that is that they can attach a reasonably high value to the suffering of an individual chicken and still think saving a [human] life is equal to or more valuable than an equivalent spend on activism that might reduce the suffering of a relatively large number of chickens; it's a different "bucket" altogether.
(FWIW I think most people find a scenario in which it's necessary to allow the child to drown in order to raise enough money to save two children implausible, and perhaps substitute a more plausible equivalent where the person makes a one-off donation to an effective medical charity as a form of moral licensing for letting the one child drown…)
I'm curious why you think Singer would agree that "the imperative to save the child's life wasn't in danger of being swamped by the welfare impact on a very large number of aquatic animals." The original thought experiment didn't introduce the possibility of any such trade-off. But if you were to introduce this, Singer is clearly committed to thinking that the reason to save the child (however strong it is in isolation) could be outweighed.
Maybe I'm misunderstanding what you have in mind, but I'm not really seeing any principled basis for treating "saving lives" as a completely separate bucket from improving quality of life. (Indeed, the whole point of QALYs as a metric is to put the two on a common scale.)
(As I argue in this paper, it's a philosophical mistake to treat "saving lives" as having a fixed and constant value, independent of how much and how good a life extension it actually constitutes. There's really no sensible way to value "saving lives" over and above the welfare benefit provided to the beneficiary.)
Because as soon as you start thinking the value of saving or not saving a life is [solely] instrumental in terms of suffering/output tradeoffs, the basic premise of his argument (children's lives are approximately equal, no matter where they are) collapses. And the rest of Singer's actions also seem to indicate that he didn't and doesn't believe that saving sentient lives is in danger of being swamped by cost-effective modest suffering reduction for much larger numbers of creatures whose degree of sentience he also values.
The other reason why I've picked up on there being no quantification of any value to human lives is that you've called your bucket "pure suffering reduction", not "improving quality of life", so it's explicitly not framed as a comprehensive measure of welfare benefit to the beneficiary (whose death ceases their suffering). The individual welfare upside to survival is absent from your framing, even if it wasn't absent from your thinking.
If we look at broader measures like hedonic enjoyment or preference satisfaction, I think it's much easier for humans to dominate. The relative similarity of how humans and animals experience pain isn't necessarily matched by how they experience satisfaction.
So any conservative framing for the purpose of worldview diversification and interspecies tradeoffs involves separate "buckets" for positive and negative valences (which people are free to combine if they actually are happy with the assumption of hedonic utility and valence symmetry). And yes, I'd also have a separate bucket for "saving lives", which again people are free to attach no additional weight to, and to selectively include and exclude different sets of creatures from.
This means that somebody can prioritise pain relief for 1000 chickens over pain relief for one elderly human, but still pick the human when it comes down to whose life (or lives) to save, which seems well within the bounds of reasonable belief, and similar to what a number of people who've thought very carefully about these issues are actually doing.
You're obviously perfectly entitled to argue otherwise, but that there is some sort of value to saving lives other than "suffering reduction" or "the output they produce" is a commonly held view, and the whole point of "worldview diversification" is not to defer to a single philosopher's framing. For the record, I agree that one could make a case for saving human lives being cost-effective purely on future outputs and moonshot potential given a long enough time frame (which I think was the core of your original argument), but I don't think that's a "conservative" framing; I think it's quite a niche one. I'd strongly agree with an argument that flow-through effects mean GHD isn't only "nearterm".