I agree that a lot of people’s motivating reasons don’t correspond to anything particularly coherent, but that’s why I highlighted that even the philosopher who conceived the original thought experiment specifically to argue that the “being in front of you” component didn’t matter (who happens to be an outspoken anti-speciesist hedonic utilitarian) appears to have concluded that [human] lifesaving is intrinsically valuable, to the point that the approximate equivalence of the value of lives saved swamped considerations about relative suffering or capabilities.
Ultimately the point was less about the quirks of thought experiments and more that “saving lives” is for many people a different bucket with different weights from “ending suffering”, with only marginal overlap with “capacity growth”. A corollary of that is that they can attach a reasonably high value to the suffering of an individual chicken and still think saving a life [of a human] is equal to or more valuable than equivalent spend on activism that might reduce the suffering of a relatively large number of chickens—it’s a different ‘bucket’ altogether.
(FWIW I think most people find a scenario in which it’s necessary to allow the child to drown to raise enough money to save two children implausible; and would perhaps substitute a more plausible equivalent where the person makes a one-off donation to an effective medical charity as a form of moral licensing for letting the one child drown… )
I’m curious why you think Singer would agree that “the imperative to save the child’s life wasn’t in danger of being swamped by the welfare impact on a very large number of aquatic animals.” The original thought-experiment didn’t introduce the possibility of any such trade-off. But if you were to introduce this, Singer is clearly committed to thinking that the reason to save the child (however strong it is in isolation) could be outweighed.
Maybe I’m misunderstanding what you have in mind, but I’m not really seeing any principled basis for treating “saving lives” as in a completely separate bucket from improving quality of life. (Indeed, the whole point of QALYs as a metric is to put the two on a common scale.)
(As I argue in this paper, it’s a philosophical mistake to treat “saving lives” as having fixed and constant value, independently of how much and how good of a life extension it actually constitutes. There’s really not any sensible way to value “saving lives” over and above the welfare benefit provided to the beneficiary.)
Because as soon as you start thinking the value of saving or not saving a life is [solely] instrumental in terms of suffering/output tradeoffs, the basic premise of his argument (children’s lives are approximately equal, no matter where they are) collapses. And the rest of Singer’s actions also seem to indicate that he didn’t and doesn’t believe that saving sentient lives is in danger of being swamped by cost-effective modest suffering reduction for much larger numbers of creatures whose degree of sentience he also values.
The other reason I’ve picked up on there being no quantification of any value to human lives is that you’ve called your bucket “pure suffering reduction”, not “improving quality of life”, so it’s explicitly not framed as a comprehensive measure of welfare benefit to the beneficiary (whose death ends their suffering). The individual welfare upside to survival is absent from your framing, even if it wasn’t from your thinking.
If we look at broader measures like hedonic enjoyment or preference satisfaction, I think it’s much easier for humans to dominate. The relative similarity of how humans and animals experience pain isn’t necessarily matched by how they experience satisfaction.
So any conservative framing for the purpose of worldview diversification and interspecies tradeoffs involves separate “buckets” for positive and negative valences (which people are free to combine if they actually are happy with the assumption of hedonic utility and valence symmetry). And yes, I’d also have a separate bucket for “saving lives”, which again people are free to attach no additional weight to, and to selectively include and exclude different sets of creatures from.
This means that somebody can prioritise pain relief for 1000 chickens over pain relief for 1 elderly human, but still pick the human when it comes down to whose life (or lives) to save, which seems well within the bounds of reasonable belief, and similar to what a number of people who’ve thought very carefully about these issues are actually doing.
You’re obviously perfectly entitled to argue otherwise, but there being some sort of value to saving lives other than “suffering reduction” or “the output they produce” is a commonly held view, and the whole point of “worldview diversification” is not to defer to a single philosopher’s framing. For the record, I agree that one could make a case for saving human lives being cost-effective purely on future outputs and moonshot potential given a long enough time frame (which I think was the core of your original argument), but I don’t think that’s a “conservative” framing, I think it’s quite a niche one. I’d strongly agree with an argument that flowthrough effects mean GHD isn’t only “nearterm”.