I’ll suggest a reconceptualization that may seem radical in theory but is conservative in practice.
It doesn’t seem conservative in practice? Like Vasco, I’d be surprised if aiming for reliable global capacity growth would look like the current GHD portfolio. For example:
1. Given an inability to help everyone, you’d want to target interventions based on people’s future ability to contribute. (E.g. you should probably stop any interventions that target people in extreme poverty.)
2. You’d either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
3. You’d want to invest more in education than would be suggested by typical metrics like QALYs or income doublings.
I’d guess most proponents of GHD would find (1) and (2) particularly bad.
I also think it misses the worldview bucket that’s the main reason why many people fund global health and (some aspects of) development: intrinsic value attached to saving [human] lives. Potential positive flow-through effects are a bonus on top of that, in most cases.
From an EA-ish hedonic utilitarian perspective this dates right back to Singer’s essay about saving a drowning child. Taking that thought experiment in a different direction, I don’t think many people—EA or otherwise—would conclude that the decision on whether to save the child should primarily be a tradeoff between the future capacity of the child and the amount of aquatic suffering that a corpse to feed on could alleviate.
I think they’d say the imperative to save the child’s life wasn’t in danger of being swamped by the welfare impact on a very large number of aquatic animals or contingent on that child’s future impact, and I suspect as prominent an anti-speciesist as Singer would agree.
(Placing a significantly lower or zero weight on the estimated suffering experienced by a battery chicken or farmed shrimp is a sufficient but not necessary condition to favour lifesaving over animal suffering reduction campaigns. Though personally I do place such a weight, and actually think the more compelling ethical arguments for prioritising farm animal welfare are deontological ones about human obligations to stop causing suffering.)
Yeah, I don’t think most people’s motivating reasons correspond to anything very coherent. E.g. most will say it’s wrong to let the child before your eyes drown even if saving them prevents you from donating enough to save two other children from drowning. They’d say the imperative to save one child’s life isn’t in danger of being swamped by the welfare impact on other children, even. If anyone can make a coherent view out of that, I’ll be interested to see the results. But I’m skeptical; so here I restricted myself to views that I think are genuinely well-justified. (Others may, of course, judge matters differently!)
Coherence may not even matter that much: I presume that one of Open Philanthropy’s goals in the worldview framework is to have neat buckets for potential donors to back depending on their own feelings. I also reckon that even if they don’t personally have incoherent beliefs, attracting the donations of those that do is probably more advantageous than rejecting them.
It’s fine to offer recommendations within suboptimal cause areas for ineffective donors. But I’m talking about worldview diversification for the purpose of allocating one’s own (or OpenPhil’s own) resources genuinely wisely, given one’s (or: OP’s) warranted uncertainty.
A two-minute coherent view there: the likely flow-through effects of not saving a child right in front of you (on your psychological wellbeing, community, and future social functioning), especially compared to the counterfactual, are drastically worse than those of not donating enough to save two children on average. And the powerful intuition one could expect to feel in such a situation, saying that you should save the child, is so strong that to numb or ignore it is likely to damage the strength of that moral intuition or compass, which could be wildly imprudent. In essence:
- psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem
- community flow-through effects in developed countries regarding altruistic social acts in general may be undervalued, especially if such acts uniquely foster one’s own well-being or moral character through exercise of a “moral muscle”
- it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to make a habit of not ignoring strong intuition (unless further reflection leads to the natural modification or dissipation of that intuition)
To me, naive application of utilitarianism often leads to underestimating these considerations.
There was meant to be an “all else equal” clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn’t necessarily indicate underlying non-utilitarian concerns at all.
Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, “moral muscles”, etc.) will be “reset” after making the decision. I’m talking about those who would insist that you still ought to save the one over the two even then—no matter how the purely utilitarian considerations play out.
Yeah honestly I don’t think there is a single true deontologist on Earth. To say anything is good or addresses the good, including deontology, one must define the “good” aimed at.
I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. As a response to that uncertainty, it is often rational to lean on intuition. And, thus, it is bad to undermine that intuition habitually.
“Directness” inherently means higher level of physical/emotional involvement, different (likely closer to home) social landscape and stakes, etc. So constructing an “all else being equal” scenario is impossible.
Related to initial deontologist point: when your average person expresses a “directness matters” view, it is very likely they are expressing concern for these considerations, rather than actually having a diehard deontologist view (even if they use language that suggests that).
I agree that a lot of people’s motivating reasons don’t correspond to anything particularly coherent, but that’s why I highlighted that even the philosopher who conceived the original thought experiment specifically to argue that the “being in front of you” component didn’t matter (and who happens to be an outspoken anti-speciesist hedonic utilitarian) appears to have concluded that [human] lifesaving is intrinsically valuable, to the point that the approximate equivalence of the value of lives saved swamped considerations about relative suffering or capabilities.
Ultimately the point was less about the quirks of thought experiments and more that “saving lives” is for many people a different bucket with different weights from “ending suffering”, with only marginal overlap with “capacity growth”. And a corollary of that is that they can attach a reasonably high value to the suffering of an individual chicken and still think saving a [human] life is equal to or more valuable than equivalent spend on activism that might reduce the suffering of a relatively large number of chickens—it’s a different ‘bucket’ altogether.
(FWIW I think most people find a scenario in which it’s necessary to allow the child to drown to raise enough money to save two children implausible; and perhaps substitute a more plausible equivalent where the person makes a one-off donation to an effective medical charity as a form of moral licensing for letting the one child drown…)
I’m curious why you think Singer would agree that “the imperative to save the child’s life wasn’t in danger of being swamped by the welfare impact on a very large number of aquatic animals.” The original thought-experiment didn’t introduce the possibility of any such trade-off. But if you were to introduce this, Singer is clearly committed to thinking that the reason to save the child (however strong it is in isolation) could be outweighed.
Maybe I’m misunderstanding what you have in mind, but I’m not really seeing any principled basis for treating “saving lives” as in a completely separate bucket from improving quality of life. (Indeed, the whole point of QALYs as a metric is to put the two on a common scale.)
(As I argue in this paper, it’s a philosophical mistake to treat “saving lives” as having fixed and constant value, independently of how much and how good of a life extension it actually constitutes. There’s really not any sensible way to value “saving lives” over and above the welfare benefit provided to the beneficiary.)
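To make the common-scale point concrete, here is a minimal sketch of the QALY arithmetic (the quality weights and year counts are invented purely for illustration):

```python
# Toy illustration: QALYs put "saving lives" and "improving quality of
# life" on a single scale. All numbers below are hypothetical.

def qalys(years: float, quality: float) -> float:
    """Quality-adjusted life years: years lived, weighted by quality
    (0 = as bad as death, 1 = full health)."""
    return years * quality

# Averting a young child's death: say ~60 extra years at quality 0.8.
life_saving = qalys(60, 0.8)                     # 48.0 QALYs

# Treating a chronic condition: 40 years at 0.9 instead of 0.6.
quality_gain = qalys(40, 0.9) - qalys(40, 0.6)   # 12.0 QALYs

# On this metric the two kinds of benefit are directly comparable.
print(life_saving, quality_gain)
```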
Because as soon as you start thinking the value of saving or not saving a life is [solely] instrumental in terms of suffering/output tradeoffs, the basic premise of his argument (children’s lives are approximately equal, no matter where they are) collapses. And the rest of Singer’s actions also seem to indicate that he didn’t and doesn’t believe that saving sentient lives is in danger of being swamped by cost-effective modest suffering reduction for much larger numbers of creatures whose degree of sentience he also values.
The other reason I’ve picked up on there being no quantification of any value to human lives is that you’ve called your bucket “pure suffering reduction”, not “improving quality of life”, so it’s explicitly not framed as a comprehensive measure of welfare benefit to the beneficiary (whose death ceases their suffering). The individual welfare upside to survival is absent from your framing, even if it wasn’t absent from your thinking.
If we look at broader measures like hedonic enjoyment or preference satisfaction, I think it’s much easier for humans to dominate. The relative similarity of how humans and animals experience pain isn’t necessarily matched by how they experience satisfaction.
So any conservative framing for the purpose of worldview diversification and interspecies tradeoffs involves separate “buckets” for positive and negative valences (which people are free to combine if they actually are happy with the assumption of hedonic utility and valence symmetry). And yes, I’d also have a separate bucket for “saving lives”, which again people are free to attach no additional weight to, and to selectively include and exclude different sets of creatures from.
This means that somebody can prioritise pain relief for 1000 chickens over pain relief for 1 elderly human, but still pick the human when it comes down to whose life (or lives) to save, which seems well within the bounds of reasonable belief, and similar to what a number of people who’ve thought very carefully about these issues are actually doing.
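Here is a minimal sketch of how such separate buckets could work, with purely illustrative weights (not anyone’s considered moral weights, just an assumption to show the structure):

```python
# Hypothetical "buckets" model: suffering reduction and life-saving are
# scored separately, each with its own freely chosen species weights.

SUFFERING_WEIGHTS = {"human": 1.0, "chicken": 0.01}  # chicken pain valued
LIFESAVING_WEIGHTS = {"human": 1.0, "chicken": 0.0}  # lives: humans only

def suffering_relief(species: str, count: int) -> float:
    return SUFFERING_WEIGHTS[species] * count

def lifesaving_value(species: str, count: int) -> float:
    return LIFESAVING_WEIGHTS[species] * count

# Pain relief: 1000 chickens outweigh 1 human (10.0 > 1.0)...
print(suffering_relief("chicken", 1000), suffering_relief("human", 1))
# ...but on whose life to save, the human wins (0.0 < 1.0).
print(lifesaving_value("chicken", 1000), lifesaving_value("human", 1))
```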
You’re obviously perfectly entitled to argue otherwise, but there being some sort of value to saving lives other than “suffering reduction” or “the output they produce” is a commonly held view, and the whole point of “worldview diversification” is not to defer to a single philosopher’s framing. For the record, I agree that one could make a case for saving human lives being cost-effective purely on future outputs and moonshot potential given a long enough time frame (which I think was the core of your original argument), but I don’t think that’s a “conservative” framing; I think it’s quite a niche one. I’d strongly agree with an argument that flow-through effects mean GHD isn’t only “nearterm”.
I guess I have (i) some different empirical assumptions, and (ii) some different moral assumptions (about what counts as a sufficiently modest revision to still count as “conservative”, i.e. within the general spirit of GHD).
To specifically address your three examples:
On (1): I’d guess that variance in cost (to save one life, or whatever) outweighs the variance in predictable ability to contribute. (iirc, Nick Beckstead’s dissertation on longtermism made the point that, all else equal, it would be better to save a life in a wealthy country for instrumental reasons, but that the cost difference is so great that it’s still plausibly much better to focus on developing countries in practice.)
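To see how that arithmetic could play out, here is a toy version of the point (the costs and the instrumental multiplier are assumptions for illustration, not Beckstead’s figures):

```python
# Even granting a 10x instrumental advantage per life saved in a wealthy
# country, a large enough cost gap still favours developing countries.
# All numbers are assumed.

cost_wealthy = 1_000_000      # $ per life saved (assumed)
cost_developing = 5_000       # $ per life saved (assumed)
instrumental_multiplier = 10  # assumed extra value per wealthy-country life

value_per_dollar_wealthy = instrumental_multiplier / cost_wealthy  # 1e-05
value_per_dollar_developing = 1 / cost_developing                  # 2e-04

# Developing-country life-saving still wins by 20x on these numbers.
print(value_per_dollar_developing / value_per_dollar_wealthy)
```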
Perhaps it would justify more of a shift towards the “D” side of “H&D”, insofar as we could identify any good interventions for improving economic development. But the desire for lasting improvements seems commonsensical to many people anyway (compare all the rhetoric around “root causes”, “teaching a man to fish”, etc.)
In general, extreme poverty might seem to have the most low-hanging fruit for improvement (including improvements to capacity-building). But there may be exceptions in cases of extreme societal dysfunction, in which case, again, I think it’s pretty commonsensical that we shouldn’t invest resources in places where they’d actually do less lasting good.
On (2): I don’t understand at all why this would motivate less focus on infant mortality: fixing that is an extremely cheap way to improve human capacity! I think I already mentioned in the OP that increasing fertility could also be justified in principle, but I’m not aware of any proven cheap interventions that do this in practice. Adding some child benefit support (or whatever) into the mix doesn’t strike me as unduly radical, in any case.
On (3): Greater support for education seems very commonsensical in principle (including from a broadly “global health & development” perspective), and iirc it was an early focus area of GiveWell—they just gave up because they couldn’t find any promising interventions.
So I’m not really seeing anything “bad” here. I agree that it could involve some revisions to standard GHD portfolios. But it’s very conservative in comparison to proposals to shift all GHD funding to Animal Welfare, as the orthodox “neartermist” worldview seemingly recommends.
All that said: I’m less interested in what counts as “radical” or “conservative”, and more interested in what is actually justified. The strongest argument for my reconceptualization is that all the “worldviews” I set out are very sensible and warrant support (IMO). I don’t think the same is true of all the “orthodox” worldviews.
I didn’t say your proposal was “bad”, I said it wasn’t “conservative”.
My point is just that, if GHD were to reorient around “reliable global capacity growth”, it would look very different, to the point where I think your proposal is better described as “stop GHD work, and instead do reliable global capacity growth work”, rather than the current framing of “let’s reconceptualize the existing bucket of work”.
I was replying to your sentence, “I’d guess most proponents of GHD would find (1) and (2) particularly bad.”
Oh I see, sorry for misinterpreting you.
I don’t understand at all why this would motivate less focus on infant mortality: fixing that is an extremely cheap way to improve human capacity!
This sounds plausible, but not obvious, to me. If your society has a sharply limited amount of resources to invest in the next generation, it isn’t clear to me that maximizing the number of members in that generation would be the best “way to improve human capacity” in that society. One could argue that with somewhat fewer kids, the society could provide better nutrition, education, health care, and other inputs that are rather important to adult capacity and flourishing.
To be clear, I am a strong supporter of life-saving interventions and am not advocating for a move away from these. I just think they are harder to justify on improving-capacity grounds than on the grounds usually provided for them.
One could argue that with somewhat fewer kids, the society could provide better nutrition, education, health care, and other inputs that are rather important to adult capacity and flourishing.
I think that’s an argument worth having. After all, if the claim were true then I think that really would justify shifting attention away from infant mortality reduction and towards these “other inputs” for promoting human flourishing. (But I’m skeptical that the claim is true, at least on currently relevant margins in most places.)
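One way to make that argument precise is a toy quantity-quality model (the functional form and numbers are assumptions chosen only to show where the disagreement lives):

```python
# Toy model: total adult capacity = n * f(R / n), where a fixed societal
# investment R is split evenly across n children and f is concave.
import math

def total_capacity(n_children: int, total_resources: float) -> float:
    per_child = total_resources / n_children
    return n_children * math.sqrt(per_child)  # f(x) = sqrt(x), assumed

R = 1_000_000
for n in (800, 1_000, 1_200):
    print(n, round(total_capacity(n, R)))
# 800 28284 / 1000 31623 / 1200 34641: with f = sqrt, more surviving
# children always wins; a more sharply concave f (or per-child fixed
# costs) would be needed for "fewer, better-resourced kids" to win out.
```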
You’d either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
I’m not sure I buy this disjunctive claim. Many people over humanity’s history have worked on reducing infant mortality (in technology, in policy, in direct aid, and in direct actions that prevent their own children/relatives’ children from dying). While some people worked on this because they primarily intrinsically value reducing infant mortality, I think many others were inspired by the indirect effects. And taking the long view, reducing infant mortality clearly had long-run benefits that are different from (and likely better than) equivalent levels of population growth while keeping infant mortality rates constant.
I agree reductions in infant mortality likely have better long-run effects on capacity growth than equivalent levels of population growth while keeping infant mortality rates constant, which could mean that you still want to focus on infant mortality while not prioritizing increasing fertility.
I would just be surprised if the decision from the global capacity growth perspective ended up being “continue putting tons of resources into reducing infant mortality, but not much into increasing fertility” (which I understand to be the status quo for GHD), because:
- Probably the dominant consideration for importance is how good/bad it is to grow the population, and it is unlikely that the differential effects of reducing infant mortality vs increasing fertility end up changing the decision.
- Probably it is easier/cheaper to increase fertility than to reduce infant mortality, because very little effort has been put into increasing fertility (to my knowledge).
That said, it’s been many years since I closely followed the GHD space, and I could easily be wrong about a lot of this.