This is an interesting and thoughtful post.
One query: to me, labeling GHD as “reliable human capacity growth” conflicts with your stated idea of GHD sticking to more common-sense, empirically grounded notions of doing good.
Doesn’t the capacity-growth argument presuppose a belief in the importance of long-run effects, i.e. longtermism? A common-sense view here seems closer to a time-discounting argument: the future is too uncertain, so we help people as best we can within a timeframe over which we can be reasonably confident of our effects.
Thanks! I should clarify that I’m trying to offer a principled account that can yield certain verdicts that happen to align with common sense. But I’m absolutely not trying to capture common-sense reasoning or ideas (I think those tend to be hopelessly incoherent).
So yes, my framework assumes that long-run effects matter. (I don’t think there’s any reasonable basis for preferring GHD over AW if you limit yourself to near-term effects.) But it allows that there are epistemic challenges to narrowly targeted attempts to improve the future (i.e. the traditional “longtermist” bucket of high-impact longshots). The suggestion is that increasing human capacity (via “all-purpose goods” like health, productivity, wealth, education, etc.) is less subject to epistemic discounting. Nothing about the future is certain, but I think it’s clearly positive in expectation to have more resources and more healthy, well-educated, productive people available to solve whatever challenges the future may bring.