Thanks! I should clarify that I’m trying to offer a principled account that can yield certain verdicts that happen to align with common sense. But I’m absolutely not trying to capture common-sense reasoning or ideas (I think those tend to be hopelessly incoherent).

So yes, my framework assumes that long-run effects matter. (I don’t think there’s any reasonable basis for preferring GHD over AW if you limit yourself to near-term effects.) But it allows that there are epistemic challenges to narrowly targeted attempts to improve the future (i.e., the traditional “longtermist” bucket of high-impact longshots). The suggestion is that increasing human capacity (via “all-purpose goods” like health, productivity, wealth, education, etc.) is less subject to epistemic discounting. Nothing about the future is certain, but I think it’s clearly positive in expectation to have more resources and healthy, well-educated, productive people available to solve whatever challenges the future may bring.