Thanks! I should clarify that I'm trying to offer a principled account that can yield certain verdicts that happen to align with commonsense. But I'm absolutely not trying to capture common-sense reasoning or ideas (I think those tend to be hopelessly incoherent).
So yes, my framework assumes that long-run effects matter. (I don't think there's any reasonable basis for preferring GHD over AW if you limit yourself to near-term effects.) But it allows that there are epistemic challenges to narrowly targeted attempts to improve the future (i.e. the traditional "longtermist" bucket of high-impact longshots). The suggestion is that increasing human capacity (via "all-purpose goods" like health, productivity, wealth, education, etc.) is less subject to epistemic discounting. Nothing about the future is certain, but I think it's clearly positive in expectation to have more resources and healthy, well-educated, productive people available to solve whatever challenges the future may bring.