How are you defining global capacity, then? This is being argued better in other replies than I can argue it, but I think there’s a good chance that the most reasonable definition implies optimal actions very different from GiveWell’s. Although I could be wrong.
I don’t really think the metric is the important part; the important part is that we’re aiming for interventions that agree with common sense and don’t require accepting controversial philosophical positions (beyond rejecting pro-local bias, I guess).
I don’t have anything as precise as a definition, but something in the general vicinity of [direct effects on individual welfare + indirect effects on total productivity, which can be expected to improve future welfare].
It’s not a priori that GiveWell is justified by any reasonable EA framework. It is, and should be, open to dispute. So I see the debate as a feature, not a bug, of my framework. (I personally do think it is justified, and try to offer a rough indication of why, here. But I could be wrong on the empirics. If so, I would accept the implication that the funding should then be directed differently.)
On the “important part”, distinguish three steps:
(i) a philosophical goal: preserving room for common-sense causes on a reasonable philosophical basis.
(ii) the philosophical solution: specifying that reasonable basis, e.g. valuing reliable improvements (including long-term flow-through effects) to global human capacity.
(iii) a practical metric (GDP, QALYs, etc.) by which we could attempt to measure, at least approximately, how well we are achieving the specified value.
I’m not making any claims about metrics. But I do think that my proposed “philosophical solution” is important, because otherwise it’s not clear that the philosophical goal is realizable (without moral arbitrariness).
Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve “global capacity”, and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don’t see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuff, it is practically meaningless. This is why I was so drawn to this post—I think you correctly point out that “improving the lives of current humans” is not really what GHW is about!
The non-controversial stuff doesn’t have to be anti-malaria efforts or anything that GiveWell currently pursues; I agree with you there that we shouldn’t dogmatically accept these current causes. But you should really be defining your GHW worldview such that it always centers on non-controversial stuff. Is this kind of arbitrary? You bet! As you state in this post, there are at least some reasons to stay away from weird causes, so it might not be totally arbitrary. But honestly it doesn’t matter whether it’s arbitrary or not; some donors are just really uncomfortable about pursuing philosophical weirdness, and GHW should be for them.