Oops, definitely didn't mean any derogation; I'm a big fan of moonshots, er, speculative high-uncertainty (but high EV) opportunities! [Update: I've renamed them to "High-impact long-shots".]
I disagree on "capacity growth", though: that one actually has descriptive content, which "common-sense global interventions" lacks. (They are interventions to achieve what, exactly?)
How are you defining global capacity, then? This is currently being argued in other replies better than I could argue it, but I think there's a good chance that the most reasonable definition implies optimal actions very different from GiveWell's. Although I could be wrong.
I don't really think the important part is the metric; the important part is that we're aiming for interventions that agree with common sense and don't require accepting controversial philosophical positions (beyond rejecting pro-local bias, I guess).
I don't have anything as precise as a definition, but something in the general vicinity of [direct effects on individual welfare + indirect effects on total productivity, which can be expected to improve future welfare].
It's not a priori that GiveWell is justified by any reasonable EA framework. It is, and should be, open to dispute. So I see the debate as a feature, not a bug, of my framework. (I personally do think it is justified, and try to offer a rough indication of why, here. But I could be wrong on the empirics. If so, I would accept the implication that the funding should then be directed differently.)
On the "important part", distinguish three steps:
(i) a philosophical goal: preserving room for common-sense causes on a reasonable philosophical basis.
(ii) the philosophical solution: specifying that reasonable basis, e.g. valuing reliable improvements (including long-term flow-through effects) to global human capacity.
(iii) providing a metric (GDP, QALYs, etc.) by which we could attempt to measure, at least approximately, how well we are achieving the specified value.
I'm not making any claims about metrics. But I do think that my proposed "philosophical solution" is important, because otherwise it's not clear that the philosophical goal is realizable (without moral arbitrariness).
Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuff, it is practically meaningless. This is why I was so drawn to this post: I think you correctly point out that "improving the lives of current humans" is not really what GHW is about!
The non-controversial stuff doesn't have to be anti-malaria efforts or anything that GiveWell currently pursues; I agree with you there that we shouldn't dogmatically accept these current causes. But you should really be defining your GHW worldview such that it always centers on non-controversial stuff. Is this kind of arbitrary? You bet! As you state in this post, there are at least some reasons to stay away from weird causes, so it might not be totally arbitrary. But honestly it doesn't matter whether it's arbitrary or not; some donors are just really uncomfortable about pursuing philosophical weirdness, and GHW should be for them.