I’d be curious if you have any thoughts on how your proposed refactoring from [neartermist human-only / neartermist incl. AW / longtermist] → [pure suffering reduction / reliable global capacity growth / moonshots] might change, in broad strokes (i.e. direction & OOM of change), current:

- funding allocation (proxies: Maule’s funding by cause area, McDoodles)
- career advising (proxies: 80K problems and skills, Probably Good profiles)
Or maybe these are not the right questions to ask / I’m looking at the wrong things, since you seem to be mainly aiming at research (re)prioritisation?
I do agree with
> we should be especially cautious of completely dismissing commonsense priorities in a worldview-diversified portfolio (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas)
although I thought the sandboxing of cluster thinking (vs sequence thinking) handles that just fine:
> A key difference with “sequence thinking” is the handling of certainty/robustness (by which I mean the opposite of Knightian uncertainty) associated with each perspective. Perspectives associated with high uncertainty are in some sense “sandboxed” in cluster thinking: they are stopped from carrying strong weight in the final decision, even when such perspectives involve extreme claims (e.g., a low-certainty argument that “animal welfare is 100,000x as promising a cause as global poverty” receives no more weight than if it were an argument that “animal welfare is 10x as promising a cause as global poverty”).
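To make that sandboxing mechanic concrete, here’s a minimal toy sketch of how I read the passage; the binary robust/speculative split and the 10x cap are my own illustrative assumptions, not anything the quoted text specifies:

```python
# Toy model of "sandboxed" cluster thinking (illustrative only):
# low-certainty perspectives are capped at a fixed multiplier before
# they can influence the final decision.

def sandboxed_multiplier(claimed: float, robust: bool,
                         cap_for_uncertain: float = 10.0) -> float:
    """Cap what a low-certainty perspective may contribute.

    A robust perspective keeps its full claimed multiplier; a
    speculative one is clipped, so a speculative "100,000x" claim
    carries exactly the same weight as a speculative "10x" claim.
    (The 10x cap is an assumed parameter, not from the source.)
    """
    return claimed if robust else min(claimed, cap_for_uncertain)

# Both speculative claims land on the same capped value:
assert sandboxed_multiplier(100_000, robust=False) == 10.0
assert sandboxed_multiplier(10, robust=False) == 10.0
```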
I don’t really know enough about the empirics to add much beyond the possible “implications” flagged at the end of the post. Maybe the clearest implication is just the need for further research into flow-through effects, to better identify which interventions are most promising by the lights of reliable global capacity growth (since that question seems to have been unduly neglected to date).
Thanks for flagging the “sandboxing” argument against AW swamping of GHD. I guess a lot depends there on how uncertain the case for AW effectiveness is. (I didn’t have the impression that it was especially uncertain, such that it belongs more in the “moonshot” category. But maybe I’m wrong about that.) But if there are reasonable grounds for viewing AW as at least an order of magnitude more effective than GHD in terms of its immediate effects, and no such strong countervailing arguments for viewing AW as at least an order of magnitude less effective, then it seems like it would be hard to justify allocating more funding to GHD than to AW, purely on the basis of the immediate effects.
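To illustrate with entirely made-up numbers: even heavily discounting the case for AW (say, even odds that it is wrong and AW is merely on par with GHD), the expected immediate effectiveness still favours AW:

```python
# Hypothetical numbers, purely for illustration: AW is 10x GHD per
# dollar if the case for AW holds, and 1x (on par) if it doesn't.
p_case_holds = 0.5
ev_aw_over_ghd = p_case_holds * 10 + (1 - p_case_holds) * 1

print(ev_aw_over_ghd)  # 5.5 -- still well above parity with GHD
```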