FWIW I don’t think these are nitpicks—I think they point to a totally different takeaway than Mathias suggests in his (excellent) post. If there are political reforms that the various (smart, altruistically-motivated, bias-aware) camps can agree on, it seems like they should work on those instead of retreating to totally uncontroversial RCT-based interventions. Especially since the set of interventions that can be tested in RCTs doesn’t include the interventions that either group thinks are most impactful.
More to the point: it seems like both the camps Mathias describes, the EA libertarians and the Effective Samaritans, would agree that their potential influence over how political economy develops over time has much higher stakes (from a cosmopolitan moral perspective) than their potential influence over the sorts of interventions that are amenable to RCTs. It seems far from obvious that they should do the lower-stakes thing, instead of trying to find some truth-tracking approach to work on the higher-stakes thing. (E.g. only pursue the reforms that both camps want; or cooperate to build institutions/contexts that let both camps compete in the marketplace of ideas in a way that both sides expect to be truth-tracking, or just compete in the existing marketplace of ideas and hope the result is truth-tracking, etc.)
Similarly, it seems like AI accelerationists and AI decelerationists would both agree that their potential influence over how AI plays out has much higher stakes (from a cosmopolitan moral perspective) than their potential influence over the sorts of interventions that are amenable to RCTs. So it’s far from obvious that it would be better for them to do the lower-stakes thing instead of trying to find some truth-tracking approach to do the higher-stakes thing.
TBC I think Mathias’ post is excellent. I myself work partly on GHW causes, for mostly the reasons he gestures at here. Still, I wanted to spell out the opposing case as I see it.
I’m grappling with this exact issue. I think AI is the most important technology humanity will ever invent, but I’m skeptical of the EV of much work on the technology. Still, it seems like it should be the only reasonable thing to spend all my time thinking about, though even then I’m not sure I’d arrive at anything useful.
And the opportunity cost is saving hundreds of lives. I don’t think there is any other question that has cost me as much sleep as this one.