But for the purposes of my questions above, that’s not the relevant factor; the relevant factor is: does someone know, and have they made those arguments [that specific intervention X will wildly outperform] publicly, in a way that we could learn from if we were more open to less quantitative analysis?
I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don’t actually make such a case; when I cash out people’s claims, it usually turns out they are asserting 10x–100x multipliers, not 100x–1000x multipliers, let alone anything higher. It appears the divergence in our bottom lines comes from my cosmopolitan values and low tolerance for act/omission distinctions, and hopefully we at least agree that if even the entrenched advocate doesn’t actually think their cause is best under my values, I should just move on.
As an aside, I know you wrote recently that you think more work is being done by EA’s empirical claims than its moral claims. I think this is credible for longtermism but mostly false for Global Health/Poverty. People appear to agree that lives can be saved in the developing world incredibly cheaply; in fact they usually give lower figures than I think are possible. So we aren’t actually that far apart on the empirical state of affairs. They just don’t want to give. And they aren’t refusing because they have even better things to do, because most people do very little. Or as Rob put it:
Many people donate a small fraction of their income, despite claiming to believe that lives can be saved for remarkably small amounts. This suggests they don’t believe they have a duty to give even if lives can be saved very cheaply – or that they are not very motivated by such a duty.
I think that last observation would also be my answer to ‘what evidence do we have that we aren’t in the second world?’ Empirically, most people don’t care, and most people who do care are not trying to optimise for the thing I am optimising for (in many cases it’s debatable whether they are trying to optimise at all). So it would be surprising if they hit the target anyway, in much the same way it would be surprising if AMF turned out to be the best way to improve animal welfare.