I think we're still talking past each other here.
You seem to be implicitly focusing on the question "how certain are we that these will turn out to be best?" I'm focusing on the question "Denise and I are likely to make a donation to near-term human-centric causes in the next few months; is there something I should be donating to above GiveWell charities?"
Listing unaccounted-for second-order effects is relevant for the first question, but not decision-relevant until the effects are predictable-in-direction and large; they need to actually move my EV meaningfully. Currently, I'm not seeing a clear argument for that. "Might have wildly large impacts", "very rough estimates", "policy can have enormous effects"... these are all phrases that increase uncertainty rather than concretely changing EVs, and so are decision-irrelevant. (That's not quite true; we should penalise the calculated EV of rough estimates more in high-uncertainty environments due to winner's-curse effects, but that's secondary to my main point here.)
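A minimal sketch of that winner's-curse penalty, under assumptions of my own (every option secretly has the same true EV, and each estimate is the true EV plus unbiased Gaussian noise; none of these numbers are anyone's real figures):

```python
import random

random.seed(0)

TRUE_EV = 1.0      # hypothetical: every option is actually equally good
N_OPTIONS = 20     # number of rough alternatives under consideration
N_TRIALS = 10_000

def best_looking_estimate(noise_sd: float) -> float:
    """Average estimated EV of whichever option *looks* best, when each
    estimate is the true EV plus unbiased noise of the given scale."""
    total = 0.0
    for _ in range(N_TRIALS):
        estimates = [random.gauss(TRUE_EV, noise_sd) for _ in range(N_OPTIONS)]
        total += max(estimates)  # we fund whichever estimate is highest
    return total / N_TRIALS

# Rougher estimates inflate the apparent EV of the "winner" more,
# even though every option's true EV is 1.0 by construction.
for sd in (0.1, 0.5, 2.0):
    print(f"noise sd {sd}: winner's estimated EV ~ {best_looking_estimate(sd):.2f}")
```

By construction the true EV is 1.0 everywhere, so anything above 1.0 in the winner's estimate is illusory; the rougher the estimates, the bigger the illusion, which is the reason to penalise rough EVs harder.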
Another way of putting it: this is the difference between one's confidence that what currently looks best will still look best 20 years from now, and trying to identify the best all-things-considered donation opportunity right now with limited information.
So concretely, I think it's very likely that in 20 years I'll think one of the >20 alternatives I've briefly considered will look like it was a better use of my money than GiveWell charities, due to the uncertainty you're highlighting. But I don't know which one, and I don't expect it to outperform by 20x, so picking one essentially at random still looks pretty bad.
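To make the 20x arithmetic explicit, here is a toy model with hypothetical numbers of my own (exactly one of the ~20 alternatives really is better, and the rest add nothing over GiveWell):

```python
# Value measured in multiples of a GiveWell donation (all numbers hypothetical).
def ev_random_pick(n_options: int = 20, winner_multiplier: float = 20.0,
                   loser_value: float = 0.0) -> float:
    """Expected value of funding one of n options at random, when exactly
    one is the winner and the rest are each worth loser_value."""
    return winner_multiplier / n_options + loser_value * (n_options - 1) / n_options

print(ev_random_pick(winner_multiplier=20))  # 1.0: merely break-even with GiveWell
print(ev_random_pick(winner_multiplier=10))  # 0.5: the random pick loses
```

On these assumptions a blind pick only breaks even with GiveWell if the unknown winner outperforms by roughly the number of candidates; anything less and the random pick has lower EV.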
A non-random way to pick would be if Open Phil, or someone else I respect, shifted their equivalent donation bucket to some alternative. AFAIK, this hasn't happened. That's the relevance of those decisions to me, rather than any belief that they've done a secret Uber-Analysis.
Hmm, I agree that we're talking past each other. I don't intend to focus on ex post evaluations over ex ante evaluations. What I intend to focus on is the question: "When an EA makes the claim that GiveWell charities are the charities with the strongest case for impact in near-term human-centric terms, how justified are they?" Or, relatedly, "How likely is it that somebody who is motivated to find the best near-term human-centric charities possible, but takes a very different approach than EA does (in particular by focusing much more on hard-to-measure political effects), will do better than EA?"
In my previous comment, I used a lot of phrases which you took to indicate the high uncertainty of political interventions. My main point was that it's plausible that a bunch of them exist which will wildly outperform GiveWell charities. I agree I don't know which one, and you don't know which one, and GiveWell doesn't know which one. But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments publicly, in a way that we could learn from if we were more open to less quantitative analysis? (Alternatively, could someone know if they tried? But let's go with the former for now.)
In other words, consider two possible worlds. In one world, GiveWell charities are in fact the most cost-effective, and all the people doing political advocacy are less cost-effective than GiveWell ex ante (given publicly available information). In the other world, there's a bunch of people doing political advocacy work which EA hasn't supported, even though they have strong, well-justified arguments that their work is very impactful (more impactful than GiveWell's top charities), because that impact is hard to quantitatively estimate. What evidence do we have that we're not in the second world? In both worlds GiveWell would be saying roughly the same thing (because they have a high bar for rigour). Would Open Phil be saying different things in different worlds? Insofar as their arguments in favour of GiveWell are based on back-of-the-envelope calculations like the ones I just saw, they'd be saying the same thing in both worlds, because those calculations seem insufficient to capture most of the value of the most cost-effective political advocacy. Insofar as their belief that it's hard to beat GiveWell is based on other evidence which might distinguish between these two worlds, they don't explain this in their blog post, which means I don't think the post is strong evidence in favour of GiveWell's top charities for people who don't already trust Open Phil a lot.
But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments [that specific intervention X will wildly outperform] publicly, in a way that we could learn from if we were more open to less quantitative analysis?
I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don't actually make a case; when I cash out people's claims it usually turns out they are asserting 10x–100x multipliers, not 100x–1000x multipliers, let alone higher than that. It appears the divergence in our bottom lines is coming from my cosmopolitan values and low tolerance for act/omission distinctions, and hopefully we at least agree that if even the entrenched advocate doesn't actually think their cause is best under my values, I should just move on.
As an aside, I know you wrote recently that you think more work is being done by EA's empirical claims than by its moral claims. I think this is credible for longtermism but mostly false for Global Health/Poverty. People appear to agree that lives can be saved in the developing world incredibly cheaply; in fact they usually give lower numbers than I think are possible. So we aren't actually that far apart on the empirical state of affairs; they just don't want to give. Nor are they declining because they have even better things to do, since most people do very little. Or as Rob put it:
Many people donate a small fraction of their income, despite claiming to believe that lives can be saved for remarkably small amounts. This suggests they don't believe they have a duty to give even if lives can be saved very cheaply – or that they are not very motivated by such a duty.
I think that last observation would also be my answer to "what evidence do we have that we aren't in the second world?" Empirically, most people don't care, and most people who do care are not trying to optimise for the thing I am optimising for (in many cases it's debatable whether they are trying to optimise at all). So it would be surprising if they hit the target anyway, in much the same way it would be surprising if AMF were the best way to improve animal welfare.