Here are my less rushed thoughts on why this line of thought is mistaken. Would have been better to do this as a comment in the first place—sorry about that.
This is a shorter and less rushed version of the argument I made in an earlier post on counterfactual impact, which could have been better in a few ways. Hopefully, people will find this version clearer and more convincing.
Suppose that we are assessing the total lifetime impact of two agents: Darren, a GWWC member who gives $1m to effective charities over the course of his life; and GWWC, which, let’s assume in this example, moves only Darren’s money to effective charities. If Darren had not heard of GWWC, he would have had zero impact, and if GWWC had not had Darren as a member it would have had zero impact.
When we ask how much lifetime counterfactual impact someone had, we are asking how much impact they had compared to the world in which they did not exist. On this approach, when we are assessing Darren’s impact, we compare two worlds:
Actual world: Darren gives $1m to GWWC recommended charities.
Counterfactual worldD: Darren does not exist and GWWC acts as it would have if Darren did not exist.
In the actual world, an additional $1m is given to effective charities compared to Counterfactual worldD. Therefore, Darren’s lifetime counterfactual impact is $1m. Similarly, when we are assessing GWWC’s counterfactual impact, we compare two worlds:
Actual world: GWWC recruits Darren, ensuring that $1m goes to effective charities.
Counterfactual worldG: GWWC does not exist and Darren acts as he would have done if GWWC did not exist.
In the actual world, an additional $1m is given to effective charities compared to Counterfactual worldG. Therefore, GWWC’s lifetime counterfactual impact is $1m.
This seems to give rise to the paradoxical conclusion that the combined lifetime counterfactual impact of GWWC and Darren is $2m, which is absurd, as this exceeds the total benefit produced. We would assess the lifetime counterfactual impact of Darren and GWWC collectively by comparing two worlds:
Actual world: GWWC recruits Darren, ensuring that $1m goes to effective charities.
Counterfactual worldD&G: GWWC does not exist and Darren does not exist.
The difference between the Actual world and Counterfactual worldD&G is $1m, not $2m, so, the argument goes, the earlier method of calculating counterfactual impact must be wrong. The hidden premise here is:
Premise. The sum of the counterfactual impacts of any two agents, A and B, taken individually, must equal the counterfactual impact of A and B, taken collectively.
In spite of its apparent plausibility, this premise is false. It implies that the conjunction of the counterfactual worlds we use to assess the counterfactual impact of two agents, taken individually, must be the same as the counterfactual world we use to assess the counterfactual impact of two agents, taken collectively. But this is not so. The conjunction of the counterfactual worlds we use to assess the impact of Darren and GWWC, taken individually, is:
Counterfactual worldD+G: GWWC does not exist and Darren acts as he would have done if GWWC did not exist; and Darren does not exist and GWWC acts as it would have done if Darren did not exist.
This world is not equivalent to Counterfactual worldD&G. Indeed, in this world Darren does not exist and yet acts as he would have done had GWWC not existed. But if GWWC had not existed, Darren would, ex hypothesi, still have existed. Therefore, this is not a description of the relevant counterfactual world which determines the counterfactual impact of both Darren and GWWC. This shows that you cannot unproblematically aggregate counterfactual worlds; it does not show that we assessed the counterfactual impact of Darren or GWWC in the wrong way.
To reiterate this point: when we assess Darren’s lifetime counterfactual impact, we ask “what would have happened if only Darren hadn’t existed?” When we assess Darren and GWWC’s lifetime counterfactual impact, we ask “what would have happened if neither Darren nor GWWC had existed?” These questions inevitably produce different answers about what GWWC would have done: in one case, we ask what GWWC would have done if Darren hadn’t existed, and in the other we assume GWWC doesn’t even exist. This is why we get surprising answers when we mistakenly try to aggregate the counterfactual impact of multiple agents.
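The arithmetic above can be sketched in a few lines (toy numbers from the example; the point is only that each individual impact is computed against a different comparison world, so the individual impacts need not sum to the collective one):

```python
# Dollars reaching effective charities in each world of the Darren/GWWC example.
actual = 1_000_000          # Darren gives $1m via GWWC
world_without_darren = 0    # GWWC moves no money without Darren
world_without_gwwc = 0      # Darren gives nothing without GWWC
world_without_both = 0      # neither Darren nor GWWC exists

darren_impact = actual - world_without_darren      # $1m, vs Counterfactual worldD
gwwc_impact = actual - world_without_gwwc          # $1m, vs Counterfactual worldG
collective_impact = actual - world_without_both    # $1m, vs Counterfactual worldD&G

# The individual impacts use different comparison worlds, so their sum ($2m)
# does not equal the collective impact ($1m) -- and need not.
assert darren_impact + gwwc_impact != collective_impact
```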
I agree with you that impact is importantly relative to a particular comparison world, and so you can’t straightforwardly sum different people’s impacts. But my impression is that Joey’s argument is actually that it’s important for us to try to work collectively rather than individually. Consider a case of three people:
Anna and Bob each have $600 to donate, and want to donate as effectively as possible. Anna is deciding between donating to TLYCS and AMF, Bob between GWWC and AMF. Casey is currently not planning to donate, but if introduced to EA by TLYCS and convinced of the efficacy of donating by GWWC, would donate $1000 to AMF.
It might be the case that Anna knows that Bob plans to donate to GWWC, and therefore she’s choosing between causing $600 of impact and causing $1000. I take Joey’s point to be not that you can’t think of Anna’s impact as being $1000, but that it would be better to concentrate on the collective case than on the individual case. Rather than considering what her impact would be holding Bob’s actions fixed ($1000 if she donates to TLYCS, $600 if she gives to AMF), Anna should try to coordinate with Bob and think about their collective impact ($1200 if they give to AMF, $1000 if they give to TLYCS/GWWC).
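A quick sketch of the payoffs in this three-person case (a hypothetical function restating the arithmetic above, counting dollars that end up at AMF):

```python
# Anna and Bob each have $600; Casey's $1000 reaches AMF only if
# both TLYCS (which introduces him to EA) and GWWC (which convinces him
# of the efficacy of donating) are funded.
def money_to_amf(anna: str, bob: str) -> int:
    total = 0
    if anna == "AMF":
        total += 600
    if bob == "AMF":
        total += 600
    if anna == "TLYCS" and bob == "GWWC":
        total += 1000  # Casey is recruited and gives to AMF
    return total

# Anna's choice, holding Bob fixed at GWWC:
assert money_to_amf("TLYCS", "GWWC") == 1000
assert money_to_amf("AMF", "GWWC") == 600

# Coordinating on the collective choice does better still:
assert money_to_amf("AMF", "AMF") == 1200
```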
Given that, I would add ‘increased co-ordination’ to the list of things that could help with the problem. Given the highlighted fact that multiple steps by different organisations are often required to achieve a particular impact, we should be thinking not just about how to optimise each step individually but also about the process overall.
I think this is a fair comment. I probably misinterpreted the main emphasis of the piece. I thought his main point was that each of the organisations is misstating its impact. I do think this was part of the argument, and it seems a few others read it that way too, given that some people started talking about dividing up credit according to the Shapley value. But I think the main part is about coordination, and I agree wholeheartedly with his points and yours on that front.
I’m interested in what norms we can use to better deal with the practical case.
e.g. Suppose:
1) GiveWell does research for a cost of $6
2) TLYCS does outreach using the research for a cost of $6
3) $10 is raised as a result.
Assume that if GiveWell didn’t do the research, TLYCS wouldn’t have raised the $10, and vice versa.
If you’re a donor working out where to give, how should you approach the situation?
If you consider funding TLYCS with GiveWell held fixed, then you can spend $6 to raise $10, which is worth doing. But if you consider funding GiveWell+TLYCS together, then you can spend $12 to raise $10, which is not worth doing.
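The two margins can be written out explicitly (toy numbers from the example above):

```python
# GiveWell/TLYCS example: the same $10 looks worth funding or not
# depending on which margin the donor evaluates.
research_cost = 6   # GiveWell's research
outreach_cost = 6   # TLYCS's outreach
raised = 10         # money moved to effective charities as a result

# Marginal view: the research happens regardless; you only pay for outreach.
marginal_net = raised - outreach_cost                  # +4: worth funding
# Joint view: you must pay for both steps.
joint_net = raised - (research_cost + outreach_cost)   # -2: not worth funding

assert marginal_net > 0 and joint_net < 0
```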
It seems like the solution is that the donor needs to think very carefully about which margin they’re operating at. Here are a few options:
A) If GiveWell will definitely do the research whatever happens, then you ought to give.
B) Maybe GiveWell won’t do the research if they don’t think anyone will promote it, so the two orgs are coupled, and that means you shouldn’t fund either. (Funding TLYCS causes GiveWell to raise more, which is bad in this case.)
C) If you’re a large donor who is able to cover both funding gaps, then you should consider the value of funding the sum, rather than each org individually.
It seems true that donors don’t often consider situations like (B), which might be a mistake. Though sometimes they do—e.g. GiveWell considers the costs of malaria net distribution incurred by other actors.
Likewise, it seems like donors often don’t consider situations like (C). e.g. If there are enough interactions, maybe the EA Funds should calculate the cost-effectiveness of a portfolio of EA orgs, rather than estimate the ratios for each individual org.
On the other hand, I don’t think these cases where two orgs are both 100% necessary for 100% of the impact are actually that common. In practice, if GiveWell didn’t exist, TLYCS would do something else with the $6, which would mean they raise somewhat less than $10; and vice versa. So, the two impacts are fairly unlikely to add up to much more than $12.
In case B, it looks to me like the donor should give to TLYCS under certain conditions, but not others.
(a) Suppose: Because you gave to TLYCS, GiveWell does the research at a cost of $6, fundraising from an otherwise ineffective donor, and getting $10 to GW charities. In this case, your $6 has raised $10 for effective charities minus the $6 from an otherwise ineffective donor (~0 value). So, I don’t think causing GW to fundraise further would be bad in this case. Coordinating with GW to just get them to fundraise for donations to their effective charities is even better in this case, but donating to TLYCS is better than doing nothing.
(b) Suppose: Same as before, except GW fundraises from an effective donor who would otherwise have given the $6 to GW charities. In this case, giving to TLYCS is worse than doing nothing: you have spent $6 getting $10 to GW charities, minus the $6 the effective donor would have given had you not acted, so you’ve spent $6 getting $4 to effective charities. Doing nothing would be better, as then $6 goes to effective charities.
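The arithmetic in (a) and (b) can be laid out side by side (toy numbers as above; each pair compares dollars reaching effective charities if you act versus if you do nothing):

```python
your_donation = 6   # what you give to TLYCS
raised = 10         # what GiveWell's fundraising moves to GW charities

# (a) The $10 comes from an otherwise ineffective donor (counterfactual value ~0):
act_a, nothing_a = raised, 0        # $10 vs $0 to effective charities
# (b) The $10 comes from a donor who would otherwise have given $6 to GW charities:
act_b, nothing_b = raised - 6, 6    # $4 vs $6 to effective charities

assert act_a > nothing_a  # (a): giving to TLYCS beats doing nothing
assert act_b < nothing_b  # (b): doing nothing is better
```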
This shows that the counterfactual impact of funged/leveraged donations needs to be considered carefully. GiveWell is starting to do this—e.g. if govt money is leveraged or funged they try to estimate the cost-effectiveness of govt money. Outside that, this is probably something EA donors should take more account of.
Another case that should be considered is causing irrational prioritisation of a given amount of funds. Imagine case (a) above, except that instead of fundraising, GiveWell moves money from another research project with a counterfactual value of $9 to GW charities, because they have not considered these coordination effects (they reason that $10 > $9). In that case, you’re spending $6 to get $10 to GW charities minus the $9 that would have gone to GW charities: $6 for $1 of net value.
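Spelled out with the numbers above:

```python
# Reprioritisation case: your $6 causes GiveWell to displace a project
# whose counterfactual value was $9 in order to move $10 to GW charities.
cost = 6
moved_to_gw_charities = 10
foregone_project_value = 9

net = moved_to_gw_charities - foregone_project_value  # $1 of net value
assert net < cost  # you spent $6 to produce $1 of net value
```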
Regarding C, this seems right. It would be a mistake for the EA Funds to calculate their impact as the sum of the impacts of each of the individual grants they have made.