I would imagine Will donates to multiple charities because the impact of his donations comes primarily through their ability to inspire others to donate. Because of Will’s profile as a columnist and public intellectual, he often meets with potential donors who favour one of his recommendations over the others, and Will is able to say that he also donates to that charity, which may increase the likelihood of their donating via the “actions speak louder than words” heuristic.
This would apply to others if they believe that (the impact of donations they can inspire by donating to multiple charities, minus the impact of donations they can inspire by donating only to their top recommended charity) is greater than (the direct impact of donating everything to their top recommended charity, minus the direct impact of instead splitting the donation across multiple charities). In other words, the extra donations you can inspire by splitting must outweigh the direct-impact loss from not concentrating on your top pick. Presumably Will believes that this inequality holds in his case. The exact quantity of donations you would need to inspire for it to hold depends on your assessment of the relative cost-effectiveness of the charities you are considering. Of course, in reality these quantities are virtually impossible to calculate, so there is always going to be significant uncertainty associated with this decision.
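The inequality can be sketched numerically. All the figures below are invented purely for illustration; as noted, the real quantities are virtually impossible to estimate:

```python
# Hypothetical sketch of the splitting-vs-concentrating inequality.
# Every number here is made up for illustration, not an estimate.

def net_gain_from_splitting(inspired_multi, inspired_single,
                            direct_single, direct_multi):
    """Net impact gain (arbitrary impact units) from splitting a
    donation across several charities rather than concentrating it
    on the single top recommendation."""
    inspiration_gain = inspired_multi - inspired_single
    direct_loss = direct_single - direct_multi
    return inspiration_gain - direct_loss

# Splitting is worth it only when the inspiration gain exceeds the
# direct-impact loss, i.e. when the net gain is positive.
gain = net_gain_from_splitting(
    inspired_multi=120,   # impact of donations inspired when splitting
    inspired_single=100,  # impact of donations inspired when concentrating
    direct_single=50,     # direct impact of giving everything to the top charity
    direct_multi=40,      # direct impact of the split donation itself
)
print(gain)  # 10: under these made-up numbers, splitting comes out ahead
```

Under these invented numbers the inspiration gain (20) exceeds the direct loss (10), so splitting wins; reverse the figures and concentrating would.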
It is also possible that Will is using some variant of the argument used by Julia Wise: “I wouldn’t want the whole effective altruist community to donate to only one place. So I’m okay with dividing things up a bit.” /ea/5l/where_im_giving_and_why_julia_wise/
I think the claim, which I do not necessarily support, would be this: many people give to multiple orgs as a way of selfishly benefiting themselves (by looking good and affiliating with many good causes), whereas a “good” EAer might spread their donation across multiple orgs as a way to (a) persuade the rest of the world to accomplish more good or (b) coordinate better with other EAs, à la the argument you link from Julia. (Whether there’s a morally important distinction between the layman and the EAer as things actually play out in the real world is a bit dubious. EA arguments might just be a way to show off how well you can abstractly justify your actions.)
It is also interesting to note that many of the GiveWell staff have chosen to donate to only one of their recommendations, presumably because they believe they can have more impact that way. http://blog.givewell.org/2013/12/12/staff-members-personal-donations/