How best to aggregate judgements about donations?
For people who care about the outside view, or who generally favour equal-weight or conciliatory views in social epistemology, a good survey of what people think can be very valuable: one can look for epistemic superiors and follow their lead, and the 'wisdom of the crowds' means that the central view of a group of people (even if they are individually less informed than you) may still be closer to the mark, as the individual errors have a chance to cancel.
One area where this might be particularly valuable is judgements about donations: although there is a lot of heterogeneity in the decisions individual donors make, there seems to be scope for donors to improve their donation decisions if they had better access to what other EAs thought. One could imagine an extreme case where a donor, unsure where best to donate, follows a published 'EA consensus position', which recommends donations by aggregating the judgements of other ('expert'?) EAs.
I'd guess people in this forum may have thought about this, or about something related. I'd be interested in canvassing opinion on i) whether this is worth exploring further, and ii) the best way it could be done. Besides the case above, I would guess the infrastructure required to make information about what other EAs think more fluid could have value elsewhere (e.g. moral trade).
So, any ideas?
[This was inspired by conversations with Alison Woodman and Owen Cotton-Barrett. Neither is responsible for my thoughts or this post.
As a personal anecdote, I had tried to do my own version of this over the last year by giving to the GWWC trust, with the instruction to 'Give my donation to all organisations the trust gives money to, in the same proportions'. For a variety of reasons, this was probably worse than simply giving to my own best guess as to the best cause. I have a few ideas of what to do instead, but I'll comment later to avoid priming etc.]
If people would like to discuss donation decisions, a few of us created this mailing list a while ago: http://skillshare.im/offers/163
Revealed preference: look at all those donating through Giving What We Can, and scale each donor's contributions by an appropriate equaliser so you aren't just reflecting the views of the rich. But that only captures knowledge about favourites, not why donors weren't choosing the others.
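The equalising step could be sketched as follows. This is a minimal illustration, not anything from the thread: I've assumed 'equaliser' means normalising each donor's allocation to sum to 1, so that a donor's wealth doesn't determine their influence, and the donor data is made up.

```python
from collections import defaultdict

def aggregate_equalised(donors):
    """Aggregate donation allocations, giving each donor equal weight.

    Each donor's donations are normalised to sum to 1, so a donor
    giving £10,000 counts no more than one giving £100.
    """
    totals = defaultdict(float)
    for donations in donors:
        donor_total = sum(donations.values())
        if donor_total == 0:
            continue  # skip donors with no recorded donations
        for charity, amount in donations.items():
            totals[charity] += amount / donor_total
    n = len(donors)
    # Average the normalised shares across all donors.
    return {charity: share / n for charity, share in totals.items()}

# Hypothetical data: one large donor, one small donor.
donors = [
    {"AMF": 9000.0, "GiveDirectly": 1000.0},
    {"GiveDirectly": 100.0},
]
print(aggregate_equalised(donors))
# → {'AMF': 0.45, 'GiveDirectly': 0.55}
```

Without the normalisation, the first donor's £10,000 would swamp the second donor's £100 entirely; with it, each donor's view counts equally.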
I suspect anything beyond that is going to be either informationally poor or an administrative burden equivalent to GiveWell's evaluations.
But I'm not sure this kind of exercise is going to beat GiveWell's approach, as they seem to canvass pretty widely and seek out outside views?
Maybe the answer is to encourage GiveWell to look more broadly, or to compete with them or Animal Charity Evaluators and try to evaluate everything?
You could use the public data from the EA survey (or the named donation choices of people on the EA Donation Registry) for this.
Some people might want to give different weight to different EAs according to personal criteria, such as epistemological or ethical views, bias, etc.
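A weighted variant of the same aggregation could accommodate this. Again a sketch under my own assumptions: the weights are whatever credences the aggregating donor assigns to each source donor, on whatever criteria they like, and the normalisation step is as before.

```python
def aggregate_weighted(donors, weights):
    """Aggregate normalised donation allocations with per-donor weights.

    `donors` is a list of {charity: amount} dicts; `weights` gives the
    credence the aggregating donor places on each source donor.
    """
    totals = {}
    weight_sum = 0.0
    for donations, w in zip(donors, weights):
        donor_total = sum(donations.values())
        if donor_total == 0 or w == 0:
            continue
        weight_sum += w
        for charity, amount in donations.items():
            # Normalise to shares, then scale by this donor's weight.
            totals[charity] = totals.get(charity, 0.0) + w * amount / donor_total
    return {charity: v / weight_sum for charity, v in totals.items()}

# Hypothetical: trust the first donor's judgement twice as much.
donors = [{"AMF": 100.0}, {"GiveDirectly": 50.0}]
print(aggregate_weighted(donors, [2.0, 1.0]))
```

Setting all weights equal recovers the plain equal-weight average; weights of zero simply exclude a donor.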