Let's say you're going to donate some money, and plan to put some time into figuring out the best place to donate. The more time you put into your decision the better it's likely to be, but at some point you need to stop looking and actually make the donation. The more you're donating, the longer it makes sense to keep looking, since it's more valuable to better direct a larger amount of money. Can you get the benefits of the deeper research without increasing time spent on research? Weirdly, you can! About five years ago, Carl Shulman proposed (and ran the first) "donor lottery".
Say instead of donating $1k, you get together with 99 other people and you each put in $1k. One of the 100 of you is selected at random (1:100) to choose where the pool ($100k) goes. This turns your 100% chance of directing $1k into a 1% chance of directing $100k; a quick expected-value sketch follows the list below. The goal is to make research more efficient:
If you win, you're working with enough money that it's worth it for you to put serious time into figuring out your best donation option.
If you lose, you don't need to put any time into determining where your money should go.
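Here is a minimal sketch of the expected-value arithmetic, using the numbers from the example above; the effectiveness multiplier for better-researched giving is an illustrative assumption, not a measured figure:

```python
# Donor lottery expected-value sketch (illustrative numbers only).
contribution = 1_000                  # what each participant puts in ($)
participants = 100
pool = contribution * participants    # $100k total
win_prob = 1 / participants           # 1% chance of choosing where the pool goes

baseline_quality = 1.0    # effectiveness of a lightly-researched $1k donation
researched_quality = 1.2  # assumed effectiveness after serious research (made up)

# Expected dollars you direct are the same either way:
expected_directed = win_prob * pool                     # 1000.0

# Expected impact, in arbitrary "effectiveness-weighted dollars":
donate_directly = contribution * baseline_quality       # 1000.0
enter_lottery = win_prob * pool * researched_quality    # 1200.0

print(expected_directed, donate_directly, enter_lottery)
```

In expectation the same amount of money is moved either way; the lottery only comes out ahead if the extra research actually improves where the money goes.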
This isn't that different from giving your money to GiveWell or similar to decide how to distribute, in that it's delegating the decision to someone who can put in more research. Except that it doesn't require identifying someone better at allocating funds than you are; it just requires that you would:
Make better decisions in allocating $100k than $1k
Prefer a 1% chance of a well-allocated $100k to a 100% chance of a somewhat less well-allocated $1k.
If you come at this from a theoretical perspective, it can seem really neat: better-considered donations, more efficient use of research time, strictly better, literally no downsides, why would you not do this?
Despite basically agreeing with all of the above, however, I've not participated in a donor lottery, and I think they're likely slightly negative on balance. There actually are downsides, just not in areas that the logic above considers.
The biggest downside is that it makes your donation decisions less legible: it's harder for people to understand what you're doing and why. Let's say you're talking to a friend:
Friend: You're really into charity stuff, right? Where did you decide to donate this year?
You: I put my money into a donor lottery. I traded my $10k for a 1% chance of donating $1M.
Friend: Did you win?
You: No, but I'm glad I did it! Let me explain why...
There are a few different ways this could go. Your friend could:
(a) Listen to your explanation at length, and think "this is really cool, more efficient donation allocation, I bet EA is full of great ideas like this, I should learn more!"
(b) Listen to your explanation at length, and think "maybe this works, but it seems like it could probably go wrong somehow."
(c) Not have time or interest for the full explanation, and be left thinking you irresponsibly gambled away your annual donation.
I think (b) and (c) are going to happen often enough to outweigh both the benefit of (a) and the benefit of the additional research. And this is in some sense a best case: your friend thinks well of you and is likely to give you the benefit of the doubt. The same conversation with someone you know less well, or who doesn't have the same baseline assumption that you're an honest and principled person trying to do the right thing, would likely go very poorly.
The other problem with that conversation is that you're not really answering the question! They're trying to figure out where they should donate, and are looking for your advice. Even if they come away thinking your decision to participate in the lottery makes some sense, they're unlikely to decide to participate the first time they hear about the idea, and so they still need to decide where to give. It's much better if you can explain what charity you picked and how you were thinking about it, and be able to answer followups. Importantly, I think this does a better job than explaining the donor lottery at illustrating EA thinking and at helping people figure out whether learning more about EA is something they'd enjoy.
I also have two smaller objections to donor lotteries:
If with some research you have a good chance of identifying better donation opportunities than "give to GiveWell or EA Funds", I'd be excited for you to do that and write up your results. I think you'd likely influence other funders enough that the time investment would be worth it, and you'd learn a lot. If this goes well you get the benefit of winning a donor lottery without having to actually win!
When I was earning to give I was in a position similar to someone who had won a donor lottery, in that I had enough money to allocate that it would be worth putting in substantial time deciding what to do with it. But in practice I didn't end up putting in that much time, the time I did put in didn't shift my views, and I ended up donating to the same places I expect I would have if I'd been donating much less money.
(I think the tradeoffs were more favorable to donor lotteries in 2016, when they were first proposed, because far fewer people were working full time on how to allocate EA money.)
I think most people should at least have a best guess about where they would donate if they hadn't participated in a lottery, and if it's easy then they should plausibly give a token amount directly (like 10%). This makes "give to lottery" much more comparable to "give directly" since you've still spent the time thinking about it, have your head in the game, and can talk from experience about that decision. (Another simple reason to do this is to confirm that diminishing returns actually aren't a big deal wherever you were planning to donate.)
If someone really means to ask "where should I donate?" and not "where did you donate?" then it seems like you can just answer that question. Giving 10% to the object level makes that even easier, but I feel like I can basically do that even when I don't give directly to the particular charity I'm discussing. (That said, I probably get asked this question much less than you and by a very different distribution of people.)
I particularly like donor lotteries because I think they are one of the cleanest "wins" amongst all EA recommendations for small donors (though not one of the largest wins). I think it's really complicated to assess whether you should defer to EAs that you don't really know, but that there are great arguments that most donors would be better served by using a lottery. So I consider it one of the most legible ways to clearly demonstrate that EA is just better at donating than other communities (rather than merely having complex methodological disagreements).
I think that donor lotteries seem better for the health of the community than having a bunch of small donors who have no option other than deferring to a centralized recommender (whose work they often can't check very thoroughly). It empowers more people to have meaningful influence over how charity is done. "Small donors should just defer to professionals" might be right, but I don't think it's very legibly right, and it relies on a bunch of trust. In practice if you are donating $10k-$100k I think that a lot of that has to be social proof rather than having the time to dig into claims enough to reliably tell that GiveWell knows their stuff. And even if everyone defers to an evaluator, the form of accountability seems important, and I'd prefer evaluators be accountable to 10x fewer donors with 10x more time. Relatedly, I think this makes it easier in practice to start a competitor and market to a smaller number of large donors based on a quality product.
I do think a significant fraction of people (especially in tech or finance) respond positively to the idea of donor lotteries, and having the option value to talk about the object level or lotteries based on the audience seems good. Again, I think this might differ because we are talking to different audiences in a different way. I also think that I'm more interested in weird stuff than you, so I'm maybe particularly willing to lose out on prospective donors who aren't willing to spend 30 minutes thinking about a plausible-looking but weird idea, or who can't handle expected value calculations. And in general I more want EA to be the kind of place that does weird but good stuff.
I don't think that "lots of people are thinking about the allocation of EA funds" has made it that much easier to be a small donor. Which of those people am I supposed to defer to? Also, I qualitatively don't feel like the ratio of (money moved) vs (time spent thinking about allocation) is that much better than it was in 2016. I wouldn't be surprised if both of those have grown ~4x in parallel.
I agree some lottery-winners give to the same kinds of things that they would have given to otherwise, but in the two cases where I have the most information it seems like the outcome was very different. (For several of these grants I think that giving smaller amounts would not have been an option.) It's also worth noting that most small donors give much worse than you, and for many of them it would be great if "spend more time to think" got them up to "defer to GiveWell."
I'd guess that your earning to give situation was similar to a small lottery winner but not a large lottery winner, and if you donated somewhere without significant diminishing returns then I think it might have made sense to go to an even larger scale. (Right now I personally feel like it's pretty plausible that a philanthropist with a $10M budget should do a lottery with a big EA foundation up to a $100M budget, if they'd offer it, though I'd defer to funders and charities about what they think would make for a healthier ecosystem.)
I'm not sure that you're making the wrong call, but I think it's sort of weird/hypocritical to advertise EA by making donation choices that sacrifice altruistic impact in order to seem more normal.
Another effect is that I'd much rather evangelize EA to the kind of people who understand donor lotteries quickly.
"Seem more normal" isn't quite what I'm going for; it's more about there being value in doing things that are easier to explain, or that are clearly valuable even from worldviews different from your own. For example, someone choosing to live with roommates so they're able to work for a non-profit or donate more is weird, but it's not hard to explain and people's reaction is much more likely to be "I wouldn't do that" than "that's not actually good".
I'd feel differently if I thought we were talking about a large amount of altruistic impact. If you think AI safety research or building pandemic shelters is what most needs doing, you shouldn't go do something else just because it's easier to explain to the average person. But I think the gains from lotteries are pretty low, low enough that when you consider the downside of it being more confusing it's not worth it?
Another place this tradeoff comes up is with salary sacrifice: it's more legible to donate money, but asking for a reduced salary has more altruistic impact.
I think that (normal charity) vs (lottery) is a clear improvement for a much wider range of worldviews than (normal charity) vs (defer to GiveWell).
I do agree that "defer to GiveWell" is easier to explain though. Or slightly more precisely: I think it's easier to explain what GiveWell does well enough that someone can understand why you might think it's the best option, but harder to explain what GiveWell does in enough detail that someone can verify for themselves that it's actually better than their alternatives.
It feels like a bit of a Catch-22. To purposefully make donations in a way that you know will lead to less overall altruistic impact in order to increase first order impact also seems to run afoul of similar, albeit maybe less severe, issues.
Disclaimers:
Writing quickly about a complex topic I have many vague ideas about
I won the $500k donor lottery, and it has been a very emotional experience; I think I will be in a better position to evaluate this in ~5 years; I'm extremely biased
Even trying to take into account my strong bias, I'm strongly in favor of donor lotteries; here are my 2 cents:
I confirm that you do get negative reactions to the concept of a "donor lottery" from most people, but you get even more negative reactions when you tell people you're donating most of your income, as I'm sure you know better than me.
I would go with the advice in "why you should give to a donor lottery".
I personally would recommend a 50-50 split (between the lottery and direct donations) for most people, but with high variance. I think it would heavily mitigate your points b) and c), and keep you informed enough about giving opportunities to provide useful advice.
You can still tell people about your direct donations, and I think you have roughly the same influence whether you donate X to an organization or 2X. You can mention the donor lottery only to people that might find it interesting or useful.
In my experience, there are now many funds and grantmakers to choose from! The problem moved one level up but it's still there.
An advantage of moving to the ~$500k scale is that it makes sense for me to ask people that seem knowledgeable and trustworthy to confirm whether a fund/evaluator is actually reliable. It actually turned out that one (of the many) probably wasn't at the time, and I definitely might have donated to it otherwise.
I also think that donating to GiveWell's "All Grants Fund" is probably just better than donating directly to one of the recommended charities, because GiveWell only funds particularly effective programs, and many top charities run many programs in many countries with various levels of cost-effectiveness. But I see many donors not making these kinds of considerations that I think they would make if they won a donor lottery.
Another way to say this: many EA donors, like silly me last year, do not give to GiveWell funds or EA Funds, maybe for reasons similar to your points b) and c).
When I was donating my last $10k last year I was reading things that made me doubt the optimality of GiveWell's recommendation (like this series). At that scale, it would have been hard to justify the time to investigate whether those claims have merit.
Some IMHO more serious downsides of the donor lottery:
Unilateralist's curse, especially for longtermist funding where many opportunities have sign uncertainty
Potentially a less representative distribution compared to a "free market" approach where small funders distribute money according to their values.
e.g. if half of the funders care only about insect suffering and half care only about mental health, it might be better to allocate the funds accordingly, instead of having 100% insects or 100% mental health depending on who wins the lottery.
People might over update on the opinions of a random lottery winner, compared to a grantmaker that was selected meritocratically, or a funder that at least earned the money.
It might be treated by the winner like "monopoly money", without the moral seriousness of considering the thousands of lives it impacts.
Some serious advantages of winning the donor lottery:
It makes sense to verify, with much higher confidence, that the fund/evaluator you would be donating to is actually reliable, and to compare funds with each other. (A popular one probably wasn't, until months or years ago.)
It makes sense to go much deeper into EA and cause prioritization (e.g. how much I should care about animals, WELLBYs vs QALYs vs GiveWell moral weights, longtermism vs short-termism, population ethics in general, and whether to fund research or proven interventions). This helps not just with allocating the donor lottery winnings, but with all future donations and career choices, and helps give much better donation advice to various friends.
I learned basic concepts like "population ethics" and "theory of change" only after winning the lottery. I expect many donors to not be familiar with these basic topics, and I am still learning a lot. Also, how to decide allocation across cause areas seems like a very tricky question, sensitive to ethical questions that there is no consensus on.
Many reliable people, even at large funds, encourage funding diversity, especially for opportunities in the $50k-$1M range, which might be too large for very small funders and too small for very large funders. There are professional grantmakers for these, but they have different opinions on where the marginal $100k should go (which I think is good!).
Despite the perception of abundant funding for EA orgs, many of their leaders still spend a significant amount of time fundraising, which has a large opportunity cost. If you can identify things that would have otherwise been funded by EA funds anyway, you can save them significant amounts of time.
Publishing a grantmaking report after winning a donor lottery allows you to signal-boost projects that you considered especially valuable, providing them with more reach even if funding-wise you would already cover their funding gap.
It could accelerate the careers of potentially promising grantmakers; the correlation between grantmaking skill and grantmaking budget seems far from perfect. You could use the lottery funds to provide valuable experience to someone that one day might work for e.g. Schmidt Futures.
It would be easier to find opportunities for leverage: you influence donations less when losing but potentially more when winning. I did end up talking to way more people about EA (with mixed results).
I think the existing donor lotteries might have been too small for someone at your scale to appreciate the advantages, would you feel the same about a ~$10M donor lottery?
In my experience, explaining donor lotteries is actually pretty engaging to the right sort of person. And I expect that sort of person to be a good fit for EA.
You are/were an extremely prominent earning-to-giver (you and Julia have blogs; you guys have been profiled in news articles for your donations a number of times). I suspect the vast majority of EA donors are not in a position where they inspire more donations-by-casual-donors than the size of their own donations.
The comparison here isn't inspiration vs own donations, but inspiration vs marginal improvement due to better research, which is a much lower bar.
Yeah this is a good argument. I suspect they're in the same order-of-magnitude but I agree the argument is nontrivial.
In practice most donations end up funging against Open Phil or FTX's last dollar I guess, so a lot of this hinges on whether/how much you expect to do better than that bar.
Interestingly I recently tried this (here). It led to some money moved but less than I hoped. I would encourage others to do the same, but also to have low expectations that anyone will listen to them or care or donate differently; I don't expect the community is that agile/responsive to suggestions.
In fact the whole experience made me much more likely to enter a donor lottery: I now have a long list of places I expect should be funded and nowhere near enough funds to give, so I might as well enter the lottery and see if that helps solve the problem.
I think this post misses something: the impact of a donation is typically a nonlinear function of donation size.
Specifically, a donation of (say) $200,000 typically achieves more than 100x more impact than a donation of $2,000.
Why?
Because with $200,000, a charity can take meaningful actions now, whereas with $2,000 the funds will likely sit in the charity's bank account until the charity has built up enough funds from other sources to use the money meaningfully.
And money has a time value.
Details/caveats:
Note that this depends on the "unit of action" for the organisation. E.g. for AMF, it may be that $2,000 is enough to do another distribution of bednets (I haven't checked this). In which case the value of donations probably does scale linearly (at least when comparing a $2,000 donation and a $200,000 donation). But the non-linearity point would be valid if we were comparing $20 with $2,000 (i.e. $2,000 would be more than 100x as valuable as $20).
However for lots of organisations the "unit of action" is the amount of money needed to fund another person's salary, in which case the non-linearity point would apply in the example I gave ($2,000 vs $200,000).
Obviously all of this is assuming favourable Room For More Funding conditions, i.e. that we have not yet hit diminishing marginal returns on donations.
Also, the point about "money has a time value" hides lots of detail which I haven't gone into here, but can do if someone requests it.
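To make the threshold effect concrete, here is a minimal sketch with made-up numbers; the $50k "unit of action" and the impact units are illustrative assumptions, and the time-value point is ignored:

```python
# Toy model of nonlinear impact: a charity can only act in whole "units of action"
# (e.g. another staff member's salary). All numbers are made up for illustration.

UNIT_COST = 50_000    # assumed cost of one unit of action ($)
IMPACT_PER_UNIT = 1   # arbitrary impact units

def impact(donation: int) -> int:
    # Money below a full unit sits in the bank (time value ignored in this sketch).
    return (donation // UNIT_COST) * IMPACT_PER_UNIT

print(impact(2_000))    # 0 -- too small to trigger any action on its own
print(impact(200_000))  # 4 -- vs 0 above: far more than 100x the impact
```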
[EDIT: As Linch pointed out, not necessarily good reasons to not participate in a donor lottery]
I think there are two additional criteria for participation in a donor lottery [EDIT: assuming you are less comfortable with random/speculative impact]:
1) The average winner would do at least as good a job of selecting where to donate as you would have done if you hadn't participated.
It means their enormous time sink in investigating where to donate would yield donation options at least as good as those from your cursory investigation. That's a relatively low bar, but it's still there.
If you are unusually rational and conscientious with your giving (which includes most in the EA community), then there is a good chance your giving will be more effective than a less conscientious person's, even if they spent substantially more time thinking about where to donate. I anticipate most EAs would be more likely to join a donor lottery with other EAs than a lottery with participants sampled from the general population. For a random person in the population, there is a good chance your "cursory" investigation is as good or better than their "detailed" investigation.
2) You share similar values about what is most important with the others in the donor lottery
Say I joined a donor lottery with three other people. They have different ideas of what is most beneficial to the world. For one, it is building monuments to Cthulhu. For another, it is dyeing cats purple. And for the last one, it is increasing the number of shrubs in their neighborhood. Even if more time allows them to find the best giving opportunities for the things they care about, I see little additional value in a world with a higher concentration of Cthulhu monuments, purple cats, or neighborhood shrubbery.
I think this is incorrect; if a normal lottery ends up being +EV it's still rational to put your money in it, even if you then think the other lottery participants have worse epistemics or values than you (which is normal and unsurprising).
Yeah, I suppose that's true. It's still +EV even if the money disappears into a black hole if you don't win, because if you win, you get to spend more time deciding where to donate.
I guess my hesitation around the lotteries is that I'm uncomfortable with expected value calculations at the extremes (e.g. a one in a million chance to win for a million times more impact), and that I'd experience regret if I didn't think the eventual winner was thoughtful in their giving (even if it is still +EV).
We're not talking about one in a million odds, though? We're talking about 1%.
I mean, I'd still be hesitant at a 1% chance. Let's say I give my donations every year for 40 years to a donor lottery with a 1% chance of winning. Then there's a 0.99^40 ≈ 67% chance I'd never get to choose where the money goes. Personally, I'm not sure I could handle that. I'd be more comfortable with 10% odds.
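As a quick check of that arithmetic (the 40-year horizon and the 1% vs 10% odds are the numbers above):

```python
# Chance of never winning after entering a donor lottery every year for `years` years.
def p_never_win(win_prob: float, years: int) -> float:
    return (1 - win_prob) ** years

print(round(p_never_win(0.01, 40), 3))  # 0.669 -- ~67% chance of never choosing at 1% odds
print(round(p_never_win(0.10, 40), 3))  # 0.015 -- ~1.5% chance at 10% odds
```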
Seems similar to startups as an earning to give approach. It makes more sense for 10 people to attempt startups than to pursue high-paying careers because the expected value is larger. But it really sucks for the people that never succeed, and many probably give up earning to give and leave the movement altogether. It's a high price to pay in terms of community and mental health for greater expected aggregate impact.
Ideally there would be some way to involve lottery losers in the win: at least acknowledging them, or having them give feedback on a draft giving proposal. That would help counteract the wearing feeling of pitching money into the void, without negating too much of the benefit of having one person do all the heavy work of deciding where to give.
There are also extra transaction costs in transferring money through an extra step, although that argument can also be made of regrantors like EA Funds and probably isn't a big enough deal to change what you should decide.
I think you point to relevant tradeoffs here. I myself am currently testing a different scheme to determine my donation: let the public (or anyone interested in participating) decide collectively where to donate. I believe this might...
improve the decision quality due to crowd-sourcing information and reducing any bias (moral or otherwise) that I might have
encourage others to think about various causes and potentially make them donate more as well
The downside is probably that the total effort spent on this decision is larger than if I made it on my own...
My polls look like this: sometimes with a fixed total amount, sometimes with a total that increases with participation (to give an incentive). Typically around 100 people participate in them.
What do you think of this?