Where are you donating this year, and why? (Open thread)
Share where you donated or plan to donate in 2023 and why!
See also: How would your project use extra funding?
I encourage you to share regardless of how small or large a donation you’re making, and you shouldn’t feel obliged to share the amount that you’re donating.
You can share as much or as little detail as you want (anything from 1 sentence simply describing where you’re giving, to multiple pages explaining your decision process and key considerations). You can also clarify whether you’re interested in feedback or follow-up questions or not.
And if you have thoughts or feedback on someone else’s donation plans, I’d encourage you to share that in a reply to their “answer”, unless the person indicated they don’t want that. (But remember to be respectful and kind while doing this! See also supportive scepticism.)
Why commenting on this post might be useful:
You might get useful feedback on your donation plan
Readers might form better donation plans by learning about donation options you’re considering, seeing your reasoning, etc.
Commenting or reading might help you/other people become or stay inspired to give (and to give effectively)
Related:
It’s Giving Season, and “Donation Debate Week” is starting today! We’ll be sharing a separate post about that soon.
Previous posts of this kind:
Aim to finish up 2023 having donated:
16k to AI safety / long term future
9k to animal suffering
4k to global health / well-being
This year I wanted to shift more toward existential risk, whereas last year was mostly global health and well-being, given the increased concerns and seemingly accelerating timelines.
Also put money into EA community side-projects:
10k for accountability coaching app (pro-bono for EAs)
3k for EA kids storybook project
Happy with the year. I also took the Giving What We Can pledge; better late than never!
ps—thanks for all the folks working hard in non-profits, fighting the good fight...
Thanks for helping organise the donation events, Lizka!
In line with my comment last year, I made 97 % of this year's donations a few months ago to the Long-Term Future Fund (LTFF). However, I am now significantly less confident that existential risk mitigation is the best way to improve the world:
David Thorstad’s posts, namely the ones on mistakes in the moral mathematics of existential risk, epistemics, and exaggerating the risks, increased my general level of scepticism towards deferring to thought leaders in effective altruism before having engaged deeply with the arguments. It is not so much that I encountered knock-down arguments against existential risk mitigation, but rather that I became more willing to investigate the claims being made.
I noticed my tail risk estimates tend to go down as I investigate a topic. In the context of:
Climate risk, I was deferring to a mix of 80,000 Hours’ upper bound of 0.01 % existential risk in the next 100 years, Toby Ord’s best guess of 0.1 %, and John Halstead’s best guess of 0.001 %. However, I looked a little more into John’s report, and think it makes sense to put more weight on his estimate.
Nuclear risk, I was previously mostly deferring to Luisa’s (great!) investigation for the effects on mortality, and to Toby Ord’s 0.1 % existential risk in the next 100 years. However, I did an analysis suggesting both are quite pessimistic:
“My estimate of 12.9 M expected famine deaths due to the climatic effects of nuclear war before 2050 is 2.05 % the 630 M implied by Luisa Rodriguez’s results for nuclear exchanges between the United States and Russia, so I would say they are significantly pessimistic[3]”.
“Mitigating starvation after a population loss of 50 % does not seem that different from saving a life now, and I estimate a probability of 3.29*10^-6 of such a loss due to the climatic effects of nuclear war before 2050[58]”.
AI risk, I noted I am not confident superintelligent AI disempowering humanity would necessarily be bad, and wonder whether the vast majority of technological progress will happen in the longterm future.
AI and bio risk, I suspect the risk of a terrorist attack causing human extinction is exaggerated.
I said 97 % above rather than 100 % because I have just made a small donation to the EA Forum Donation Fund[1], distributing my votes fairly similarly across the LTFF, Animal Welfare Fund, and Rethink Priorities. LTFF may still be my top option, so I might have put all votes on LTFF (related dialogue). On the other hand:
I was more inclined to support Rethink’s (great!) work on the CURVE sequence (whose 1st post went out about 1 month after I made my big donation for the year). I think it is stimulating some great discussion on cause prioritisation, and might (I hope!) eventually influence Open Phil’s allocation.
I agree animal welfare should be receiving more resources, and wanted to signal my support. Also, even though I am all in for fanaticism in principle (not in practice), I just feel like it is nice to donate to something that reduces suffering in a more certain way now and then!
Side note: no donation icon showed up after my donation. Not sure whether one is supposed to. Update: you have to DM @EA Forum Team.
I generally choose on a month by month basis. I don’t claim this is effective.
Things that I guess might be the most effective in the coming months: EA Funds, Rethink, and animal welfare stuff.
(I find it uncomfy to talk about justifications for this but whatever. Feel free to correct errors here)
EA Funds—I remember reading @Linch’s case that money there was really marginal when he wrote it. I think they’ve had more money since, but it still might be very effective.
Rethink—I know and trust Peter, and I often see work I find interesting coming out of Rethink (e.g. the surveys and animal weights). If their animal weights are even 1% likely to be right, that implies a big shift in how we should spread impact.
Animal welfare stuff—seems underrated compared to human-focused causes.
Why not AI?
Well, my P(doom) is 2-8%, which is comparatively lower than many people’s. I am also unsure that marginal money there is spent well; I hear about and see non-trivial levels of grift. I have given to AI pause stuff before and might again, but currently I feel AI funding is roughly where I want it on many of the axes I could donate along.
What else would excite me?
Sustainable immigration stuff. I think if that could happen, there would be a lot of good outcomes pretty cheaply. Note I don’t mean more immigration; I mean more sustainable immigration. How can we increase levels of immigration without increasing backlash? (In the UK there has been a huge increase in concern about immigration, mainly caused by the government.)
Top-line, gave ~25% of my income—primarily to Global Health and Climate causes. This year I focused on a smaller # of organizations at higher levels than in 2022, based on feedback on last year’s thread.
26k to GiveWell Top Charities Fund; an additional 11k to the Against Malaria Foundation
35k to climate organizations - (EA-ish): Silverlining, Clean Air Task Force; (non-EA—focused on a US state-level organization, data organization, and industry-focused organizations): Fresh Energy, Carbonplan, IREC, InnerSpace
Balance to Nuclear Threat Initiative, University of Washington’s Virology & Epidemiology Funds
Happy holidays!
This year I gave 13% of my income (+ some carryover from last year, which I had postponed) to EA charities. Of this, I gave about half to global health and development (mostly to GiveWell Top Charities, some to Give Directly) and the other half to animal welfare (mostly to the EA Funds Animal Welfare Fund, some to The Humane League). I also gave $1,250 to various political candidates I felt were EA-aligned. In prior years I’ve given overwhelmingly to global health and development and I still think that’s very important: it’s what initially drew me to EA and what I’m most confident is good. But last year I was convinced I had underinvested in animal welfare historically and I’m starting to make up for that.
I strongly prefer near-term causes with my personal donations, partly because my career focuses on speculative long-term impact. I’m bothered by the strong possibility that my career efforts will benefit nobody, and want to ensure I do at least some good along the way. I also think that in recent years, the wealthiest and most prominent EAs have invested more money into longterm causes than we can be confident is helpful, in ways that have sometimes backfired, damaged or dominated EA’s reputation, promoted groupthink in pursuit of jobs/community, and ultimately saddened or embarrassed me. Relatedly, I think managing public perceptions of EA is inescapably important work if we want to effectively improve government policies in democratic countries. So even on longtermist grounds, I think it’s important for self-described EAs at the grassroots level to keep proven, RCT-backed, highly effective charities with intuitive mass appeal on the funding menu (perhaps especially if we personally work on longtermism and want people to trust our motives).
Within neartermism, I like to split my donations across a single-digit number of the most impactful funds or charities. This is because I do not have a strong, confident belief that any one of them is most effective, want to maximize my chance of doing a large amount of good overall, and see hedging my bets as a mark of intellectual humility. I don’t mind if this makes my altruism less effective than that of the very best EAs, because I’m confident it’s better than that of 99% of people. Likewise, I think the path to effective giving at a societal scale depends much more on outreach to the bottom 90% or so of givers, who give barely any quantitative thought to relative impact, than it does on redirecting donations from those already in the movement.
I’m giving to GiveDirectly again, as I have every year since learning about them. I think they’re undervalued in the EA community in general, because we don’t yet have a way to give enough weight to subjective wellbeing, the value of self-determination, or justice. I think it is good that extremely poor people would have the opportunity to choose how to improve their own lives—rather than those types of decisions being made for them, however rigorously.
You say “don’t yet”...are you aware of anyone working on a project to incorporate deontology or other non-utilitarian factors in cause prioritization?
I reckon my donations this year will amount to about:
$3.7K to animal welfare, via Effektiv Spenden.
$1.7K to global health and development, via Effektiv Spenden.
$1.1K to the Donation Election Fund.
And my labour to mitigating risks from AI. In a way, this amounts to way more than the above, given that I would be earning 2x+ what I am earning now if I were doing what I did before, i.e., software engineering.
I recently reconfigured my giving to be about 85% animal welfare and 15% global health, for reasons similar to those spelled out in this post (I think, though I only skimmed that post and came to my decision independently).
I haven’t yet decided, but it’s likely that a majority of my donations will go to this year’s donor lottery. I’m fairly convinced by the arguments in favour of donor lotteries [1, 2], and would encourage others to consider them if they’re unsure where to give.
Having said that, lotteries produce fewer fuzzies than donating directly, so I may separately give to some effective charities which I’m personally excited about.
I’m donating 10% this year, probably all towards nonhuman animal welfare via the ACE Recommended Charity Fund.
Animal issues seem much more neglected than global health & poverty.
X-risk seems much less funding-constrained than animal stuff.
If there were an obvious way to support longtermist animal stuff, I’d probably allocate something towards that. In particular, I think someone should be lobbying AI companies to take animal welfare more seriously and to get their models to not tacitly support factory farming. I also think digital sentience seems important and neglected, but I basically trust OpenPhil to do a good job funding that type of research.
I’m still on the fence about going all in with longtermist cause areas. Therefore, I usually divide my donations between animal welfare (Animal Charity Evaluators Recommended Charity Fund), the GWWC Top Charities Fund, and the Long-Term Future Fund (mainly pandemic risk and AI safety).
Although not directly relevant to the question, a really useful tip is to donate via friends whose employers (such as Google and Microsoft) match employee contributions.
There was an earlier post from lots of people at CEA, including me: Here’s where CEA staff are donating in 2023
Quick summary of my section: I donated to the Donation Election Fund for the reasons described here, to someone’s political campaign[1], and in some cases I didn’t take compensation I was supposed to get from organizations I’d happily donate to.
I feel weird donating to political campaigns (I grew up ~avoiding politics and still have a lot of the same beliefs and intuitions). But I talked to some people I know about the value of this campaign and tried to estimate the cost-effectiveness of the donation (my conclusion was that it was very close to donating to the LTFF, even when I was ignoring impact that might come from animal welfare improvements, which is important to me), and was compelled by the consideration that I had an unusual ability to donate to the campaign as a US citizen. (I’m interested in hearing people’s thoughts about this, but will probably not actively participate in public discussions about the decision.)
Just to comment on your footnote: my intuition is that political spending can be very effective and it is an important component of my family’s donations. For anyone interested in this I really recommend Ezra Klein’s interview with Amanda Litman from Run for Something.
She speaks compellingly about how most political donations, especially on the left, are reactionary and not necessarily effective, but about how in certain races and particularly state and local races, tiny sums of money can really make a huge difference. I don’t think she explicitly uses an ITN framework but it definitely fits, and their work is in what has in recent history been a very neglected space IMO.
Is it okay if I post here? I’m not an EA but am curious about the movement.
To answer OP’s question: my giving this year has focused on animal rights and welfare—local shelters, pro-vegan organizations, pet and wildlife rehabilitation. I’ve also given direct aid to people experiencing financial crisis in my social circle, which isn’t “charity” but is part of my personal mission of care.
If my financial situation ever does improve, I’d love to give more and also fund anti-AI / low-tech / degrowth initiatives.
Hi Hayven- yes, you’re very welcome to post here.
Thanks for caring about animals and the people around you!
If you’re interested in helping the most animals you can with some of your donations, you may be interested in this recent post from the EA animal welfare fund. Giving What We Can recently evaluated them as a top rated fund for animal welfare, so they are likely to be one of the absolute best places you could donate to help animals.
Thank you so much, Toby! I’ll read the post today and see what I think.
It’s honestly kind of refreshing to see that concern for (non-human) animals is so widespread in the EA movement just because it most certainly isn’t in wider society. I have a lot of hesitation about aligning myself with any ideologies, but there’s something really refreshing about EA’s care for animals as more than just means to human ends.
I will probably end up on the receiving end of donations (in some form or other :/ ) rather than giving myself (uh oh, I had only around $4,000 of disposable income this year).
But if I had the funds, I would donate to Sea Shepherd; I like their proactive approach to animal suffering (in this case, marine wildlife). Who knows, I might join them full-time one day, even though my career is the complete opposite of being out there in the open sea.
Another one that comes to mind is the International Network of Street Papers and its member papers, which vary by location. The business model is fairly simple, and the investment per person is fairly low given that the return is so much more. The local paper here has been running for the past 10 years, and its political involvement has helped a lot of people, directly and indirectly.
Another aspect is the direct help that marginalized communities get from these street papers. At least here, the paper directly helps with the rehabilitation and resocialization of its sellers (usually homeless, very low income, or disabled people), and you can see it on their faces and feel it in society on a wider scale.
I’m waiting until 2024 or later to decide where to give, as GWWC plans on investigating several new charities, as well as reviewing current ones, so my giving will be more accurate; charity evaluators are also generally becoming more accurate. I also plan on donating much later, and investing the money now, to increase the amount of good done.