Naively, $1.6B / $5k ≈ 320k deaths averted[1]? Adjust down because some spending is less effective than AMF; adjust up because of AMF cost-per-life inflation.
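A minimal sketch of the Fermi estimate above (both inputs are rough assumptions from the surrounding discussion, not audited figures):

```python
# Naive Fermi estimate: lives saved if the whole pot went to
# AMF-equivalent programs at ~$5,000 per death averted.
total_funding = 1.6e9   # $1.6B, assumed
cost_per_life = 5_000   # ~$5k per death averted (AMF ballpark)

naive_lives_saved = total_funding / cost_per_life
print(f"{naive_lives_saved:,.0f}")  # 320,000
```

The point of writing it out is just that the headline number is linear in both inputs, so the up/down adjustments mentioned above scale it directly.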
- ^ (Or equivalent)
It seems like it would be particularly difficult to know ahead of time whether one is well-suited to founding a charity, and I can imagine that is a major barrier to application. Do you have any suggestions for assessment of fit?
The biggest factor is the arrival of FTX, which has given more to infrastructure YTD than all other funders combined over the prior two years.
Relevant excerpt from his prior 80k interview:
Rob Wiblin: …How have you ended up five or 10 times happier? It sounds like a large multiple.
Will MacAskill: One part of it is being still positive, but somewhat close to zero back then...There’s the classics, like learning to sleep well and meditate and get the right medication and exercise. There’s also been an awful lot of just understanding your own mind and having good responses. For me, the thing that often happens is I start to beat myself up for not being productive enough or not being smart enough or just otherwise failing or something. And having a trigger action plan where, when that starts happening, I’m like, “OK, suddenly the top priority on my to-do list again is looking after my mental health.” Often that just means taking some time off, working out, meditating, and perhaps also journaling as well to recognize that I’m being a little bit crazy.
Yes, sorry, on reflection that seems totally reasonable
Yeah, it looked like grants had been announced roughly through June, so the methodology here was to divide by the proportion of grants dated Jan–Jun in prior years (0.49).
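The annualization step described above can be sketched as follows; the 0.49 share comes from the comment, while the YTD dollar figure is a hypothetical placeholder:

```python
# Project full-year grant totals from year-to-date figures by
# scaling up by the share of grants historically dated Jan-Jun.
jan_jun_share = 0.49   # proportion of prior years' grants dated Jan-Jun (from comment)
ytd_grants = 100e6     # hypothetical: $100M announced through June

projected_full_year = ytd_grants / jan_jun_share
print(f"${projected_full_year / 1e6:.0f}M")  # $204M
```

Dividing by the historical Jan–Jun share rather than simply doubling corrects for the fact that grant announcements are not spread evenly across the year.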
I’m not sure that inflation makes sense—this money isn’t being spent on bread :) I think most of these funds would alternatively be invested, and returning above inflation on average.
2012–present (the first longtermist grant was in 2015); no projection.
Estimates for Open Phil:
> FTX has so far granted 10x more to AI stuff than OPP
This is not true, sorry the Open Phil database labels are a bit misleading.
It appears that there is a nested structure to a couple of the Focus Areas, where e.g. ‘Potential Risks from Advanced AI’ is a subset of ‘Longtermism’, and when downloading the database only one tag is included. So for example, this one grant alone from March ’22 was over $13M, with both tags applied, and shows up in the .csv as only ‘Longtermism’. Edit: this is now flagged more prominently in the spreadsheet.
Many of the sources used here can’t be automated, but the spreadsheet is simple to update
Fixed
EA does seem a bit overrepresented (sort of acknowledged here).
Possible reasons: (a) sharing was encouraged post-survey, with some forewarning; (b) EAs might be more likely than average to respond to a ‘Student Values Survey’?
I strongly agree with this comment, especially the last bit.
In line with the first two paragraphs, I think the primary constraint is plausibly founders [of orgs and mega-projects], rather than generically ‘switching to direct work’.
Re: the footnote, the only public estimate I’ve seen is $400k–$4M here, so you’re in the same ballpark.
Personally I think $3M/y is too high, though I too would like to see more opinions and discussion on this topic.
I enjoyed this post and the novel framing, but I’m confused as to why you seem to want to lock in your current set of values—why is current you morally superior to future you?
> Do I want my values changed to be more aligned with what’s good for the world? This is a hard philosophical question, but my tentative answer is: not inherently – only to the extent that it lets me do better according to my current values.
Speaking for myself personally, my values have changed quite a bit in the past ten years (by choice). Ten-years-ago-me would likely be doing something much different right now, but that’s not a trade that the current version of myself would want to make. In other words, it seems like in the case where you opt for ‘impactful toil’, that label no longer applies (it is more like ‘fun work’ per your updated set of values).
Some of the comments here are suggesting that there is in fact tension between promoting donations and direct work. The implication seems to be that while donations are highly effective in absolute terms, we should intentionally downplay this fact for fear that too many people might ‘settle’ for earning to give.
Personally, I would much rather employ honest messaging and allow people to assess the tradeoffs for their individual situation. I also think it’s important to bear in mind that downplaying cuts both ways—as Michael points out, the meme that direct work is overwhelmingly effective has done harm.
There may be some who ‘settle’ for earning to give when direct work could have been more impactful, and there may be some who take away that donations are trivial and do neither. Obviously I would expect the former to be hugely overrepresented on the EA Forum.
A lot of this wouldn’t show up in malaria figures; e.g., last year 39% of GiveWell funds directed went to malaria programs. But yeah, I’d still be interested to see data.