Peter Wildeford

Along with my co-founder, Marcus A. Davis, I run Rethink Priorities. I’m also a Grant Manager for the Effective Altruism Infrastructure Fund and a top forecaster on Metaculus. Previously, I was a professional data scientist.
Oh ok, thanks! Sorry for my confusion.
I’m confused—elsewhere you identify yourself as the author of this post but here you are commenting as if you have independently reviewed it?
To be clear, I don’t think people have turned against earning to give as a concept, as in they think it’s no longer good or something.
But I do think people have turned against “donating $5K a year to GiveWell[1] is sufficient to feel like I’m an EA in good standing, that I’m impactful, and that I can feel good about myself and what I’m doing for the world” as a concept. And this seems pretty sad to me.
Moreover, over the past five years there has been a lot of pressure within EA to push people onto concrete “direct good” career paths, especially at the (elite) university level. This is likely a good thing, but I think the downside is that people feel like failures if they don’t succeed along this path, and that isn’t the emotion I would recommend.
[1] Feel free to substitute in Animal Charity Evaluators, non-profits working on existential risk, Rethink Priorities, etc., as “GiveWell” specifically is not the important part of my point.
[2] The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents “analytic altruism”, which uses an algorithm to determine “the most statistically optimal ways” of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.
It’s been lost a bit in all the noise, but I think people should still be very excited and satisfied about “earning to give” and donating.
Anyone who can donate $5,000 via GiveWell can save a life.
Possibly you can do even better than that per dollar if you’re okay accepting some premises around nonhuman animal welfare / sentience or risks from emerging technologies.
I think this is all very cool.
Moreover, while a $5,000 donation is a big commitment, it is also achievable by a rather wide array of people. Many people are wealthy enough to make this donation but do not, or donate far less effectively. Simply by allocating better, you could have the same philanthropic power as a multi-millionaire philanthropist.
If you earn a median income of ~$40K USD/yr[1] and spend $32,400[2], that leaves you $7,600 to donate each year, which could potentially save three lives every two years.
[1] As a single American.
[2] Spend $6K/yr on taxes. Then spend $1K/mo on rent and $100/mo on utilities, which is doable in most metropolitan areas in a small apartment or with roommates. Then maybe spend $300/mo on groceries and $300/mo on other things. Then save 15% of your income ($6K/yr), which is pretty standard financial advice.
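The budget arithmetic in the footnotes above can be sanity-checked with a quick sketch. All figures are the ones given in the comment and its footnotes; nothing new is assumed:

```python
# Sanity-check the example budget from the footnotes above.
income = 40_000  # ~median US single-earner income, USD/yr

annual_expenses = {
    "taxes": 6_000,
    "rent": 1_000 * 12,
    "utilities": 100 * 12,
    "groceries": 300 * 12,
    "other": 300 * 12,
    "savings (15% of income)": 0.15 * income,
}

total_spent = sum(annual_expenses.values())
left_to_donate = income - total_spent
lives_per_year = left_to_donate / 5_000  # ~$5,000 per life via GiveWell

print(total_spent)     # 32400.0
print(left_to_donate)  # 7600.0
print(lives_per_year)  # 1.52, i.e. ~3 lives every 2 years
```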
Bold move launching this apparently quite serious new process on April Fools Day.
Rather than give a price tag for each (as there’s many), maybe you or other donors could flag the ones you’re most interested in and I could let you know? (Also this may be helpful for our internal prioritization even if there weren’t any money on the line.)
Hey, thanks for the feedback. I do think reasonable people can disagree about this policy and it entails an important trade-off.
To understand a bit about the other side of the trade-off, I would ask that you consider that we at RP are roughly an order of magnitude bigger than Lightcone Infrastructure and we need policies to be able to work well with ~70 people that I agree make no sense with ~7 (relatively independent, relatively senior) people.
Is it possible to elaborate at all on why they’d be particularly good fits for individual donors? I imagine in many cases the answer is a bit sensitive, for example when OP would prefer to fund an org more but the org itself would rather be funded by individual donors than take more OP funding. And I certainly can use my own private information to make some of those guesses. But reading this list, it’s actually pretty hard to tell what is going on.
Thank the heavens my prayers have been answered! My birthday wish came true! I’ve been waiting so long for this moment and it’s finally here! This is the true meaning of the holiday season!
How will you prioritise amongst the projects listed here with unrestricted funds from small donors? Most of these projects I find very exciting, but some more than others. Do you have a rough priority ordering or a sense of what you would do in different scenarios, like if you ended up with unrestricted funding of $0.25m/$0.5m/$1m/$2m/$4m etc how you would split it between the projects you list?
I think views on this will vary somewhat within RP leadership. Here I am also reporting my own somewhat quick independent impressions which could update upon further discussing with teams and the rest of leadership. We’re planning to do more to finalize plans next month so it’s still a bit up in the air right now. Candidly, our priorities also depend on how our end of year fundraising campaign lands.
Here’s roughly what I’m thinking right now:
Highest priority / first $250k (and I am not listing these by order of priority within each tier):
~$35k for publishing our backlog of completed work.
~$100k towards more cause prioritization work
~$30k towards running the next EA Survey (hoping to have that matched by institutional funders)
~$15k for writing up our learnings from spending a year attempting longtermist incubation (though we might try to 80/20 this soon for less).
~$38k for establishing our farmed animals impact tracker
~$60k towards piloting our value of research model
Next highest priority / second $250k:
~$15k for publishing our backlog of work
~$75k towards more cause prioritization work/ worldview investigation work
~$30k towards piloting our value of research model
~$40k towards more rigorously understanding branding for AI risk outreach
~$45k towards more rigorously understanding EA’s growth trajectory
Third highest priority / next $500k:
~$30k towards publishing our backlog of research
~$75k towards more cause prioritization work/ worldview investigation work
~$50k towards piloting our value of research model
~$50k towards understanding why people drop out of EA
~$75k towards the theory of change status report for farmed animals
~$40k towards the farmed insect ask.
~$40k towards more rigorously understanding branding for EA and existential risk
~$30k towards understanding EA’s growth trajectory
~$60k towards QALYs/DALYs for animals
I am going to stop there, as that first million is probably the most decision-relevant, and right now that seems most likely to be where we will land with our funds raised.
With all this said, we also take donor preferences very seriously, including by taking restricted gifts. If there are particular options that are most exciting to you, then we might be able to further prioritize those if you or others were to offer support and indicate that preference. Relatedly, I am curious whether you are willing to share which projects you are most excited about, or conversely which are least appealing?
And thank you too for all that you do to improve the world, and for these well-thought-out questions!! :)
Can you assure me that Rethink’s researchers are independent?
Yes. Rethink Priorities is devoted to editorial independence. None of our contracts with our clients include editorial veto power (except that we obviously cannot publish confidential information) and we wouldn’t accept these sorts of contracts.
I think concerns about our reliance on individual large clients are important, but overstated. Our single largest client made up ~47% of our income in 2023, and we’re on track for this to land somewhere between 40% and 60% in 2024. This means that in the unlikely event we were to lose this large client completely, it would certainly suck, but some version of RP would still survive. Our leadership is committed to taking this path if necessary.
This is also why we are proactively working hard to diversify our client base and why we are soliciting individual funding. Funding from you and others would help us further safeguard our editorial independence. We’re very lucky and grateful for our existing individual donors that collectively donate a significant portion of our expenses and preserve our independence.
Luckily, I am not aware of any instance where we wanted to publish work that any of our institutional clients did not want us to publish. We try to work with clients that value our independence, and we aim to promote truth-seeking even when it goes against a client’s initial views. We do not work with clients who only want us to confirm their pre-existing beliefs; in fact, we have been asked to do something like this (though not in those exact words) many times, and have always turned down the work.
There have sometimes been disagreements between individual staff and RP management about what to publish that did not involve anything about upsetting our clients, and in these cases we’ve always allowed the staff to still publish their work in their personal capacity. Rethink Priorities does not censor staff or prevent them from publishing.
If you have reason to believe that Rethink Priorities researchers are not independent, please let me know so I can correct any issues and reassure relevant staff.
Rethink’s cost per published research report that is not majority funded by “institutional support”
Our AI work, XST work, and our GHD work were entirely funded by institutions. Our animal welfare work was mostly funded by institutions. However, our survey work and WIT work were >90% covered by individual donors, so let’s zoom in on that.
The survey department and WIT department produced 31 outputs in 2023 against a spending of $2,055,048.78 across both departments including all relevant management and operations. This is $66,291.90 per report.
Notably we’ve separately estimated that some existing reports could be published publicly for a marginal ~$1K to $10K each, so there’s definitely an opportunity for leverage there for a private donor that wants to see more of our results become public. A lot of our work becomes public on the EA Forum thanks to support from individual donors, as our institutional clients typically don’t allocate budget for that.
Rethink’s cost per published research report (again, total org cost, not the amount spent on a specific project, divided by the number of published reports, where a research-heavy EA Forum post of typical Rethink quality counts as a published report).
Collectively, RP’s AI work, global health and development work, animal work, worldview investigations work, and survey work led to 90 reports in 2023.

The budget for these departments was $7,838,001.20, including all relevant management and operations.
This results in $87,088.90 per report.
This excludes the XST and special projects departments because their core outputs are not intended to be research papers.
To be clear, many of our outputs weren’t public, but I tried to normalize the count to roughly what a substantial EA Forum report would be.
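The per-report figures in this and the earlier reply are simple divisions of departmental spend by output count. Reproducing them, with all numbers taken directly from the replies:

```python
# Reproduce the cost-per-report figures quoted above as plain division.

# Survey + WIT departments (>90% covered by individual donors), 2023:
survey_wit_spend = 2_055_048.78
survey_wit_reports = 31
print(round(survey_wit_spend / survey_wit_reports, 2))  # 66291.9

# All research departments combined, 2023:
research_spend = 7_838_001.20
research_reports = 90
print(round(research_spend / research_reports, 2))      # 87088.9
```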
If I may add some editorialization, I’d note that I’m sure we or other organizations could produce a report more cheaply, but we’ve found that investing a lot more per report in doing presentations, outreach, networking, etc. has magnified the impact per report. I worry that looking at outputs per dollar is more of a vanity metric and we actually need to do more to quantify impact per report.
Hi Sam,
Thanks for the detailed engagement! I am going to respond to each with a separate reply.
Rethink’s cost per researcher year on average, (i.e. the total org cost divided by total researcher years, not salary).
I think the best way to look at this is marginal cost. A new researcher hire costs us ~$87K USD in salary (this is a median; there is of course variation by title level) and ~$28K in other costs (e.g., taxes, employment fees, benefits, equipment, employee travel). We then need to spend ~$31K in marginal operations and ~$28K in marginal management to support a new researcher. So the total cost for one new FTE-year of research ends up being ~$174K.
You could take total org cost (~$11.4M in 2023 for RP core) and divide by total researcher years (~47 FTE in 2023 for RP core research) to get ~$242K per researcher-year, but I don’t think you should do that, because it ignores the fact that we spend a fairly significant amount of operations capacity on things other than research, such as supporting work via our Special Projects program, running multiple coordination events (e.g., an animal strategy forum), and doing other organizational incubation work. We also use money to support contractors not counted in the FTE figure I gave.
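The two costings above reduce to a short sum and a division. A quick check, using only the figures given in this reply:

```python
# Marginal cost of one new researcher FTE-year, per the figures above.
salary = 87_000           # median salary for a new researcher hire
direct_overhead = 28_000  # taxes, employment fees, benefits, equipment, travel
operations = 31_000       # marginal operations spend to support the hire
management = 28_000       # marginal management spend to support the hire

marginal_cost = salary + direct_overhead + operations + management
print(marginal_cost)  # 174000

# Naive average: total org cost over researcher FTE-years. This overstates
# the cost of research, since operations also supports non-research programs.
average_cost = 11_400_000 / 47
print(round(average_cost))  # 242553, i.e. ~$242K
```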
Hey Yonatan, glad to see you doing this! Just wanted to drop a quick note saying that we’d really appreciate your support at Rethink Priorities. We wrote a post outlining our funding needs, and I’d be happy to answer any questions you have.
I’m very excited to see this. To be honest when I first heard of the “evaluate the evaluators” project I was very skeptical and thought it would just be a rubber stamp on the EA ecosystem in a way that would play well for social media and attract donations.
I definitely was wrong!
It’s good to see that there actually was substantive meta-evaluation here and that the GWWC meta-evaluators did not pull punches!
I agree with this, and I’d also be curious to hear more details about where GWWC’s current funding comes from, to help evaluate the extent to which GWWC is impartial (though to be clear, I do think GWWC is impartial).
I’m really happy to see the “Add 10% to support our work” button and I check this every time it comes up!
Thanks for your comment and questions!
RP is still involved in work on AI and existential risk. This work now takes place internally at RP on our Worldview Investigations Team and externally via our special projects program.
Across the special projects program in particular, we are supporting over 50 total staff working on various AI-related projects! RP is still very involved with these groups, from fundraising to comms to strategic support, and I personally dedicate almost all of my time to AI-related initiatives.
As part of this strategy, our team members who were formerly working in our “Existential Security Team” and our “AI Governance and Strategy” department have spun out to do their work at a new organization which is better positioned to have the impact that RP wants to support.
We don’t allocate RP unrestricted funds to special projects, so if you want to donate to them, you would have to restrict your donation to them. RP unrestricted funds can be used to support our Worldview Investigations Team. Feel free to reach out to me or to Henri Thunberg (henri@rethinkpriorities.org) if you want to learn more.