AMA: Peter Wildeford (Co-CEO at Rethink Priorities)
Hi everyone! I’ll be doing an Ask Me Anything (AMA) here. Feel free to drop your questions in the comments below. I will aim to answer them by Monday, July 24.
Who am I?
I’m Peter. I co-founded Rethink Priorities (RP) with Marcus A. Davis in 2018. Previously, I worked as a data scientist in industry for five years. I’m an avid forecaster. I’ve been known to tweet here and blog here.
What does Rethink Priorities do?
RP is a research and implementation group that works with foundations and impact-focused non-profits to identify pressing opportunities to make the world better, figure out strategies for working on those problems, and carry out that work.
We focus on:
Wild and farmed animal welfare (including invertebrate welfare)
Global health and development (including climate change)
AI governance and strategy
Existential security and safeguarding a flourishing long-term future
Understanding and supporting communities relevant to the above
What should you ask me?
Anything!
I oversee RP’s work related to existential security, AI, and surveys and data analysis research, but I can answer any question about RP (or anything).
I’m also excited to answer questions about the organization’s future plans and our funding gaps (see here for more information). We’re pretty funding constrained right now and could use some help!
We also recently published a personal reflection on what Marcus and I have learned in the last five years as well as a review of the organization’s impacts, future plans, and funding needs that you might be interested in or have questions about.
RP’s publicly available research can be found in this database. If you’d like to support RP’s mission, please donate here or contact Director of Development Janique Behman. To stay up-to-date on our work, please subscribe to our newsletter or engage with us on Twitter, Facebook, or LinkedIn.
Doing some napkin-math:
Rethink published 32 pieces of research in 2022 (according to your database)
I think roughly (?) half of your work doesn’t get published as it’s for specific clients, so let’s say you produced 64 reports overall in 2022.
Rethink raised $10.7 million in 2022.
That works out to around $167k per research output.
That seems like a lot! Maybe I should discount a bit as some of this might be for the new Special Projects team rather than research, but it still seems like it’ll be over $100k per research output.
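As a quick sanity check, the napkin math above can be reproduced directly (all figures are the commenter's estimates and assumptions, not official RP numbers):

```python
# Napkin-math sketch using the commenter's own figures and assumptions.
published_2022 = 32               # pieces in RP's public database for 2022
assumed_unpublished_share = 0.5   # assumption: ~half of work goes unpublished
total_outputs = published_2022 / assumed_unpublished_share  # -> 64 outputs
raised_2022 = 10_700_000          # dollars raised in 2022

cost_per_output = raised_2022 / total_outputs
print(f"~${cost_per_output:,.1f} per research output")  # ~$167,187.5
```

Note this divides money *raised* by outputs *produced*, which (as the reply below points out) conflates fundraising for future years with current-year spending.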
Related questions:
Do you think the calculations above are broadly correct? If not, could you share what the ballpark figures might actually be? Obviously, this will depend a lot on the size of the project and other factors but averages are still useful!
If they are correct, how come this number is so high? Is it just due to multiple researchers spending a lot of time per report and making sure it’s extremely high-quality? FWIW I think the value of some RP projects is very high—and worth more than the costs above—but I’m still surprised at the costs.
Is the cost something you’re assessing when you decide whether to take on a research project (when it’s not driven by an external client)? Do you have some internal calculations (or external tool) where you try to calculate the value of information of a given research project and weigh that up against proposed costs?
Hi James,
Thanks for your thoughtful question, but I think you’re thinking about this incorrectly for a few reasons:
Firstly, while we raised $10.7M, most of that was earmarked for 2023 as we usually raise money in the current year for the following year. In 2022, we spent around $6.8M on RP core programs, not including special projects and operations to support special projects.
Secondly, we actually have published less than half of our 2022 research. My rough guess is that in 2022 we produced over 100 pieces of work, not ~64 as you estimate. This is for two reasons:
Some research is confidential for whatever reason and is never intended to be published
Some research is intended to be published but we haven’t had the resources or time to publish it yet because public outputs are not a priority for our clients and their funding does not cover it (this is actually something we’d love to get money from the EA public for).
To give a clearer substitute figure, we generally say that $20K-$40K pays for a typical short-term research project and $70K-$100K pays for a typical in-depth research project.
But more importantly I’d add that counting outputs per dollar is not a good way to view RP’s work. This is for a few reasons:
“Outputs” are a vanity metric, and we don’t want to take a quantity-focused approach to our work where we aim to produce as many outputs as cheaply as possible. Quantifying the impact – and the impact per dollar – of Rethink Priorities is a much more difficult but more meaningful exercise.
“Outputs” vary a lot in size, scope, and funding and aren’t really apples-to-apples comparable in a way that would work for an aggregated metric/count. Some research reports take >12 months of full-time work whereas other research reports (especially internal ones not meant for publication) are completed in two weeks or less.
A lot of the most important work around our research isn’t the time spent actually producing the report, but rather engaging with stakeholders, presenting findings, and providing feedback on others’ research. We also spend a fair amount of time after we produce research reports engaging with the client — answering follow-up questions and otherwise helping people understand the research.
Doing good research at scale requires a sizable operations budget, management budget, strategy work, etc., that doesn’t immediately translate into concrete publishable research outputs.
We also produce endpoints that aren’t research: we incubate organizations, spend researcher time advising them, and spend money organizing conferences and other events.
Love the question
Relatedly, how much of the funding (both for 2022 and for 2024) is for the production of research outputs, and how much is for other operations (like fiscal sponsorships or incubation)?
I think for marginal donations on RP, perhaps the best way to think about this would be in the cost to produce marginal research. A new researcher hire would cost ~$87K in salary (median, there is of course variation by title level here) and ~$28K in other costs (e.g., taxes, employment fees, benefits, equipment, employee travel). We then need to spend ~$31K in marginal spending on operations and ~$28K in marginal spending on management to support a new researcher. So the total cost for one new FTE year of research ends up being ~$174K. I think if you want to get a sense of how much it costs to support research at RP and how that balances between operations and other costs, this is a useful breakdown to look at.
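The marginal-cost figures quoted above add up as follows (a sketch using the approximate numbers from this answer):

```python
# Approximate marginal cost of one new FTE research year at RP,
# using the rough figures quoted in the answer above.
salary = 87_000        # median researcher salary (varies by title level)
other_costs = 28_000   # taxes, employment fees, benefits, equipment, travel
operations = 31_000    # marginal operations spending to support the hire
management = 28_000    # marginal management spending to support the hire

total_fte_cost = salary + other_costs + operations + management
print(f"~${total_fte_cost:,} per new FTE research year")  # ~$174,000
```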
In addition to research and operations, I’d say we produce roughly four other categories of things: fiscal sponsorship, incubated organizations, internal events, and external conferences. Let me go into a bit of detail about that:
Fiscal sponsorship arrangements pay for themselves out of the sponsored org’s budget, so they’re not something we’d seek public funding for.
Incubation work, i.e., work to create and advise new organizations based on our research (e.g., Condor Camp, Insect Institute), is something we’d raise money for and hope would be impactful enough to encourage your support. My rough guess is that in 2024 we would spend ~$150K from our animal welfare research budget and ~$900K for our Existential Security Team to work on incubating new organizations, and we are looking to fundraise for those amounts. These would be subject to similar marginal costs per FTE as mentioned above.
Internal events come out of the operations budget mentioned above.
External conferences are not something we’d seek public funding for – these have always been fully covered by getting a specific grant for the conference.
If you want to financially support only a particular part of RP or a particular thing RP does, let me know and we can discuss ways we could honor that arrangement.
I think this really depends on the research output. $100k for a report with roughly one person year’s worth of effort seems about right. Or roughly one good academic paper or master’s thesis. I suspect a lot of Rethink’s reports are more valuable than that.
That’s $100k all in cost, including costs that aren’t specific to a project. Including salary, overheads, taxes, travel, any expenses, training, recruitment etc.
Do you have a sense of how much funding this informed?
I’m guessing what you mean is something like “One of RP’s aims is to advise grantmaking. How many total dollars of grantmaking have you advised?” You might then be tempted to take this number, divide it by our costs, and compare that to other organizations. But this is actually a tricky question to answer, since it has never been as straightforward a relationship as I’d expect, for a few reasons:
Our advice is marginal and we never make a sole and final decision on any grant. Also the amount of contribution varies a lot between grants. So you need some counterfactually-adjusted marginal figure.
Sometimes our advice leads to grantmakers being less likely to make a grant rather than more likely… how does that count?
The impact value of the grants themselves is not equal.
Some of our research work looks into decisions but doesn’t actually change the answer. For example, we might look into an area that we suspect isn’t promising and confirm that it isn’t. In absolute terms we got nowhere, but the hits-based possibility that it could’ve gone somewhere is valuable. It’s hard to figure out how to quantify this value.
A large portion of our research builds on itself. For example, our invertebrate work has led to some novel grantmaking that likely would not have otherwise happened, but only after three years of work. A lot of our current research is still (hopefully) in that pre-payoff period and so hasn’t led to any concrete grants yet. It’s hard to figure out how to quantify this value.
A large portion of our research is of the form “given that this grant is being made, how can we make sure it goes as well as possible?” rather than actually advising on the initial grant. It’s hard to figure out how to quantify this value.
A lot more of our recent work has been focused on creating entirely new areas to put funding into (e.g., new incubated organizations, exploring new AI interventions). This takes time and is also hard to value.
We’ve been working this year on producing a figure that looks at itemizing decisions we’ve contributed to and estimating how much we’ve influenced that decision and how valuable that decision was, but we don’t have that work finished yet because it is complicated. Additionally, we’ve been involved in such a large number of decisions by this point that it is a lot of hard work to do all the follow-up and number crunching.
Do also keep in mind that influencing grantmaking is not RP’s sole objective and we achieve impact in other ways (e.g., talent recruitment + training + placement, conferences, incubated organizations, fiscal sponsorships).
All this is to say that I don’t actually have an answer to your question. But we did hire a Worldview Investigations Team that is working more on this.
Why does it make sense for Rethink Priorities to host research related to all five of the listed focus areas within one research org? It seems like they have little in common (other than, I guess, all being popular EA topics)?
I agree this is confusing. I get into this in my answer to Sebastian Schmidt.
We spoke a little at EAG London about how people underestimate the mental health challenges people face in EA, especially among the most senior people. You indicated a willingness to talk about it publicly. If you’re still up for it, could you tell us more about your own personal mental health over the past few years and your perceptions of what mental health is like amongst other effective altruists in leadership positions?
It was in an AMA similar to this one that Will MacAskill revealed he took antidepressant medication, and that actually had a large impact on me. I have historically struggled with anxiety and depression, and Will’s response was a large part of why I chose to ask my doctor about SSRIs in 2019. Luckily they worked, and hopefully by sharing my experience I can pay this forward.
Howie Lempel has also been very open about his experience. I think mental health concerns are common among EA “leaders” and I think they have been pretty open about it. I hope that continues and we could always use more.
I have been lucky to find antidepressants, talk therapy, regular exercise, and proactively engaging with a supportive friend group to be a great combination to alleviate the ways in which anxiety would otherwise derail my day. I encourage other people suffering from these conditions to explore these options.
Anxiety and depression will still be a lifelong struggle for me. Even with all of this, there are still a few days a year where I am so anxious and depressed that I sleep for sixteen hours and barely get out of bed. But it’s much less severe now that I’m lucky enough to have effective treatment.
I hope this answer inspires others as Will’s inspired you.
RP seems to have a somewhat unique view among research organisations in identifying a funding gap rather than a talent gap for research staff. I would be very curious why you think this is the case and how you have solved the talent constraints.
I disagree; last I checked most AI safety research orgs think they could make more good hires with more money and see themselves as funding-constrained—at least all 4 that I’m familiar with: RP, GovAI, FAR, and AI Impacts.
Edit: also see the recent Alignment Grantmaking is Funding-Limited Right Now (note that most alignment funding on the margin goes to paying and supporting researchers, in the general sense of the word).
I agree with Zach’s comment that other organizations are also underfunded and so this is not a unique view among RP. See also my comment to Aaron Bergman on donation opportunities. I think my comment to Sebastian Schmidt also helps answer this question and gives a bit more context about how and why RP has been less focused on talent gaps historically.
What are some questions you hope someone’s gonna ask that seem relatively unlikely to get asked organically?
Bonus: what are the answers to those questions?
Honestly, I love this question, but I got asked a lot of real questions that I think were varied and challenging, so I don’t currently feel like I need even more!
Is RP research donor-driven in terms of priorities? Do you worry that Rethink could become vastly more focused on some cause areas over others due to available funding in the space, as opposed to more neglected areas that could be more impactful?
I do think RP, like nearly any other organization, has to “follow the money” and given that RP has historically relied a lot on restricted assets we do end up matching donor priorities. I think this could be good as it gives an independent check on our prioritization and encourages us to be responsive to the needs of the broader dollar-weighted EA community.
On the other hand, it is unlikely that donor priorities are exactly the best thing for us to work on, and since we are funding constrained in all of our areas, I do worry that we will be steered toward particular areas more than impact assessment alone would suggest. This is one reason why we’ve been making a large push to get more unrestricted funding for RP.
Aside from RP, what is your best guess for the org that is morally best to give money to?
I feel a lot of cluelessness right now about how to work out cross-cause comparisons and what decision procedures to use. Luckily we hired a Worldview Investigations Team to work a lot more on this, so hopefully we will have some answers soon.
In the meantime, I currently am pretty focused on mitigating AI risk due to what I perceive as both an urgent and large threat, even among other existential risks. And contrary to last year, I think AI risk work is actually surprisingly underfunded and could grow. So I would be keen to donate to any credible AI risk group that seems to have important work and would be able to spend more marginal money now.
As Co-CEO of RP, I am obligated to say that our AI Governance and Strategy Department is doing this work and is actively seeking funding. Our work on Existential Security and surveys is also very focused on AI and is also funding constrained. You can donate to RP here.
…but given that you asked me specifically for non-RP work, here is my ranked list of other organizations:
Centre for Long-Term Resilience (CLTR) does excellent work and appears to me to be exceptionally well-positioned and well-connected to meet the large UK AI policy window, especially with the UK AI Summit. My understanding is that they have funding gaps and that marginal funding could encourage CLTR to hire more AI-focused staff than they otherwise would and thus hopefully make more and faster progress. Donate here.
The Future Society (TFS) does excellent work on EU AI policy and in particular the EU AI Act. TFS is not an EA organization but that doesn’t mean they don’t do good work and in my personal opinion I think they are really underrated among the EA-affiliated AI governance community. They historically have not focused on fundraising as much as I think they should and they seem to now have a sizable funding gap that I think they could execute well on. Donate here.
Centre for Governance of AI (GovAI) is also doing great work on AI policy in both the US and UK and is very well-positioned. I think it’s plausible that they do the best work of any AI policy organization out there (including RP) – the reason I rank them third is mainly because I’m skeptical of the size of their funding gap and their plans for using marginal money. Donate here.
The Humane League (THL) is my honorable mention. I view AI risk mitigation work as more pressing than animal welfare work right now, but I still care a lot about the plight of animals and so I still support THL. They have a sizable funding gap, execute very competently, and do great work. I think the moral weight work that Rethink Priorities did made some in EA think that shrimp or insect work is more cost-effective than the kind of work THL does, but I don’t actually think that’s true insofar as readily available donation opportunities exist. (I’m unsure what other RP staff think, and of course RP does research on shrimp and insects that we think is cost-effective.) Donate here.
Here are my caveats for the above list:
These are my own personal opinions. Your view may differ from mine even if you agree with me on the relevant facts due to differing values, differing risk tolerances, etc.
I haven’t thought about this that much and I’m answering off-the-cuff for an AMA. This is definitely very subject to change as my own opinions change. I view my opinions on donation targets to be unusually in flux right now.
Statements I have made about these organizations are my own opinions and have not been run by representatives of those organizations. Therefore, I may have misrepresented them.
I don’t know details about room for more funding for these organizations which could change their prioritization in important ways.
Note that where to donate as an individual and what to encourage RP to do as Co-CEO of Rethink Priorities are very different questions, so this list shouldn’t necessarily be taken as an indication of RP’s priorities.
I focused on concrete “endpoint grants” but you may find more value in trusting a grant recommender and giving them money, such as via the Long-Term Future Fund, Manifund, Nonlinear, etc.
I also value giving to political candidates and could view this as plausibly better than some options on my list above but due to US 501c3 law, I don’t want to solicit donations to such candidates.
I know very little about the technical alignment landscape so perhaps there are good AI risk mitigation efforts there to support that would beat the options I recommend.
A lot of information I am relying on to create this list is confidential and there’s also likely a lot of additional confidential information I don’t know about that could change my list.
In honor of this question and to put some skin in the game behind these recommendations, I have given each of the four organizations I listed $1000.
Have you considered doing an Animal Charity Evaluators review? I personally think Rethink puts out some of the most important animal-related research out there!
Thanks for the compliment! We have considered it a few times but ultimately declined the opportunity for a few reasons:
There are capacity limitations on our end.
We have concerns around how Rethink Priorities would be viewed by ACE’s audience given that we do a lot of research work in many different areas.
We like the opportunity to be constructively critical of ACE’s research work and like that they are also willing to challenge and push back on our research work. We are concerned this dynamic might get complicated if we are in a clear reviewer-reviewee relationship.
We do work with ACE a lot and are excited to continue to work with them. We’d definitely consider doing an ACE review in future years if invited. We also hope that fans of our work will consider supporting us financially even if we don’t have an ACE top charity designation!
What is some RP research that you think was extremely important or view-changing but got relatively little attention from the EA community or relevant stakeholders?
Hi everyone! I’m sorry I didn’t get to all the questions today—it was more work than I anticipated to put together. I will answer more tomorrow and I will keep going until everything has an answer!
What are some of your proudest ‘impact stories’ from RP’s research? E.g. you did research on insects and now X funders will dedicate $Y million to insect welfare
Are there any notable differences in your ability to have impact in the different areas you conduct research? E.g. one area where important novel insights are easier / harder, or one area where relevant research is more easily translated into practice
Yes. I think animal welfare remains incredibly understudied and thus it is easier to have a novel insight, but also there is less literature to draw from and you can end up more fundamentally clueless. Whereas in global health and development work there is much more research to draw from, which makes it nicer to be able to do literature reviews to turn existing studies and evidence into grant recommendations, but also means that a lot of the low-hanging fruit has been done already.
Similarly, there is a lot more money available to chase top global health interventions relative to animal welfare or x-risk work, but it is also comparatively harder to improve recommendations, as a lot of the recommendations are already well-known by foundations and policymakers.
AI has been an especially interesting place to work in because it has been rapidly mainstreaming this year. Previously, there was not much to draw on but now there is much more to draw from and many more people are open to being advised on work in the area. However, there are also many more people trying to get involved and work is being produced at a very rapid pace, which can make it harder to keep up and harder to contribute.
Re existential security, what are your AGI timelines and p(doom|AGI) like, and do you support efforts calling for a global moratorium on AGI (to allow time for alignment research to catch up / establish the possibility of alignment of superintelligent AI)?
As for existential risk, my current very tentative forecast is that the world state at the end of 2100 will look something like this:
73% - the world in 2100 looks broadly like it does now (in 2023), in the same sense that the current 2023 world looks broadly like it did in 1946. That is to say, of course there will be a lot of technological and sociological change between now and then, but by the end of 2100 there still won’t have been unprecedented explosive economic growth (e.g., >30% GWP growth per year), an existential disaster, etc.
9% - the world is in a singleton state controlled by an unaligned rogue AI acting on its own initiative.
6% - the future is good for humans but our AI / post-AI society causes some other moral disaster (e.g., widespread abuse of digital minds, widespread factory farming)
5% - we get aligned AI, solve the time of perils, and have a really great future
4% - the world is in a singleton state controlled by an AI-enabled dictatorship that was initiated by some human actor misusing AI intentionally
1% - all humans are extinct due to an unaligned rogue AI acting on its own initiative
2% - all humans are extinct due to something else on this list (e.g., some other AI scenario, nukes, biorisk, unknown unknowns)
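One quick check on the scenario list above is that the probabilities form a complete distribution (these are the tentative figures from this answer, restated verbatim):

```python
# The tentative 2100 scenario probabilities listed above,
# checked to confirm they sum to a full distribution.
scenarios = {
    "broadly like today (no explosive growth or disaster)": 0.73,
    "singleton controlled by unaligned rogue AI":           0.09,
    "good for humans, but other moral disaster":            0.06,
    "aligned AI, time of perils solved, great future":      0.05,
    "singleton: AI-enabled human dictatorship":             0.04,
    "extinction via unaligned rogue AI":                    0.01,
    "extinction via something else on the list":            0.02,
}
total = sum(scenarios.values())
print(f"total probability: {total:.2f}")  # 1.00
```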
I think conditional on producing minimal menace AI by the end of 2070, there’s a 28% chance an existential risk would follow within the next 100 years that could be attributed to that AI system.
Though I don’t know how seriously you should take this, because forecasting >75 years into the future is very hard.
Also my views of this are very incomplete and in flux and I look forward to refining them and writing more about them publicly.
This is interesting and something I haven’t seen much expressed within EA. What is happening in the 8% where the humans are still around and the unaligned singleton rogue AI is acting on its own initiative? Does it just take decades to wipe all the humans out? Are there digital uploads of (some) humans for the purposes of information saving?[1] Does the AI hit a ceiling on intelligence/capability that means humans retain some economic niches? Is the misalignment only partial, so that the AI somehow shares some of humanity’s values (enough to keep us around)?
Does this mean that you think we get alignment by default? Or that alignment is on track to be solved on this timeline? Or that we somehow survive misaligned AI (as per the above discrepancy between your estimates for a singleton unaligned rogue AI and human extinction)? As per my previous comment, I think the default outcome of AGI is doom with high likelihood (and I haven’t received any satisfactory answers to the question “If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?”).
This still seems like pretty much an existential catastrophe in my book, even if it isn’t technically extinction.
Thanks for elaborating, Peter! Do you mind sharing how you obtained those probabilities? Are they your subjective guesses?
I have trouble understanding what “AGI” specifically refers to, and I don’t think it’s the best way to think about risks from AI. As you may know, in addition to being Co-CEO at Rethink Priorities, I take forecasting seriously as a hobby, and people actually for some reason pay me to forecast, making me a professional forecaster. So I think a lot in terms of concrete resolution criteria for forecasting questions, and my thinking on these questions has been meaningfully bottlenecked by not knowing what those concrete resolution criteria are.
That being said, being a good thinker also involves figuring out how to operate in some sort of undefined grey space, so I should be at least comfortable enough with compute trends, algorithmic progress, etc., to give some sort of answer. And so for the type of AI that I struggle to define but am worried about – the kind that has the capability of autonomously causing existential risk, the kind that AI researcher Caroline Jeanmaire refers to as the “minimal menace” – I am willing to tentatively put the following distribution on its arrival:
5% probability of happening before 2035
20% probability of before 2041
50% probability of before 2054
80% probability of before 2400
(Though of course that’s my own distribution; I’m not saying that Caroline or others would agree.)
To be clear that’s an unconditional distribution, so it includes the possibility of us not producing “minimal menace” AI because we go extinct from something else first. It includes the possibility of AI development being severely delayed due to war or other disasters, the possibility of policy delaying AI development, etc.
I’m still actively working on refining this view so it may well change soon. But this is my current best guess.
Thanks for your detailed answers Peter. Caroline Jeanmaire’s “minimal menace” is a good definition of AGI for our purposes (but so are Holden Karnofsky’s PASTA, OpenPhil’s Transformative AI, and Matthew Barnett’s TAI).
I’m curious about your 5% by 2035 figure. Has this changed much as a result of GPT-4? And what is happening in the remaining 95%? How much of that is extra “secret sauce” remaining undiscovered? A big reason for me updating so heavily toward AGI being near (and correspondingly, doom being high given the woeful state-of-the-art in Alignment) is the realisation that there very well may be no additional secret sauce necessary and all that is needed is more compute and data (read: money) being thrown at it (and 2 OOMs increase in training FLOP over GPT-4 is possible within 6-12 months).
How likely do you consider this to be, conditional on business as usual? I think things are moving in the right direction, but we can’t afford to be complacent. Indeed we should be pushing maximally for it to happen (to the point where, to me, almost anything else looks like “rearranging deckchairs on the Titanic”).
Whilst I may not be a professional forecaster, I am a successful investor and I think I have a reasonable track record of being early to a number of significant global trends: veganism (2005), cryptocurrency (several big wins from investing early—BTC, ETH, DOT, SOL, KAS; maybe a similar amount of misses but overall up ~1000x), Covid (late Jan 2020), AI x-risk (2009), AGI moratorium (2023, a few days before the FLI letter went public).
I’m definitely interested in seeing these ideas explored, but I want to be careful before getting super into it. My guess is that a global moratorium would not be politically feasible. But pushing for a global moratorium could still be worthwhile to pursue even if it is unlikely to happen as it could be a good galvanizing ask that brings more general attention to AI safety issues and make other policy asks seem more reasonable by comparison. I’d like to see more thinking about this.
On the merits of the actual policy, I am unsure whether a moratorium is a good idea. My concern is that it may just produce a larger compute overhang, which could increase the likelihood of future discontinuous and hard-to-control AI progress.
Some people in our community have been convinced that an immediate and lengthy AI moratorium is a necessary condition for human survival, but I don’t currently share that assessment.
Good to see that you think the ideas should be explored. I think a global moratorium is becoming more feasible, given the UN Security Council meeting on AI, The UK Summit, the Statement on AI risk, public campaigns etc.
Re compute overhang, I don’t think this is a defeater. We need the moratorium to be indefinite, and only lifted when there is a global consensus on an alignment solution (and perhaps even a global referendum on pressing go on more powerful foundation models).
This makes sense given your timelines and p(doom) outlined above. But I urge you (and others reading) to reconsider the level of danger we are now in[1].
Or, ahem, to rethink your priorities (sorry).
What are your thoughts, for you personally, around...
I) Time spent
II) Joy of use
III) Value of information gained
of Manifold vs Metaculus?
I use both Manifold and Metaculus every day and it’s not really clear to me which I spend time on more. The answer is “a lot” to both.
For joy of use, I think Manifold has worked hard to make the forecasting process very seamless and I like that. I also like the gamification of the mana profit system. That being said, I think the questions on Metaculus tend to be more interesting. I personally like having rigorous resolution criteria and I personally prefer being able to give my true probabilities rather than bet up or down. So Metaculus might suit my personality better.
Surprisingly I don’t really have a clear read which platform is more accurate. So I think the value of information is optimized by using both platforms. I’m keen to see this researched more.
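One standard way to get a read on relative accuracy (not something either platform publishes in directly comparable form, as far as I know) is to score resolved questions both platforms share with a proper scoring rule like the Brier score. A minimal sketch, with entirely made-up forecast numbers for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; 0.25 is what always guessing 50% would score."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical community forecasts on the same four resolved questions
manifold = [0.7, 0.2, 0.9, 0.4]
metaculus = [0.65, 0.25, 0.85, 0.35]
outcomes = [1, 0, 1, 0]

print("Manifold: ", brier_score(manifold, outcomes))
print("Metaculus:", brier_score(metaculus, outcomes))
```

A real comparison would need to restrict to questions with identical resolution criteria and matched forecast timestamps, which is much of why this research is hard.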
By answering this question I should disclose that Metaculus pays me money for being a forecaster. I suppose Manifold also indirectly pays me money because RP is part of their Manifold for Charity program. So my feelings towards them are not exactly unbiased.
You said in your “Five years” post that you are planning to do more self-eval and impact assessments, and I strongly encourage this. What are the most realistic bits of evidence you could get from an impact report of Rethink Priorities which would cause you to dramatically update your strategy? (or, another generator: what are you most worried about learning from such assessments?)
What do you think the ideal ratio in terms of resource allocation between thinking/research and doing/action in EA would be? (I recognize those categories are ill-defined, and some activities won’t comfortably fall into either bucket. But they seem discrete enough to make a question about balancing different kinds of work worthwhile.)
I think it varies a lot by cause area but I think you would be unsurprised to hear me recommend more marginal thinking/research. I think we’re still pretty far from understanding how to best allocate a doing/action portfolio and there’d still be sizable returns from thinking more.
Rethink feels unique among EA orgs—it’s large, not attached to a university, not a foundation. Why aren’t there more standalone research shops? Should there be?
RP’s arrangement here is definitely not unique to EA, though I do agree we may be the largest EA-affiliated non-university non-foundation research organization, as my guess is we are a little larger than GiveWell by FTE headcount. Though adding all those caveats ends up with me not saying very much, kinda like talking about being the largest private Catholic university in Vermont.
I think university affiliations definitely matter, especially for getting your work in front of policymakers. My guess is that research organizations choose to affiliate with a university when they can for this reason, and it’s a good one.
But I also like not having to worry about the bureaucracy that comes with interfacing with a university and I think this has historically allowed RP to be more agile and grow faster. I think it’s important that EA have both university and non-university research organizations.
(Obviously everyone would love to be attached to a multi-billion dollar foundation and if we can get more of those we obviously should, but I assume that’s not really an option.)
Hi Peter, thanks for your work. I have several questions:
Most organizations within EA are relatively small (<20). Why do you think that’s the case and why is RP different?
How do you decide on which research areas to focus on and, relatedly, how do you decide how to allocate money to them?
What do you focus on within civilizational resilience?
How do you decide whether something belongs to the longtermism department (i.e., whether it’ll affect the long-term future)?
We do broadly aim to maximize the cost-effectiveness of our research work and so we focus on allocating money to opportunities that we think are most cost-effective on the margin.
Given that, it may be surprising that we work in multiple cause areas, but we face some interesting constraints and considerations:
There is significant uncertainty about which priority area is most impactful. The general approach at RP has been that we can scale up multiple high-quality research teams across a variety of cause areas more easily than we can figure out which cause area we ought to prioritize. That said, we recently hired a Worldview Investigations Team to work much more on the broader question of how to allocate an EA portfolio, and we are also investing a lot more in our own impact assessment. Together, we hope these will give us more insight into how to allocate our work going forward.
There may be diminishing returns to RP focusing on any one priority area.
A large share of our resources is not fungible across these different areas. The marginal opportunity cost of taking restricted funding is pretty low, as we cannot easily allocate those resources to other areas even if we were convinced they were higher impact.
Work in any single area might gain from our working on multiple areas, as teams have much greater access to centralized resources, staff, funding, and productive oversight than they would receive if the team existed independently and focused solely on that priority. Relationships within one area could potentially be useful for work in another.
Working across different priorities allows the organization to build capacity, reputation, and relations, and maintain option value for the future.
Thanks for this. I notice that all of these reasons are points in favor of working on multiple causes and seem to neglect considerations that would go in the other direction. And clearly, you take these considerations seriously too (e.g., scale and urgency), as you recently decided to focus exclusively on AI within the longtermism team.
I’m not exactly sure and I think you’d have to ask some other smaller organizations. My best guess is that scaling organizations is genuinely hard and risky, and I can understand other organizations may feel that they work best and are more comfortable with being small. I think RP has been different by:
Working in multiple cause areas lets us tap into multiple funding sources, increasing the amount of money we could take in. It also increased the amount of work we wanted to do and the number of people we wanted to hire.
By being 100% remote-first from the beginning, we had a much larger talent pool to tap into. I think we’ve also been more willing to take chances on more junior-level researchers which has also broadened our talent pool. This allowed us to hire more.
I think there has also just been a general willingness and aspiration to be a big research organization and take on that risk, rather than intentionally going slow.
This makes sense. Do you have any explicit intentions for how big you want to get?
We haven’t had to make too many fine-grained decisions, so it hasn’t been something that has come up enough to merit a clear decision procedure. I think the trickiest decision was what to do with research aimed at understanding and mitigating the negative effects of climate change. The main considerations were questions like “how do our stakeholders classify this work” and “what is the probability of this issue leading to human extinction within the century” and both of those considerations led to climate change work falling into our “global health and development” portfolio.
This year we’ve made an intentional decision to focus nearly all our longtermist work on AI due to our assessment of AI risk as both unusually large and urgent, even among other existential risks. We will revisit this decision in future years, and to be clear, this does not mean we think other people shouldn’t work on non-AI x-risk or longtermist work not oriented towards existential risk reduction. But it does mean we don’t have any current work on civilizational resilience.
That being said, we have done some work on this in the past:
Linch did a decent amount of research and coordination work around exploring civilizational refuges but RP is no longer working on this project.
Jam has previously done work on far-UVC, for example by contributing to “Air Safety to Combat Global Catastrophic Biorisks”.
We co-supported Luisa in writing “What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?” while she was a researcher at both Rethink Priorities and Forethought Foundation.
How has your experience as co-CEO been? How do you share responsibilities? Would you recommend it to other orgs?
I’ve personally liked it. There have been several times when I’ve talked with my co-CEO Marcus about whether one of us should just become CEO and it’s never really made sense. We work well together and the co-CEO dynamic creates a great balance between our pros and cons as leaders – Marcus leads the organization to be more deliberate and careful at the cost of potentially going too slowly and I lead the organization to be more visionary at the cost of potentially being too chaotic.
Right now we split the organization so that Marcus handles the portfolios pertaining to Global Health and Development, Animal Welfare, and Worldview Investigations, and I handle the portfolios pertaining to AI Governance and Strategy, Existential Security (AI-focused incubation), and Surveys and Data Analysis (currently also mostly AI policy focused, though you may know us mainly from the EA Survey).
I’m unsure if I’d recommend it to other orgs. I think most times it wouldn’t make sense. But I think it does make sense when there are two co-founders with an equally natural claim and desire to claim the CEO mantle, when they balance each other well, and when there is some sort of clear split and division of responsibility.
What’s something about you that might surprise people who only know your public, “professional EA” persona?
I like pop music, like Ariana Grande and Olivia Rodrigo, though Taylor Swift is the Greatest of All Time. I went to the Eras Tour and loved it.
I have strong opinions about the multiple types of pizza.
I’m nowhere near as good at coming up with takes and opinions off-the-cuff in verbal conversations as I am in writing. I’m 10x smarter when I have access to the internet.
There is a sense that the journal system is obviously flawed and could be trivially improved. Why hasn’t EA done this?
We publish lots of material, we have lots of resources. It seems possible to imagine building a few journals that run in a different way.
And even if others don’t respect them, if EA orgs did and they were less onerous to publish to I imagine outsiders would start to.
I haven’t actually thought much about the academic journal system, though I’m interested in what David Reinstein (former RP staff member) has been doing with his Unjournal.
Peter is one of the best people I know well. He is kind, empathetic, wise, hard working, and well-calibrated, to name a few. Generally I want to be more like a given person along one axis, whereas I wish I were more like Peter along many. I know that his character has been developed with work and over time, so I’d like to commend him for this. And thank him for his hard work and the outputs of it.
I guess that to the reader I’d say that Peter is good in ways you can see, but also as good in many ways that you can’t—he gives good advice, he provides insight on any topic[1], he is both driven and realistic. There are few people I am more confident will handle difficult situations well. It’s probably worth asking questions you consider a bit outside his area of expertise because I guess he’ll answer those well too (or admit that he can’t, which is excellent).
Suggested prompts that people might not consider: Peter and Marcus and their experiences of co-leading RP, having been in and around EA for a long time, US politics, therapy, Taylor Swift, maintaining work life balance, grief, EA strategy questions, difficult decisions for RP.[2]
I’d encourage people to post their favourite bits of Peter’s writing, because it feels like all AMAs should include that.
This sounds like hyperbole, but I think I’d expect to take any topic to Peter and come out with a clearer or different perspective.
I checked with Peter about this list.
I think this is a particularly good piece by Peter, though I am crying reading it. https://www.pasteurscube.com/for-samantha-a-eulogy/
This is so very sweet—thank you!
Squiggle vs squigglepy?
I like both! I developed squigglepy because I liked squiggle so much but RP really wanted to make use of the Python ecosystem. So now we get the best of both worlds! I am a strong supporter of QURI, especially because Rethink Priorities now provides them with fiscal sponsorship.
[This is 75% a joke, as Peter developed squigglepy based on QURI’s squiggle]
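For readers unfamiliar with either tool: the core idea both squiggle and squigglepy support is expressing an uncertain estimate as distributions over inputs and propagating that uncertainty by Monte Carlo sampling. A minimal sketch of that idea in plain Python (stdlib only, with made-up numbers, deliberately not using either library’s actual API):

```python
import random
import statistics

random.seed(0)  # for reproducibility

def sample_cost_per_outcome():
    # Hypothetical inputs: spending is right-skewed (lognormal),
    # outcomes achieved are roughly normal, floored at 1.
    cost = random.lognormvariate(8, 0.5)       # dollars spent
    outcomes = max(random.gauss(100, 30), 1)   # outcomes achieved
    return cost / outcomes

# Propagate input uncertainty into a distribution over cost-effectiveness
samples = [sample_cost_per_outcome() for _ in range(10_000)]
print(f"median $/outcome: {statistics.median(samples):.2f}")
```

Squigglepy wraps this pattern in a much more concise distribution-definition syntax while plugging into the Python (e.g., NumPy/SciPy) ecosystem, which was RP’s motivation for building it.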
(1) where do you think forecasting has its best use-cases? where do you think forecasting doesn’t help, or could hurt?
interested in your answer both as the co-CEO of an organization making important decisions, and an avid forecaster yourself.
(2) what are RP’s plans with the Special Projects Program?
The plan for RP Special Projects is to continue to fiscally sponsor our existing portfolio of organizations, see how that goes, and keep building capacity to support additional organizations in the future. Current marginal Special Projects time is going into exploring more incubation work with our Existential Security department.
I’m actually surprisingly unsure about this, especially given how interested I am in forecasting. I think when it comes to actual institutional decision making it is pretty rare for forecasts to be used in very decision-relevant ways and a lot of the challenge comes from asking the right questions in advance rather than the actual skill of creating a good forecast. And a lot of the solutions proposed can be expensive, overengineered, and focus far too much on forecasting and not enough on the initial question writing. Michael Story gets into this well in “Why I generally don’t recommend internal prediction markets or forecasting tournaments to organisations”.
I think something like “Can Policymakers Trust Forecasters?” from the Institute for Progress takes a healthier view about how to use forecasting. Basically, you need some humility about what forecasting can accomplish, but explicit quantification of your views is a good thing, and it is also really good for society generally to grade experts on their accuracy rather than on their ability to manipulate the media system.
Additionally, I do think that knowing about the world ahead seems generally valuable and forecasting still seems like one of the best ways to do that. For example, everything we know about existential risk essentially comes down to various kinds of forecasting.
Lastly, my guess is that a lot of the potential of forecasting for institutional decision making is still untapped and merits further meta-research and exploration.
Do you think that promoting alternative proteins is (by far) the most tractable way to make conventional animal agriculture obsolete?
Do you think increasing public funding and support for alternative proteins is the most pressing challenge facing the industry?
Do you think there is expert consensus on these questions?
Evidence for alternative proteins being the most tractable way to make conventional animal agriculture obsolete is fairly weak. For example, similar products (eg, plant-based milk, margarine) have not made their respective categories obsolete.
Instead, we have, and will continue to need, a multi-pronged approach to transitioning conventional animal agriculture to a more just and humane system.
~
Alternative proteins is a varied landscape so I imagine that the bottlenecks will be pretty different depending on the particular product, company, and approach. Unfortunately I am not up to date on details with regard to the funding gaps in this area.
~
Unfortunately there is not. There also just aren’t that many experts in this area in the first place.
Thanks so much for your insight!
I learned a lot, although I wish I had been clearer and asked about the tractability of alternative proteins reaching price parity (instead of just the tractability of “promoting” them). Because:
Plant-based milks are still more expensive (source) and maybe not as nutritious (e.g. less calcium, B12, etc.) so I think they (and many other existing products) may not be a reliable indicator of the potential of this field to make “conventional animal agriculture obsolete”.
I think their potential to replace factory farming lies in their viability of becoming as cheap, tasty, and nutritious as conventional animal products, but I’d love to know if you (and other experts) think that’s probably a pipe dream.
I’d love to be corrected if I’m wrong (although I’m sure you’re very busy) and also wanted to say thanks again.
Dear Mr. Wildeford,
To what extent does your work depend on your own staff vs. the academic EA infrastructure?
There are organizations, such as Effective Thesis, that try to redirect academic resources toward EA research. Do you have any relationship with those organizations? Is there any way for external collaborators to work with your organization? Could you elaborate on your vision of how “in house” and “external” research should be optimally combined at Rethink Priorities?
Thank you very much for your excellent work.
Kind regards,
Arturo
Rethink Priorities does collaborate with academic institutions, mainly through hiring academics as contractors to either do novel research and/or to review our work. My sense is that these academics we work with are not unusually likely to be EA-affiliated though and I wouldn’t say we use any EA-specific academic infrastructure.