Peter Wildeford
Along with my co-founder, Marcus A. Davis, I run Rethink Priorities. I’m also a Grant Manager for the Effective Altruism Infrastructure Fund and a top forecaster on Metaculus. Previously, I was a professional data scientist.
Will—of course I have some lingering reservations but I do want to acknowledge how much you’ve changed and improved my life.
You definitely changed my life by co-creating the Centre for Effective Altruism, which played a large role in building organizations like Giving What We Can and 80,000 Hours, which are what drew me into EA. I was also very inspired by “Doing Good Better”.
To get more personal—you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn’t very impactful and that I should consider 80,000 Hours career coaching instead, which I did.
You also changed my life by being open about taking antidepressants, which is ~90% of the reason why I decided to also consider taking antidepressants even though I didn’t feel “depressed enough” (I definitely was). I felt that if you were taking them, and you seemed normal / fine / not clearly and obviously depressed all the time, yet still benefited from them, then maybe I would also benefit from them (I did). It really shattered a stereotype for me.
You’re now an inspiration for me in terms of resilience. An impact journey isn’t always up and up and up all the time. 2022 and 2023 were hard for me. I imagine they were much harder for you—but you persevere, smile, and continue to show your face. I like that and want to be like that too.
I am hiring an Executive Research Assistant—improve RP while receiving mentorship
I agree—I think the financial uncertainty created by having to renew funding each year is very significantly costly and stressful and makes it hard to commit to longer-term plans.
I’ve done this!
Hi Elizabeth,
I represent Rethink Priorities but the incubator Charlie is referencing was/is run by Charity Entrepreneurship, which is a different and fully separate org. So you would have to ask them.
If there are any of your questions you’d want me to answer with reference to Rethink Priorities, let me know!
Hi Charlie,
Peter Wildeford from Rethink Priorities here. I think about this sort of thing a lot. I’m disappointed in your cheating but appreciate your honesty and feedback.
We’ve considered using a time verification system many times and even tried it once. But it was a pretty stressful experience for applicants, since the timer required the entire task to be done in one sitting. The system we used also introduced some logistical difficulty on our end.
We’d like to make things as easy for our applicants as possible, since it’s already such a stressful experience. At the same time, we don’t want to incentivize cheating or make people feel like they have to cheat to stay ahead. It’s a difficult trade-off. But so far I think it’s been working—we’ve been hiring a lot of honest, high-integrity people whom I trust greatly and don’t feel I need a timer to micromanage.
More recently, we’ve been experimenting with more explicit honor code statements. We’ve also done more to pre-test all our work tests to ensure the time limits are reasonable and practical. We’ll continue to think and experiment around this and I’m very open to feedback from you or others about how to do this better.
Yes. I think animal welfare remains incredibly understudied, so it is easier to have a novel insight, but there is also less literature to draw from and you can end up more fundamentally clueless. In global health and development work, by contrast, there is much more research to draw from, which makes it easier to do literature reviews that turn existing studies and evidence into grant recommendations, but it also means that a lot of the low-hanging fruit has already been picked.
Similarly, there is a lot more money available to chase top global health interventions relative to animal welfare or x-risk work, but it is also correspondingly harder to improve on existing recommendations, as many of them are already well known by foundations and policymakers.
AI has been an especially interesting place to work because it has been rapidly mainstreaming this year. Previously there was not much to draw on, but now there is much more, and many more people are open to being advised on work in the area. However, there are also many more people trying to get involved, and work is being produced at a very rapid pace, which can make it harder to keep up and harder to contribute.
I think it varies a lot by cause area but I think you would be unsurprised to hear me recommend more marginal thinking/research. I think we’re still pretty far from understanding how to best allocate a doing/action portfolio and there’d still be sizable returns from thinking more.
- I like pop music, like Ariana Grande and Olivia Rodrigo, though Taylor Swift is the Greatest of All Time. I went to the Eras Tour and loved it.
- I have strong opinions about the multiple types of pizza.
- I’m nowhere near as good at coming up with takes and opinions off-the-cuff in verbal conversations as I am in writing. I’m 10x smarter when I have access to the internet.
(1) where do you think forecasting has its best use-cases? where do you think forecasting doesn’t help, or could hurt?
I’m actually surprisingly unsure about this, especially given how interested I am in forecasting. I think when it comes to actual institutional decision making, it is pretty rare for forecasts to be used in very decision-relevant ways, and a lot of the challenge comes from asking the right questions in advance rather than from the actual skill of creating a good forecast. And a lot of the solutions proposed can be expensive, overengineered, and focused far too much on forecasting and not enough on the initial question writing. Michael Story gets into this well in “Why I generally don’t recommend internal prediction markets or forecasting tournaments to organisations”.
I think something like “Can Policymakers Trust Forecasters?” from the Institute for Progress takes a healthier view of how to use forecasting. Basically, you need some humility about what forecasting can accomplish, but explicit quantification of your views is a good thing, and it is also really good for society generally to grade experts on their accuracy rather than on their ability to manipulate the media system.
Additionally, I do think that knowing what the world will look like ahead of time seems generally valuable, and forecasting still seems like one of the best ways to do that. For example, everything we know about existential risk essentially comes down to various kinds of forecasting.
Lastly, my guess is that a lot of the potential of forecasting for institutional decision making is still untapped and merits further meta-research and exploration.
(2) what are RP’s plans with the Special Projects Program?
The plan for RP Special Projects is to continue fiscally sponsoring our existing portfolio of organizations, see how that goes, and keep building capacity to support additional organizations in the future. Current marginal Special Projects time is going into exploring more incubation work with our Existential Security department.
Do you think that promoting alternative proteins is (by far) the most tractable way to make conventional animal agriculture obsolete?
Evidence for alternative proteins being the most tractable way to make conventional animal agriculture obsolete is fairly weak. For example, similar products (e.g., plant-based milk, margarine) have not made their respective categories obsolete.
Instead, we have, and will continue to need, a multi-pronged approach to transitioning conventional animal agriculture to a more just and humane system.
~
Do you think increasing public funding and support for alternative proteins is the most pressing challenge facing the industry?
Alternative proteins are a varied landscape, so I imagine the bottlenecks will be pretty different depending on the particular product, company, and approach. Unfortunately I am not up to date on the details of the funding gaps in this area.
~
Do you think there is expert consensus on these questions?
Unfortunately there is not. There also just aren’t that many experts in this area in the first place.
Honestly, I love this question, but I got asked a lot of real questions that were varied and challenging, so right now I don’t feel like I need even more!
What do you focus on within civilizational resilience?
This year we’ve made an intentional decision to focus nearly all our longtermist work on AI, due to our assessment of AI risk as both unusually large and urgent, even among other existential risks. We will revisit this decision in future years, and to be clear, this does not mean that we think other people shouldn’t work on non-AI x-risk or on longtermist work not oriented towards existential risk reduction. But it does mean we don’t have any current work on civilizational resilience.
That being said, we have done some work on this in the past:
- Linch did a decent amount of research and coordination work around exploring civilizational refuges, but RP is no longer working on this project.
- Jam has previously done work on far-UVC, for example by contributing to “Air Safety to Combat Global Catastrophic Biorisks”.
- We co-supported Luisa in writing “What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?” while she was a researcher at both Rethink Priorities and Forethought Foundation.
I agree with Zach’s comment that other organizations are also underfunded, so this is not unique to RP. See also my comment to Aaron Bergman on donation opportunities. I think my comment to Sebastian Schmidt also helps answer this question and gives a bit more context about how and why RP has been less focused on talent gaps historically.
I’m guessing what you mean is something like “One of RP’s aims is to advise grantmaking. How many total dollars of grantmaking have you advised?” You might then be tempted to take this number, divide it by our costs, and compare that to other organizations. But this is actually a tricky question to answer, since the relationship has never been as straightforward as you might expect, for a few reasons:
- Our advice is marginal and we never make the sole and final decision on any grant. The amount we contribute also varies a lot between grants. So you need some counterfactually-adjusted marginal figure.
- Sometimes our advice leads to grantmakers being less likely to make a grant rather than more likely… how does that count?
- The impact value of the grants themselves is not equal.
- Some of our research work looks into decisions but doesn’t actually change the answer. For example, we look into an area that we think isn’t promising and confirm it isn’t promising; in absolute terms we got nowhere, but the hits-based fact that it could’ve gone somewhere is valuable. It’s hard to figure out how to quantify this value.
- A large portion of our research builds on itself. For example, our invertebrate work has led to some novel grantmaking that likely would not have otherwise happened, but only after three years of work. A lot of our current research is still (hopefully) in that pre-payoff period and so hasn’t led to any concrete grants yet. It’s hard to figure out how to quantify this value.
- A large portion of our research is of the form “given that this grant is being made, how can we make sure it goes as well as possible?” rather than advising on the initial grant. It’s hard to figure out how to quantify this value.
- A lot more of our recent work has been focused on creating entirely new areas to put funding into (e.g., new incubated organizations, exploring new AI interventions). This takes time and is also hard to value.
- We’ve been working this year on producing a figure that itemizes the decisions we’ve contributed to and estimates how much we influenced each decision and how valuable it was (a rough illustration of that kind of calculation is sketched after this answer), but we don’t have that work finished yet because it is complicated. Additionally, we’ve been involved in such a large number of decisions by this point that it is a lot of hard work to do all the follow-up and number crunching.
- Do also keep in mind that influencing grantmaking is not RP’s sole objective; we achieve impact in other ways (e.g., talent recruitment + training + placement, conferences, incubated organizations, fiscal sponsorships).
All this is to say that I don’t actually have an answer to your question. But we did hire a Worldview Investigations Team that is working more on this.
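To make the “itemize and adjust” approach mentioned in the list above a bit more concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical: the numbers are invented and the structure is only my rough rendering of what a counterfactually-adjusted influence figure could look like, not RP’s actual methodology.

```python
# Hypothetical sketch of a counterfactually-adjusted "grantmaking influenced" figure.
# None of these numbers are real RP data; they only illustrate the structure.

decisions = [
    # size: grant size in dollars
    # influence_share: estimated share of the decision attributable to our advice
    # counterfactual_same: probability the decision would have gone the same way without us
    {"size": 1_000_000, "influence_share": 0.30, "counterfactual_same": 0.50},
    {"size": 250_000,   "influence_share": 0.10, "counterfactual_same": 0.80},
    {"size": 2_000_000, "influence_share": 0.05, "counterfactual_same": 0.90},
]

def adjusted_influence(decision):
    """Dollars influenced, discounted by how much of the decision was ours
    and by how likely the same outcome was anyway."""
    return (
        decision["size"]
        * decision["influence_share"]
        * (1 - decision["counterfactual_same"])
    )

total = sum(adjusted_influence(d) for d in decisions)
print(f"Counterfactually-adjusted grantmaking influenced: ${total:,.0f}")
```

Even a calculation like this leaves out most of the complications in the list above (grants we advised against, differences in the value of grants, research that pays off years later), which is part of why producing a single headline number is slow going.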
I’ve personally liked it. There have been several times when I’ve talked with my co-CEO Marcus about whether one of us should just become CEO and it’s never really made sense. We work well together and the co-CEO dynamic creates a great balance between our pros and cons as leaders – Marcus leads the organization to be more deliberate and careful at the cost of potentially going too slowly and I lead the organization to be more visionary at the cost of potentially being too chaotic.
Right now we split the organization well: Marcus handles the portfolios pertaining to Global Health and Development, Animal Welfare, and Worldview Investigations… and I handle the portfolios pertaining to AI Governance and Strategy, Existential Security (AI-focused incubation), and Surveys and Data Analysis (currently mostly AI-policy focused, though you may know us mainly from the EA Survey).
I’m unsure if I’d recommend it to other orgs. I think most times it wouldn’t make sense. But I think it does make sense when there are two co-founders with an equally natural claim and desire to claim the CEO mantle, when they balance each other well, and when there is some sort of clear split and division of responsibility.
do you support efforts calling for a global moratorium on AGI (to allow time for alignment research to catch up / establish the possibility of alignment of superintelligent AI)?
I’m definitely interested in seeing these ideas explored, but I want to be careful before getting super into it. My guess is that a global moratorium would not be politically feasible. But pushing for a global moratorium could still be worthwhile even if it is unlikely to happen, as it could be a good galvanizing ask that brings more general attention to AI safety issues and makes other policy asks seem more reasonable by comparison. I’d like to see more thinking about this.
On the merits of the actual policy, I am unsure whether a moratorium is a good idea. My concern is that it may just produce a larger compute overhang, which could increase the likelihood of future discontinuous and hard-to-control AI progress.
Some people in our community have been convinced that an immediate and lengthy AI moratorium is a necessary condition for human survival, but I don’t currently share that assessment.
p(doom|AGI)
As for existential risk, my current, very tentative forecast is for the world state at the end of 2100 to look something like:
73% - the world in 2100 looks broadly like it does now (in 2023), in the same sense that the 2023 world looks broadly like it did in 1946. That is to say, there will of course be a lot of technological and sociological change between now and then, but by the end of 2100 there still won’t have been unprecedented explosive economic growth (e.g., >30% GWP growth per year), an existential disaster, etc.
9% - the world is in a singleton state controlled by an unaligned rogue AI acting on its own initiative.
6% - the future is good for humans but our AI / post-AI society causes some other moral disaster (e.g., widespread abuse of digital minds, widespread factory farming)
5% - we get aligned AI, solve the time of perils, and have a really great future
4% - the world is in a singleton state controlled by an AI-enabled dictatorship that was initiated by some human actor misusing AI intentionally
1% - all humans are extinct due to an unaligned rogue AI acting on its own initiative
2% - all humans are extinct due to something else on this list (e.g., some other AI scenario, nukes, biorisk, unknown unknowns)
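Since these seven scenarios sum to 100%, they are treated as a mutually exclusive and exhaustive set. As a small sanity check, here is a tiny Python sketch (the scenario labels are just my paraphrases of the list above) confirming the probabilities add up:

```python
# Tentative 2100 world-state forecast from the list above (percentages).
# Purely a sanity check that the scenarios cover all of the probability mass.
forecast = {
    "broadly like today; no explosive growth or existential disaster": 73,
    "singleton controlled by an unaligned rogue AI": 9,
    "good for humans, but some other AI-related moral disaster": 6,
    "aligned AI, time of perils solved, really great future": 5,
    "singleton AI-enabled dictatorship initiated by a human actor": 4,
    "all humans extinct due to an unaligned rogue AI": 1,
    "all humans extinct due to something else (other AI, nukes, bio, unknowns)": 2,
}

total = sum(forecast.values())
assert total == 100, f"scenarios should sum to 100%, got {total}%"
print(f"Total probability mass across scenarios: {total}%")
```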
I think conditional on producing minimal menace AI by the end of 2070, there’s a 28% chance an existential risk would follow within the next 100 years that could be attributed to that AI system.
Though I don’t know how seriously you should take this, because forecasting >75 years into the future is very hard.
Also my views of this are very incomplete and in flux and I look forward to refining them and writing more about them publicly.
I am happy to see that Nick and Will have resigned from the EV Board. I still respect them as individuals, but I think this was a really good call for the EV Board, given their conflicts of interest arising from the FTX situation. I am excited to see what happens next with the Board as well as governance for EV as a whole. Thanks to all those who have worked hard on this.