Marcus A. Davis, CEO of Rethink Priorities
I would like to see more applications in the areas outlined in our RFP and I’d encourage anyone with interest in working on those topics to contact us.
More generally, I would like to see far more people and funding engaged in this area. Of course, that’s really difficult to accomplish. Outside of that, I’m not sure I’d point to anything in particular.
We don’t have a cost-effectiveness estimate of our grants. The reason is that such an estimate would likely be very difficult to produce, and while it could be useful, we’re not sure it’s worth the investment for now.
On who to be in touch with, I would suggest such a prospective student contact groups like GFI and New Harvest if they would like advice on finding advisors for this type of work.
On advice, I would generally stay away from giving career advice. If forced to answer, I would not give the general advice that everyone, or even most people, is better off attempting to do high-impact research as soon as is feasible.
I think we’re looking for promising projects, and one clear sign of that is often a track record of success. The more challenging the proposal, the more something like this might be important. However, we’re definitely open to funding people without a long track record if there are other reasons to believe the project would be successful.
Personally, I’d say good university grades alone are probably not a strong enough signal, but running or participating in successful small projects on a campus might be, particularly if the projects were similar in scope or size to what is being proposed, and/or the person had good references on their capabilities from people we trusted.
The case of a nonprofit with a suboptimal track record is harder for me in the abstract. It depends a lot on the group’s track record and just how promising we believe the project to be. If a group has an actively bad track record, failing to produce what they’ve been paid to do or producing work of negative value, I think we’d be reluctant to fund them even if they were working in an area we considered promising. If the group was middling but working in a highly promising area, I’d guess we would be more likely to fund them. However, there is obviously much grey area between these two poles, and whether we’d think such a project is worth funding really depends on the details of the proposal and the group’s track record.
We grade all applications with the same scoring system. For the prior round, after the primary and secondary investigators completed their reviews and we had all read their conclusions, each grant manager gave a score (excluding cases of conflicts of interest) from +5 to −5, with +5 being the strongest possible endorsement of positive impact, and −5 being an anti-endorsement of a grant we’d consider actively harmful to a significant degree. We then averaged across scores, approving those at the very top and dismissing those at the bottom, largely discussing only those grants around the threshold of 2.5, unless anyone wanted to actively make the case for or against something outside of these bounds (the size and scope of other grants, particularly the large grants we approve, is also discussed).
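To make the mechanics concrete, here is a minimal sketch of that averaging-and-thresholding step in Python. The function names, the abstention handling, and the ±0.5 discussion band are assumptions for illustration, not our actual tooling or exact process.

```python
# Hypothetical sketch of the grant-scoring aggregation described above.
# Scores run from -5 (actively harmful) to +5 (strongest endorsement);
# managers with a conflict of interest abstain (recorded here as None).

def average_score(scores):
    """Average the scores of non-conflicted grant managers."""
    valid = [s for s in scores if s is not None]
    return sum(valid) / len(valid)

def triage(applications, threshold=2.5):
    """Split applications into approved, discussed, and dismissed piles.

    Anything comfortably above the threshold is approved, anything well
    below it is dismissed, and grants near the threshold get discussed.
    (The +/-0.5 band is an assumption for illustration.)
    """
    approved, discuss, dismissed = [], [], []
    for name, scores in applications.items():
        avg = average_score(scores)
        if avg >= threshold + 0.5:
            approved.append((name, avg))
        elif avg <= threshold - 0.5:
            dismissed.append((name, avg))
        else:
            discuss.append((name, avg))
    return approved, discuss, dismissed

# Example: three applications scored by four managers (None = abstention).
apps = {
    "Grant A": [4, 5, 3, 4],
    "Grant B": [2, 3, None, 2],
    "Grant C": [-1, 0, 1, -2],
}
print(triage(apps))
```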
That said, in my mind, grants for research are valuable to the extent they unlock future opportunities to directly improve the welfare of animals. Of course, figuring out whether, or how much, that’s feasible for any given research grant can be very difficult. For direct work, you can, at least in theory, relatively straightforwardly try to estimate the impact on animals (or at least the range of animals impacted). We try to estimate the plausibility of success and the return in animal lives improved for both, but given these facts there are some additional things we keep in mind. Some considerations:
Path to impact for research. If the research is on, say, a certain species of fish, you can estimate how many of those fish are killed/raised/farmed per year and any trends in these figures. You could use that number of animals as a kind of upper bound on the animals it is possible to impact, before figuring out how many could plausibly be affected by the actors, aligned (on this topic) foundations, governments, or NGOs, that could plausibly act on this information. And if these parties can act, how likely is it, and how big a change would it be? (A toy numerical sketch of this kind of estimate follows this list.)
For research with more diffuse or longer-term impacts, you can attempt similar calculations or approximations. These can be difficult to assess with any precision, but this is also true of some direct work that involves field building or, say, conferences.
There are other considerations, notably that research and direct work may have different counterfactual support options depending on the topic. There may be fewer funders interested in supporting certain types of research (say, non-academic work on neglected animals) and more interested in other topics that may be more established.
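As referenced above, here is a minimal back-of-envelope sketch of the fish example. Every number below is a made-up placeholder chosen to show the shape of the calculation, not an estimate we endorse.

```python
# Hypothetical Fermi estimate of the path to impact for research on a
# farmed fish species. All inputs are illustrative placeholders.

fish_farmed_per_year = 500_000_000   # upper bound: animals that could be affected
p_actors_act = 0.10                  # chance NGOs/governments/foundations act on the findings
share_reachable = 0.20               # share of farmed fish those actors could influence
welfare_improvement = 0.05           # fraction of relevant suffering averted per animal

# Expected "fish-equivalents" of welfare improved per year, if the research succeeds.
expected_impact = (
    fish_farmed_per_year * p_actors_act * share_reachable * welfare_improvement
)
print(f"Expected annual impact: {expected_impact:,.0f} fish-equivalents improved")
# -> 500,000 under these placeholder inputs; the total farmed population
#    serves as a hard upper bound on any such estimate.
```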
I don’t think it is true that the EA AW Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants that have potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of PBM and cultivated meat relies on impacts that would be 15-100 years away, and neither I nor, I believe, the other fund managers hold any intrinsic reason to discount or not consider other areas of animal welfare that would have long-term payoffs.
That said, as you suggest in (2), I do think it makes sense for the LTFF to focus more on thinking through and funding projects premised on what would happen if AGI were to come to exist. A hypothetical grant proposal which is focused on animal welfare but depends on AGI would probably make sense for both funds to consider or consult each other on, and it would depend on the details of the grant as to whose domain we believe it ultimately falls under. We received applications at least somewhat along these lines in the prior grant round, and this is what happened.
Given the above, I think it’s fair to say we would consider grants with reasoning like in your post, but sometimes the ultimate decision for that type of grant may make more sense to be considered for funding by the LTFF.
On the question of what I think of the moral circle expansion type arguments for prioritizing animal welfare work within longtermism, I’ll speak for myself. I think you are right that the precise nature of how moral circles expand and whether such expansion is unidimensional or multidimensional is an important factor. In general, I don’t have super strong views on this issue though so take everything I say here to be stated with uncertainty.
I’m somewhat skeptical, to varying degrees, about our practical ability to test people’s attitudes about moral circle expansion reliably enough to gain the kind of confidence needed to determine whether that’s a more tractable way to influence the long run, and to determine, as you suggest it might, whether to prioritize clean meat research or advocacy against speciesism, which groups of animals to prioritize, or which subgroups of the public to target when attempting outreach. The reason for much of this skepticism (as you suggest as a possible limitation of this argument) is largely the question of transferability across domains and cultures, and the inherently wide error bars in understanding how significantly different facts of the world would impact responses to animal welfare (and everything else).
For example, supposing it were possible to develop cost-competitive clean meat in the next 30 years, I don’t know what impact that would have on human responses to wild animal welfare or insects, and I wouldn’t place much confidence, if any, in how people say their hypothetical future selves would respond to facing that dilemma in 30 years (to say nothing of their ability to predict the responses of generations not yet born to such facts). Of course, reasons like this don’t apply to all of the work you suggested doing, and, say, surveys and experiments on the existing attitudes of those actively working in AI might tell us something about whether animals (and which animals, if any) would be handled by potential AI systems. Perhaps you could use this information to decide we need to ensure non-human-like minds are considered by those at elite AI firms.
I definitely would encourage people to send us any ideas that fall into this space, as I think it’s definitely worth considering seriously.
In the just-completed round we got several applications from academics looking for support for research on plant-based and cultivated meat projects, though we ultimately decided not to support any of them. We definitely welcome grant applications in this area, and our new request for proposals explicitly calls for applications on work in this space. Additionally, I would direct them to consider applying to GFI’s alternative protein research grants and the Food Systems Research Fund, among other places, if they believe they have promising projects in this space.
On the specific reasoning, there are reasons against funding some work in this area, as there are in every area we consider, but ultimately I don’t think the general case for or against grants in this space is decisive. It’s definitely true, as you point out, that some grant requests in this area can be high relative to the median grant request, but this prior round featured five grants over $100,000. So, to me, the ultimate concern is the expected rate of return on the particular grant relative to other possible options before us. In this particular instance we didn’t fund these projects, but I definitely wouldn’t want to deter researchers with valuable ideas from applying, as I think work in this space has the potential to be extremely valuable.
All that said, I think there are some reasons other funders might be a better fit for some of this work:
Academic social science research is often a better fit for the EA research fund or Food Systems Fund because of their expertise + focus.
Academic plant-based + cultured meat research is often a better fit for the GFI fund because of their expertise + focus.
Academic farm animal welfare science research is often a better fit for Humane Slaughter Association, or a bunch of other scientific funders.
I think we should be open to funding all of the above, but I think a $1M academic grant will always be a heavy lift if we only have ~$1-2M to give away (i.e. the academic grant would be almost the whole thing).
What new charities do you want to be created by EAs?
I don’t have any strong opinions about this, and it would likely take months of work to develop them. In general, I don’t know enough to say whether it would be more desirable for new charities to work in areas I think could use more work than for existing organizations to scale up their work in those domains.
What are the biggest mistakes Rethink Priorities made?
Not doing enough early enough to figure out how to achieve impact from our work and communicate with other organizations and funders about how we can work together.
Thanks for the questions!
If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
I think this depends on many factual beliefs you hold, including which groups of creatures count and what time period you are concerned about. Restricting ourselves to the present, and assuming all plausibly sentient minds count (and ignoring extremes, say, less than a 0.1% chance of sentience), I think farmed and wild animals are plausible candidates for enduring some of the worst suffering.
Specifically, I’d say some of the worst persistent current suffering is plausibly experienced by farmed chickens and fish, and thus work to reduce the worst aspects of their conditions is a decent bet to prevent extreme suffering. Similarly, wild animals likely experience the largest share of extreme suffering currently, because of their sheer numbers and the nature of lives largely without interventions to prevent, say, the suffering of starvation or extreme physical pain. For these reasons, work to improve conditions for wild animals could plausibly be a good investment.
Still restricted to the present, and outside the typical EA space altogether, I think it’s plausible much of the worst suffering in the world is committed during war crimes or torture under various authoritarian states. I do not know if there’s anything remotely tractable in this space or what good donation opportunities would be.
If you broaden consideration to include the future, a much wider set of creatures plausibly could experience extreme suffering including digital minds running at higher speeds, and/or with increased intensity of valenced experience beyond what’s currently possible in biological creatures. Here, what you think is the best bet would depend on many empirical beliefs again. I would say, only, that I’m excited about our longtermism work and think we’ll meaningfully contribute to creating the kind of future that decreases the risks of these types of outcomes.
Thanks for the question, Edo!
We keep a large list of project ideas, and regularly add to it by asking others for project ideas, including staff, funders, advisors, and organizations in the spaces we work in.
Hey Edo, thanks for the question!
We’ve had some experience working with volunteers. In the past, when we had less operational support than we do now, we found it challenging to manage and monitor volunteers, but we think it’s something we’re better placed to handle now, so we may explore it again in the coming years, though we are generally hesitant about depending on free labor.
We’ve not really had experience publicly outsourcing questions to the EA community, but we regularly consult wider EA communities for input on questions we are working on. Finally, and I’m not sure this is what you meant, but we’ve also partnered with Metaculus on some forecasting questions.
Hey Josh, thanks for the question!
From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered.
At the operational level, we set targets as percentages of time we want to spend on each cause area based on these factors, and we re-evaluate as our existing commitments, the data, and changes in our opinions about these matters warrant.
I think it’s going great! I think our combined skill set is a big pro when reviewing work and considering project ideas. In general, bouncing ideas off each other improves and sharpens our thinking. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization.
Additionally, Peter and I get along great, and I enjoy working alongside him every day (well, digitally anyway, given we are remote).
Thanks for the question!
We hire for fairly specific roles, and the difference between those we do and don’t hire isn’t necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).
That said, we generally prioritize ability in writing, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counterpoints, and meta-considerations on a topic; to produce quantitative models and do data analysis when appropriate (obviously this is more relevant in certain roles than others); and to compile this information into understandable writing that highlights the important features and addresses topics with clarity. However, which combination of these skills is most desired at a given time depends on current team fit and the role each hire would be stepping into.
For these reasons, it’s difficult to say with precision which skills I’d hope for more of among EA researchers. With those caveats, I’d still say a demonstration of these skills through producing high quality work, be it academic or in blog posts, is in fact a useful proxy for the kinds of work we do at RP.
Thanks for the questions!
On (1), we see our work in WAW as currently doing three things: (1) foundational research (e.g., understanding moral value and sentience, understanding well-being at various stages of life), (2) investigating plausible tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (3) field building and understanding (e.g., currently we are running polls to see how “weird” the public finds WAW interventions).
We generally defer to WAI on matters of direct outreach (both academic and general public) and do not prioritize that area as much as WAI and Animal Ethics do. It’s hard to say more on how our vision differs from WAI without them commenting, but we collaborate with them a lot and we are next scheduled to sync on plans and vision in early January.
On (2), it’s hard to predict exactly what additional restricted donations do, but in general we expect them, in the long run, to increase how much we spend in a cause area by an amount similar to the amount donated. Reasons for this include: we budget on a fairly long-term basis, so we generally try to predict what we will spend in a space and then raise that much funding. If we don’t raise as much as we’d like, we will likely consider allocating our expenses differently; and if we raise more than we expected, we’d scale up our work in a cause area. Because our ability to work in spaces is influenced by how much we raise, raising more restricted funding in a space generally ought to lead to us doing more work in that space.
Ask Rethink Priorities Anything (AMA)
Thanks for the question!
I think the short answer is this: what we think of doing projects in the “improving collective understanding” space depends on a number of factors, including the nature of the project, the probability of that general change in perspective leading to changed actions in the future, and how important it would be if that change occurred.
One very simplistic model you can use to think about possible research projects in this area is:
1. Big considerations (classically “crucial considerations”, e.g. moral weight, invertebrate sentience)
2. New charities/interventions (presenting new ideas or possibilities that can be taken up)
3. Immediate influence (analysis to shift ongoing or pending projects, donations, or interventions)
It’s far easier to tie work in categories (2) or (3) to changed behavior. By contrast, projects or possible research that fall into category (1) can be very difficult to map to specific plausible changes ahead of time and, sometimes, even after the completion of the work. These projects can also be more boom-or-bust: the results of investigating them could have huge effects if we or others shift our beliefs, but they may be fairly unlikely to change beliefs at all. That said, I think these types of projects can be very valuable, and we try to dedicate some of our time to doing them.
I think it’s fair to say these types of “improving some collective understanding of prioritization” projects have been a minority of the types of projects we’ve done and that are listed for the coming year. However, there are many caveats here including but not limited to:
The nature of the project, our fit, and what others are working on have a big impact on which projects we take on. So even if, in theory, we thought a particular research idea was really worth pursuing, many factors go into whether we take on a particular project.
These types of projects have historically taken longer to complete, so they may be smaller in number but a larger share of our overall work hours than counting projects would suggest at first glance.
Rethink Priorities 2020 Impact and 2021 Strategy
Hey, I’m happy to see this on the forum! I think farmed shrimp interventions are a promising area, and this report highlights some important considerations. I should note that Rethink Priorities has also been researching this topic for a while. I won’t go into detail, as I’m not leading this work and the person who is leading it is currently on leave, but I think we’ve tentatively come to some different conclusions about the most promising next steps in this domain.
In the future, if anyone reading this is inclined to work on farmed shrimp, in addition to reviewing this report I’d hope you’d read over our forthcoming work and/or reach out to us about this area.
Given we know so little about their potential capacities and what alters their welfare, I’d suggest the factory farming of insects is potentially quite bad. However, I don’t know which methods are effective at discouraging people from consuming them, though some of the things you suggest seem like plausible paths here. I think it is pretty hard to say much about the tractability of these things without further research.
Also, we are generally keen to hear from folks who are interested in doing further work on invertebrates. And, personally, if you know of anyone interested in working on things like this I would encourage them to apply to be ED of the Insect Welfare Project.