CEO of Rethink Priorities
Marcus_A_Davis
We mean to say that the ideas for these projects and the vast majority of the funding were ours, including the moral weight work. To be clear, these projects were the result of our own initiative. They wouldn’t have gone ahead when they did without us insisting on their value.
For example, after our initial work on invertebrate sentience and moral weight in 2018-2020, in 2021 OP provided $315K to support this work. In 2023 they also provided $15K for the open-access rights to a forthcoming book on the topic. Over that 2021-2023 period, we spent another ~$603K on public-facing moral weight work, with that money coming from individuals and RP’s unrestricted funding.
Similarly, the CURVE sequence produced by WIT (our Worldview Investigations Team) this year was our idea, and we are on track to spend ~$900K against ~$210K funded by Open Phil on WIT. Of that $210K, the first $152K was for projects related to Open Phil’s internal prioritization, not the public work of the CURVE sequence. The other $58K went towards the development of the CCM. So overall less than 10% of our costs for public WIT work this year were covered by OP (and no other institutional donors were covering it either).
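To spell out the arithmetic behind that “less than 10%” figure, here’s a minimal sketch using the numbers above (the variable names are just for illustration):

```python
# Figures as stated above; names are illustrative only.
total_wit_spend = 900_000              # ~what we're on track to spend on WIT this year
op_wit_funding = 210_000               # total Open Phil funding for WIT
op_internal_prioritization = 152_000   # portion tied to OP's internal prioritization work
op_ccm_support = op_wit_funding - op_internal_prioritization  # $58K toward the CCM

share_covered_by_op = op_ccm_support / total_wit_spend
print(f"{share_covered_by_op:.1%}")    # ~6.4%, i.e. less than 10%
```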
Hey Vasco, thanks for the thoughtful reply.
I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests. I would go further and say that if you have a set of axioms and it implies something like fanaticism, this should at least potentially count against that combination of axioms. That said, I definitely think, as Hayden Wilkinson pointed out in his In Defence of Fanaticism paper, there are many weaknesses with alternatives to EV.
Also, the idea that fanaticism doesn’t come up in practice doesn’t seem quite right to me. On one level, yeah, I’ve not been approached by a wizard asking for my wallet and do not expect to be. But I’m also not actually likely to be approached by anyone threatening to money-pump me (and even if I were, I could reject the series of bets), and this is often held up as a weakness of EV alternatives or certain sets of beliefs. On another level, to the extent we can say fanatical claims don’t come up in practice, it’s in some sense because we’ve already decided it’s not worth pursuing them and discount the possibility, including the possibility of going looking for actions that would be fanatical.* Within the logic of EV, even if you thought there weren’t any ways to get the fanatical result with ~99% certainty, it would seem you’d need to be ~100% certain to fully shut the door on at least expending resources to see if you could get the fanatical option. To the extent we don’t go around doing that, I think it’s largely because we are practically rounding down those fanatical possibilities to 0 without consideration (to be clear, I think this is the right approach).
All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense
I don’t think this is true. As I said in response to Michael St. Jules in the comments, EV maximization (and EV with rounding down, unless it’s modified here too) also argues for a kind of edge-case fanaticism where, provided a high enough EV if successful, you are obligated to take an action that’s only 50.000001% positive in expectation even if the downside is similarly massive.
It’s really not clear to me that the rational thing to do is to consistently bet on actions that would impact a lot of possible lives but have, say, a ~0.0001% chance of making a difference, are net positive in expectation, and yet have a ~49.999999% chance of causing lots of harm. This seems like a problem for pure EV even within a finite and bounded utility function.
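To illustrate the kind of gamble I have in mind, here’s a toy calculation (the specific numbers are mine, chosen only to show the shape of the problem):

```python
# Toy example: a near-coin-flip gamble with enormous, roughly symmetric stakes.
# Pure EV maximization says to take it, because the expectation is (barely) positive.
p_positive = 0.50000001   # chance the action turns out net positive
upside = 1e12             # huge benefit if it goes well (arbitrary welfare units)
downside = -1e12          # comparably huge harm if it goes badly

expected_value = p_positive * upside + (1 - p_positive) * downside
print(expected_value)     # ~ +20,000: positive in expectation despite ~50% odds of massive harm
```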
I am confused about why RP is still planning to invest significant resources in global health and development… Maybe a significant fraction of RP’s team believes non-hedonic benefits to be a major factor?
I’ve not polled internally, but I don’t think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonism makes up at least more than half of what makes things valuable, at least in part for the reasons outlined in that post.
The reasons we work across areas in general are because of differences in the amount of money in the areas, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.
In this particular comparison of GHD and AW, there are hundreds of millions more plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn’t going to shift their resources to animal welfare, but they still move a lot of money and could do so more effectively in certain cases. GiveWell alone likely moves more money than all of the farm animal welfare spending in the world by non-governmental actors combined, and that figure includes a large number of animal welfare actors I don’t think it’s plausible to affect with research. Further, I think most people who work in most spaces aren’t “cause neutral” and, for example, the counterfactual for all our GHD researchers isn’t being paid by RP to do AW research that influences even a fraction of the money they could influence in GHD.
Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, and this was not true of, say, any of the x-risk interventions we considered in the series or of some of the animal interventions. So one additional reason to do GHD work is the robustness of the value proposition.
Ultimately, though, I’m still unsure about what the right overall approach is to these types of trade-offs and I hope further work from WIT can help clarify how best to make these tradeoffs between areas.
*A different way to resist this conclusion is to assert that you must drop your probability in claims of astronomical value, and that this always balances out increases in claimed value such that it’s never rational within EV to act on these claims. I’m not certain this is wrong but, like with other approaches to this issue, within the logic of EV it seems you need to be at ~100% certainty that this is correct to not pursue fanatical claims anyway. You could say in reply that the rules of EV reasoning don’t apply to claims about how you should reason about EV itself, and maybe that’s right and true. But these sure seem like patches on a theory with weaknesses, not clear truths anyone is compelled to accept on pain of being irrational. Kludges and patches on theories are fine enough. It’s just not clear to me this possible move is superior to, say, just biting the bullet that you need to round down to avoid this type of outcome.
Thanks for the engagement, Michael.
I largely agree with your notes and caveats.
However, on this:
Expected utility maximization can be guaranteed to avoid fanaticism while satisfying the standard EUT axioms (and countable extensions), with a bounded utility function and the bounds small enough or marginal returns decreasing fast enough, in relative terms… In my view, expected utility with a bounded utility function (not difference-making) is the most instrumentally rational of the options, and it and boundedness with respect to differences seem the most promising, but have barely been discussed in the sequence (if at all?). I would recommend exploring these options more.
I’m definitely in favor of exploring a variety of further options. We didn’t explore all possible options in this series, and I think we could, in theory, spend a lot more time investigating possibilities, including some of the combinations of theories and the more edge-case versions of particular views, like WLU, that you lay out.
However, while I think it is plausible EV could avoid some versions of fanaticism that way, it still seems vulnerable to a closely related issue, like the following.
It seems there are actually two places for EV where rounding down or bound setting needs to happen to avoid issues with particularly risky gambles: (1) for really low probabilities (e.g., 1 in 100 trillion) with really high payoffs, and (2) around the 50% line distinguishing actions that lean net positive from those that are neutral or negative in expectation. Conceptually these are very similar, but practically there may be different implications for handling them.
While it seems a bounded utility function that assigns steeply declining marginal returns could avoid the fanaticism of (1) (though this itself creates counterintuitive results), it doesn’t seem like this type of solution alone would resolve (2), where the decision point is whether something is net positive but possibly only barely so. That is, there are many choices where the sign of the action is uncertain, and this applies, among other things, to x-risk interventions that have the possibility of a very large expected utility if the action succeeds. Practically, these types of choices seem likely to be very common for charitable actors.
If, despite a really large expected utility in your bounded function, you don’t think we should always take an action that is only, say, 50.0001% positive in expectation, then you think something has gone awry in EV, and you wind up in a very similar place with regard to being “mugged” by high-value outcomes that are not just unlikely to pay out but almost equally likely to cause harm. And it doesn’t seem that bounds designed for avoiding really low-probability but high-EV outcomes will help you avoid this.
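As a rough illustration of why I think bounds aimed at (1) don’t touch (2), here’s a sketch with a bounded utility function of my own choosing (tanh is just a stand-in, not anything from your proposal):

```python
import math

# A bounded utility function: tanh squashes any outcome into (-1, 1).
# This tames astronomically large payoffs (issue 1), but because it treats
# symmetric upsides and downsides symmetrically, a gamble that is only barely
# more likely to be good than bad still comes out positive (issue 2 remains).
def bounded_utility(outcome, scale=1e9):
    return math.tanh(outcome / scale)

p_good = 0.500001
good_outcome, bad_outcome = 1e12, -1e12   # huge, symmetric stakes

expected_utility = (p_good * bounded_utility(good_outcome)
                    + (1 - p_good) * bounded_utility(bad_outcome))
print(expected_utility)   # small but positive: bounded EU still says take the gamble
```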
To be clear, I haven’t reasoned this out entirely, and I will just preemptively grant it’s possible you could create a different “bound” that would act not just on small probabilities, but also on these edge cases where EU suggests taking these types of gambles. But if you do that, it looks a lot like you are introducing a difference-making criterion into your decision theory. To the extent you may think this type of modified EU is viable, it is because it mimics the aversion of these other theories to certain types of uncertainty.
Basically, I’m actually not confident that this type of modification should matter much for us. The axiom choices matter here for deciding which theory to put the most weight on, but I’m unsure this type of distinction buys you much practically if, say, after you make those choices you still end up with a set of theoretical options that look in practice like pure EV vs. EV with rounding down vs. something like WLU vs. something like REU.
EDIT: grammar fix.
In trying to convince people to support global health charities I don’t think I’ve ever gotten the objection “but people in other countries don’t matter” or “they matter far less than Americans”, while I expect vegan advocates often hear that about animals.
I have gotten the latter one explicitly and the former implicitly, so I’m afraid you should get out more often :).
More generally, the view that foreigners and/or immigrants don’t matter, or matter little compared to native-born locals, is fundamental to political parties around the world. It’s a banal take in international politics. Sure, some opposition to global health charities is an implied or explicit empirical claim about the role of government. But fundamentally, not all of it is: a lot of people don’t value the lives of the out-group, and people not in your country are in the out-group (or at least not in the in-group) for much of the world’s population.
First, I think GiveWell’s research, say, is mostly consumed by people who agree people matter equally regardless of which country they live in.
GiveWell donors are not representative of all humans. I think a large fraction of humanity would select the “we’re all equal” option on a survey but clearly don’t actually believe it or act on it (which brings us back to revealed preferences in trades like those humans make about animal lives).
But even if none of that is true, were someone to make this argument about the value of the global poor, the best moral (I make no claims about what’s empirically persuasive) response is “make a coherent and defensible argument against the equal moral worth of humans including the global poor”, and not something like “most humans actually agree that the global poor have equal value so don’t stray too far from equality in your assessment.” If you do the latter, you are making a contingent claim based on a given population at a given time. To put it mildly, for most of human history I do not believe we even would have gotten people to half-heartedly select the “moral equality for all humans” option on a survey. For me at least, we aren’t bound in our philosophical assessment of value by popular belief here or for animal welfare.
David’s post is here: Perceived Moral Value of Animals and Cortical Neuron Count
What do you think of this rephrasing of your original argument:
I suspect people rarely get deeply interested in the value of foreign aid unless they come in with an unusually high initial intuitive view that being human is what matters, not being in my country… If you somehow could convince a research group, not selected for caring about non-Americans, to pursue this question in isolation, I’d predict they’d end up with far less foreign aid-friendly results.
I think this argument is very bad and I suspect you do too. You can rightfully point out that, in this context, someone starting out at the 5th percentile before going into a foreign aid investigation and then determining foreign aid is much more valuable than the general population thinks would be, in some sense, stronger evidence than if they had instead started at the 95th percentile. However, that seems not super relevant. What’s relevant is whether it is defensible at all, on a question of values like this, to norm researchers’ conclusions to a population’s views based on where the researchers started out (that, or whether there is some disanalogy between this case and animals).
Generally, I think the typical American, when faced with real trade-offs (and they actually are faced with these trade-offs implicitly as part of a package vote), doesn’t value the lives of the global poor equally to the lives of their fellow Americans. More importantly, I think you shouldn’t norm where your values on global poverty end up after investigation back to what the typical American thinks. I think you should weigh the empirical and philosophical evidence about how to value the lives of the global poor directly and not do too much, if any, reference class checking against other people’s views on the topic. The same argument holds for whether and how much we should value people 100 years from now, after accounting for empirical uncertainty.
Fundamentally, the question isn’t what people substantively do think (except for practical purposes); the question is what beliefs are defensible after weighing the evidence. I think it’s fine to be surprised by what RP’s moral weight work says on capacity for welfare, and I think there is still high uncertainty in this domain. I just don’t think either of our priors, or the general population’s priors, about the topic should be taken very seriously.
Maybe. We’re a little unsure about this right now. The code base for this is part of the bigger Cross-Cause Cost-Effectiveness Model, and we haven’t made a final determination on whether we will release it.
Jeff, are you saying you think “an intuition that a human year was worth about 100-1000 times more than a chicken year” is a starting point of “unusually pro-animal views”?
In some sense, this seems true relative to the views most humans imply through their actions. But, as Wayne pointed out above, this same critique could apply to, say, the typical American’s views about global health and development. Generally, it doesn’t seem to buy much to frame things relative to people who’ve never thought about a given topic substantively, and I don’t think you’d consider this a good critique of a foreign aid think tank looking into how much to value global health and development.
Maybe you are making a different point here?
Also, it would help more if you were being explicit about what you think a neutral baseline is. What would you consider more typical or standard views about animals from which to update? Moment to moment human experience is worth 10,000x that of a chicken conditional on chickens being sentient? 1,000,000x? And, whatever your position, why do you think that is a more reasonable starting point?
Thanks for the question, but unfortunately we cannot share more about those involved or the total.
I can say we’re confident this unlocked millions for something that otherwise wouldn’t have happened. We think maybe half of the money moved would not have been spent, and some lesser amount would have been spent on less promising opportunities from an EA perspective.
Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on some rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:
We try to identify really high quality hires, bring them on, train them up and trust them to execute their jobs.
We seek feedback from our staff, and proactively seek to improve any processes that aren’t working.
We try to follow research and management best practices, and gather ideas on these fronts from organizations and leaders that have previously been successful.
We try to make RP a genuinely pleasant place to work for everyone on our staff.
As to your idea that RP’s success might come down to high founder quality, I think Peter and I try very hard to do the best we can, but, in part due to survivorship bias, it’s difficult for me to say that we have any extraordinary skills others don’t possess. I’ve met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it’s some combination of these traits, luck, and good timing that has led us to where we are today.
Thanks for the question! I think describing the current state will hint at a lot on what might make us change the distribution, so I’m primarily going to focus on that.
I think the current distribution of what we work on is dependent on a number of factors, including but not limited to:
What we think about research opportunities in each space
What we think about the opportunity to exert meaningful influence in the space
Funding opportunities
Our ability to hire people
In a sense, I think we’re cause neutral in that we’d be happy to work on any cause provided good opportunities arise to do so. We do have opinions on high-level cause prioritization (though I know there’s some disagreement inside RP about this topic), but given how the marginal value of additional work in any given area changes with the above considerations, and others, we meld our work (and staff) to where we think we can have the highest impact.
In general, though this is fairly generic and high level, were we to come to think our work in a given area wasn’t useful, or that the opportunity cost of continuing it were too high, we would decide to pursue other things. Similarly, if the reverse were true for some particular possible projects we weren’t working on, we would take them on.
Given we know so little about their potential capacities and what alters their welfare, I’d suggest the factory farming of insects is potentially quite bad. However, I don’t know what methods are effective at discouraging people from consuming them, though some of the things you suggest seem plausible paths here. I think it is pretty hard to say much on the tractability of these things without further research.
Also, we are generally keen to hear from folks who are interested in doing further work on invertebrates. And, personally, if you know of anyone interested in working on things like this I would encourage them to apply to be ED of the Insect Welfare Project.
I would like to see more applications in the areas outlined in our RFP and I’d encourage anyone with interest in working on those topics to contact us.
More generally, I would like to see far more people and funding engaged in this area. Of course, that’s really difficult to accomplish. Outside of that, I’m not sure I’d point to anything in particular.
We don’t have a cost-effectiveness estimate for our grants. The reason is that one would likely be very difficult to produce, and while it could be useful, we’re not sure it’s worth the investment for now.
On who to be in touch with, I would suggest such a prospective student get in touch with groups like GFI and New Harvest if they would like advice on finding advisors for this type of work.
On advice, I would generally stay away from giving career advice. If forced to answer, I would not give general advice that everyone or most people are better off attempting to do high-impact research as soon as is feasible.
I think we’re looking for promising projects and one clear sign of that is often a track-record of success. The more challenging the proposal, the more something like this might be important. However, we’re definitely open to funding people without a long track record if there are other reasons to believe the project would be successful.
Personally, I’d say good university grades alone are probably not a strong enough signal, but running or participating in successful small projects on a campus might be, particularly if the projects were similar in scope or size to what was being proposed and/or the person had good references on their capabilities from people we trusted.
The case of a nonprofit with a suboptimal track record is harder for me in the abstract. I think it depends a lot on the group’s track record and just how promising we believe the project to be. If a group has an actively bad track record, failing to produce what they’ve been paid to do or producing work of negative value, I’d think we’d be reluctant to fund them even if they were working in an area we considered promising. If the group was middling, but working in a highly promising area, I’d guess we would be more likely to fund them. However, there is obviously much grey area between these two poles and I think it really depends on the details of the proposal and track record of the group in determining whether we’d think such a project would be worth funding.
We grade all applications with the same scoring system. For the prior round, after the review by the primary and secondary investigators, and once we had all read their conclusions, each grant manager gave a score (excluding cases of conflicts of interest) from +5 to −5, with +5 being the strongest possible endorsement of positive impact and −5 being an anti-endorsement of a grant that would be actively harmful to a significant degree. We then averaged across scores, approving those at the very top and dismissing those at the bottom, and largely discussed only those grants around the threshold of 2.5, unless anyone wanted to actively make the case for or against something outside of these bounds (the size and scope of other grants, particularly the large grants we approve, is also discussed).
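For concreteness, here’s a minimal sketch of how that scoring and averaging might look (the applications, scores, and the one-point “discussion band” around the threshold are invented for illustration; only the −5 to +5 scale and the 2.5 threshold come from the description above):

```python
from statistics import mean

# Hypothetical scores from four grant managers, each in the range -5..+5;
# None marks a manager who abstained due to a conflict of interest.
applications = {
    "Grant A": [4, 5, 3, None],
    "Grant B": [3, 2, 3, 2],
    "Grant C": [-1, 0, 1, 0],
}

THRESHOLD = 2.5        # from the description above
DISCUSSION_BAND = 1.0  # assumed width of the "discuss in detail" region

for name, scores in applications.items():
    avg = mean(s for s in scores if s is not None)
    if avg >= THRESHOLD + DISCUSSION_BAND:
        status = "approve (near the top)"
    elif avg <= THRESHOLD - DISCUSSION_BAND:
        status = "dismiss (near the bottom)"
    else:
        status = "discuss (around the threshold)"
    print(f"{name}: average {avg:.2f} -> {status}")
```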
That said, in my mind, grants for research are valuable to the extent they unlock future opportunities to directly improve the welfare of animals. Of course, figuring out whether, or how much, that’s feasible with any given research grant can be very difficult. For direct work, you can, at least in theory, relatively straightforwardly try to estimate the impact on animals (or at least the range of animals impacted). We try to estimate the plausible success and return in animal lives improved for both, but given these facts there are some additional things I think we keep in mind. Some considerations:
Path to impact for research. If the research is on, say, a certain species of fish, you can estimate how many of those fish are killed/raised/farmed per year and any trends in these figures. You could use that number of animals as a kind of upper bound on the animals it is possible to impact, before figuring out how many could plausibly be affected by actors, that is, aligned (on this topic) foundations, governments, or NGOs that could plausibly act on this information. And if these parties can act, how likely is it, and how big a change would it be? (See the rough sketch after these considerations.)
For research with more diffuse or longer-term impacts, you can attempt similar calculations or approximations. It can be difficult to assess these with any precision, but this is also true of some direct work that involves field-building or, say, conferences.
There are other considerations, notably that research and direct work may have different counterfactual support options depending on the topic. There may be fewer funders interested in supporting certain types of research (say, non-academic work on neglected animals) and more interested in other topics that may be more established.
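As a rough illustration of the path-to-impact estimate sketched in the first consideration above, here’s a Fermi-style calculation (every number is a placeholder I made up, not a figure from any actual grant review):

```python
# Fermi-style sketch of the "path to impact" estimate for a research grant.
# All numbers are placeholders for illustration only.
fish_farmed_per_year = 500e6  # upper bound: animals of the target species farmed annually
share_reachable = 0.10        # fraction raised by producers the research could plausibly inform
p_actors_act = 0.20           # chance aligned NGOs/foundations/governments act on the findings
welfare_improvement = 0.30    # rough per-animal size of the welfare change if they do act

expected_animals_helped = (fish_farmed_per_year * share_reachable
                           * p_actors_act * welfare_improvement)
print(f"~{expected_animals_helped:,.0f} animal-life-equivalents improved per year")
```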
I don’t think it is true that the EA AW Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants that have potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of PBM and cultivated meat relies on impacts that would be 15-100 years away, and neither I nor, I believe, the other fund managers hold any intrinsic reason to discount or not consider other areas of animal welfare that would have long-term payoffs.
That said, as you suggest in (2), I do think it is true that it makes sense for the LTFF to focus more on thinking through and funding projects that involve what would happen assuming AGI comes to exist. A hypothetical grant proposal which is focused on animal welfare but depends on AGI would probably make sense for both funds to consider or consult each other on, and it would depend on the details of the grant whose domain we believe it ultimately falls under. We received applications at least somewhat along these lines in the prior grant round, and this is what happened.
Given the above, I think it’s fair to say we would consider grants with reasoning like in your post, but sometimes the ultimate decision for that type of grant may make more sense to be considered for funding by the LTFF.
On the question of what I think of the moral circle expansion type arguments for prioritizing animal welfare work within longtermism, I’ll speak for myself. I think you are right that the precise nature of how moral circles expand and whether such expansion is unidimensional or multidimensional is an important factor. In general, I don’t have super strong views on this issue though so take everything I say here to be stated with uncertainty.
I’m somewhat skeptical, to varying degrees, about the practical ability to test people’s attitudes about moral circle expansion in a reliable enough way to gain the kind of confidence needed to determine whether that’s a more tractable way to influence the long run, and thereby to determine, as you suggest it might, whether to prioritize clean meat research or advocacy against speciesism, which groups of animals to prioritize, or which subgroups of the public to target if attempting outreach. The reason for much of this skepticism (as you suggest as a possible limitation of this argument) is largely the question of transferability across domains and cultures, and the inherently wide error bars in understanding how significantly different facts of the world would impact responses to animal welfare (and everything else).
For example, supposing it would be possible to develop cost-competitive clean meat in the next 30 years, I don’t know what impact that would have on human responses to wild animal welfare or insects, and I wouldn’t place much confidence, if any, in how people say they would respond to their hypothetical future selves facing that dilemma in 30 years (to say nothing of their ability to predict how generations not yet born would respond to such facts). Of course, reasons like this don’t apply to all of the work you suggested doing, and, say, surveys and experiments on the existing attitudes of those actively working in AI might tell us something about whether animals (and which animals, if any) would be handled by potential AI systems. Perhaps you could use this information to decide whether we need to ensure non-human-like minds are considered by those at elite AI firms.
I definitely would encourage people to send us any ideas that fall into this space, as I think it’s definitely worth considering seriously.
In the just-completed round we got several applications from academics looking to support research on plant-based and cultivated meat projects, though we ultimately decided not to support any of them. We definitely welcome grant applications in this area, and our new request for proposals explicitly calls for applications on work in this space. Additionally, I would direct such applicants to consider applying to GFI’s alternative protein research grants and the Food Systems Research Fund, among other places, if they believe they have promising projects in this space.
On the specific reasoning, there are reasons against funding some work in this area, as there are in every area we consider, but ultimately I don’t think the general case for or against grants in this space is decisive. It’s definitely true, as you point out, that some grant requests in this area can be high relative to the median grant request, but this prior round featured five grants over $100,000. So, to me, the ultimate concern is the expected rate of return on the particular grant relative to other possible options before us. In this particular instance we didn’t fund any of these projects, but I definitely wouldn’t want to deter researchers with valuable ideas from applying, as I think work in this space has the potential to be extremely valuable.
All that said, I think there are some reasons other funders might be a better fit for some of this work:
Academic social science research is often a better fit for the EA research fund or Food Systems Fund because of their expertise + focus.
Academic plant-based + cultured meat research is often a better fit for the GFI fund because of their expertise + focus.
Academic farm animal welfare science research is often a better fit for Humane Slaughter Association, or a bunch of other scientific funders.
I think we should be open to funding all of the above, but I think a $1M academic grant will always be a heavy lift if we only have ~$1-2M to give away (i.e. the academic grant would be almost the whole thing).
What new charities do you want to be created by EAs?
I don’t have any strong opinions about this and it would likely take months of work to develop them. In general, I don’t know enough to suggest that it is more desirable for new charities to work in areas I think could use more work than for existing organizations to scale up their work in those domains.
What are the biggest mistakes Rethink Priorities did?
Not doing enough early enough to figure out how to achieve impact from our work and communicate with other organizations and funders about how we can work together.
Hey Saulius,
I’m very sorry that you felt that way – that wasn’t our intention. We aren’t going to get into the details of your resignation in public, but as you mention in your follow up comment, neither this incident, nor our disagreement over WAW views were the reason for your resignation.
As you recall, you did publish your views on wild animal welfare publicly. Because RP leadership was not convinced by the reasoning in your piece, we rejected your request to publish it under the RP byline as an RP article representative of an RP position. This decision was based on the work itself; OP was not at all a factor involved in this decision. Moreover, we made no attempt to censor your views or prevent them from being shared (indeed I personally encouraged you to publish the piece if you wanted).
To add some additional context without getting into the details of this specific scenario, we can share some general principles about how we approach donor engagement.
We have ~40 researchers working across a variety of areas. Many of them have views about what we should do and what research should be done. By no means do we expect our staff to publicly or privately agree with the views of leadership, let alone with our donors. Still, we have a donor engagement policy outlining how we like to handle communication with donors.
One relevant dimension is that we think that if one of our researchers, especially while representing RP, is sending something to a funder with the plausible implication that one of the main funders of a department should seriously reduce or stop funding that department, we should know in advance that they are planning to do so, and roughly what is being said, so that we can be prepared. While we don’t want to be seen as censoring our researchers, we do think it’s important to approach these sorts of things with clarity and tact.
There are also times when we think it is important for RP to speak with a unified voice to our most important donors and represent a broader, coordinated consensus on what we think. Or, if minority views of one of our researchers that RP leadership disagrees with are to be considered, this needs to be properly contextualized and coordinated so that we can interact with our donors with full knowledge of what is being shared with them (for example, we don’t want to accidentally convey that the view of a single member of staff represents RP’s overall position).
With regard to cause prioritization, funders don’t filter or factor into our views in any way. They haven’t been involved in any way in setting what we do or don’t say in our cause prioritization work. Further, as far as I’m aware, OP hasn’t adopted the kind of approach we’ve suggested in any of our major cause prioritization work, whether on moral weights or in the CURVE sequence.