Marcus_A_Davis (Co-CEO of Rethink Priorities)
We mean to say that the ideas for these projects, including the moral weight work, and the vast majority of the funding were ours. To be clear, these projects were the result of our own initiative; they wouldn’t have gone ahead when they did without us insisting on their value.
For example, after our initial work on invertebrate sentience and moral weight in 2018-2020, OP funded $315K in 2021 to support this work. In 2023 they also funded $15K for the open access rights to a forthcoming book based on the topic. Over that same 2021-2023 period, we spent another ~$603K on public-facing moral weight work, with that money coming from individuals and RP’s unrestricted funding.
Similarly, WIT’s CURVE sequence this year was our idea, and we are on track to spend ~$900K against ~$210K funded by Open Phil on WIT. Of that $210K, the first $152K went to projects related to Open Phil’s internal prioritization, not the public work of the CURVE sequence; the other $58K went towards the development of the CCM. So overall less than 10% of our costs for public WIT work this year was covered by OP (and no other institutional donors were covering it either).
(Comment on “Rethink Priorities needs your support. Here’s what we’d do with it.”, 11 Dec 2023, 23 points)
Thanks for the question!
I think the short answer is that what we think of doing projects in the “improving the collective understanding” space depends on a number of factors, including the nature of the project, the probability of a general change in perspective leading to changed actions in the future, and how important it would be if that change occurred.
One very simplistic model you can use to think about possible research projects in this area is:
1. Big considerations (classically “crucial considerations”, e.g., moral weight, invertebrate sentience)
2. New charities/interventions (presenting new ideas or possibilities that can be taken up)
3. Immediate influence (analysis to shift ongoing or pending projects, donations, or interventions)
It’s far easier to tie work in categories (2) or (3) to changed behavior. By contrast, projects that fall into (1) can be very difficult to map to specific plausible changes ahead of time and, sometimes, even after the completion of the work. These projects are also more likely to be boom or bust: the results of investigating them could have huge effects if we or others shift our beliefs, but it can be fairly unlikely that they change beliefs at all. That said, I think these types of projects can be very valuable and we try to dedicate some of our time to them.
I think it’s fair to say these types of “improving the collective understanding of prioritization” projects have been a minority of the projects we’ve done and of those listed for the coming year. However, there are many caveats here, including but not limited to:
The nature of the project, our fit, and what others are working on have a big impact on which projects we take on. So even if, in theory, we thought a particular research idea was really worth pursuing, many factors go into whether we take on a particular project.
These types of projects have historically taken longer to complete, so they may be smaller in number but make up a larger share of our overall work hours than a simple project count would suggest.
(Comment on “Ask Rethink Priorities Anything (AMA)”, 15 Dec 2020, 4 points)
Hey Vasco, thanks for the thoughtful reply.
I do find fanaticism problematic at a theoretical level, since it suggests spending all your time and resources on quixotic quests. I would go one step further and say that if a set of axioms implies something like fanaticism, this should at least potentially count against that combination of axioms. That said, I definitely think, as Hayden Wilkinson pointed out in his In Defence of Fanaticism paper, there are many weaknesses with alternatives to EV.
Also, the idea that fanaticism doesn’t come up in practice doesn’t seem quite right to me. On one level, yes, I’ve not been approached by a wizard asking for my wallet and do not expect to be. But I’m also not actually likely to be approached by anyone threatening to money-pump me (and even if I were, I could reject the series of bets), and this is often held up as a weakness of EV alternatives or certain sets of beliefs. On another level, to the extent we can say fanatical claims don’t come up in practice, it is in some sense because we’ve already decided it’s not worth pursuing them and discount the possibility, including the possibility of going looking for actions that would be fanatical.* Within the logic of EV, even if you were ~99% certain there weren’t any ways to get the fanatical result, it would seem you’d need to be ~100% certain to fully shut the door on at least expending resources to see whether you could get the fanatical option. To the extent we don’t go around doing that, I think it’s largely because we are practically rounding those fanatical possibilities down to 0 without consideration (to be clear, I think this is the right approach).
All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense
I don’t think this is true. As I said in response to Michael St. Jules in the comments, EV maximization (and EV with rounding down, unless it’s modified here too) also argues for a kind of edge-case fanaticism where, provided a high enough EV if successful, you are obligated to take an action with a 50.000001% chance of a massively positive outcome even if the downside is similarly massive.
It’s really not clear to me that the rational thing to do is to consistently bet on actions that would impact a lot of possible lives and are net positive in expectation, but have, say, a ~0.0001% chance of making a difference and a ~49.999999% chance of causing lots of harm. This seems like a problem even within a finite and bounded utility function for pure EV.
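To make the edge-case worry concrete, here is a toy calculation (the numbers are made up for illustration, not drawn from the CURVE series). Suppose an action yields value $V$ with probability 0.50000001 and $-V$ with probability 0.49999999. Then

$$\mathbb{E}[\text{value}] = 0.50000001\,V - 0.49999999\,V = 2\times10^{-8}\,V > 0,$$

so pure EV maximization endorses the bet for arbitrarily large $V$, however massive the near-coin-flip downside becomes.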
I am confused about why RP is still planning to invest significant resources in global health and development… Maybe a significant fraction of RP’s team believes non-hedonic benefits to be a major factor?
I’ve not polled internally, but I don’t think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonic goods make up more than half of what makes things valuable, at least in part for the reasons outlined in that post.
We work across areas in general because of differences in the amount of money in each area, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.
In this particular comparison of GHD and AW, there are hundreds of millions more plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn’t going to shift their resources to animal welfare, but they still move a lot of money and could do so more effectively in certain cases. GiveWell alone likely moves more money than all of the farm animal welfare spending in the world by non-governmental actors combined, and that figure includes a large number of animal funders I think it’s not plausible to affect with research. Further, most people who work in most spaces aren’t “cause neutral”: for example, the counterfactual for our GHD researchers isn’t being paid by RP to do AW research that influences even a fraction of the money they could influence in GHD.
Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, which was not true of, say, any of the x-risk interventions we considered in the series, or of some of the animal interventions. So one additional reason to do GHD work is the robustness of the value proposition.
Ultimately, though, I’m still unsure what the right overall approach to these types of trade-offs is, and I hope further work from WIT can help clarify how best to make trade-offs between areas.
*A different way to resist this conclusion is to assert that you must drop your probability in claims of astronomical value, and that this always balances out increases in claimed value such that it’s never rational within EV to act on such claims. I’m not certain this is wrong, but, as with other approaches to this issue, within the logic of EV it seems you need to be ~100% certain it is correct to not pursue fanatical claims anyway. You could reply that the rules of EV reasoning don’t apply to claims about how you should reason about EV itself, and maybe that’s right and true. But these sure seem like patches on a theory with weaknesses, not clear truths anyone is compelled to accept on pain of being irrational. Kludges and patches on theories are fine enough; it’s just not clear to me this move is superior to, say, just biting the bullet that you need to round down to avoid this type of outcome.
Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:
We try to identify really high-quality hires, bring them on, train them up, and trust them to execute their jobs.
We seek feedback from our staff, and proactively seek to improve any processes that aren’t working.
We try to follow research and management best practices, and gather ideas on these fronts from organizations and leaders that have previously been successful.
We try to make RP a genuinely pleasant place to work for everyone on our staff.
As to your idea that RP’s success might be explained by high founder quality, I think Peter and I try very hard to do the best we can, but in part due to survivorship bias it’s difficult for me to say that we have any extraordinary skills others don’t possess. I’ve met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it’s some combination of these traits, luck, and good timing that has led us to where we are today.
Thanks for the question!
We hire for fairly specific roles, and the difference between those we do and don’t hire isn’t necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).
That said, we generally prioritize ability in writing, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counter-points, and meta-considerations on a topic; to produce quantitative models and do data analysis when appropriate (obviously this is more relevant in certain roles than others); and to compile this information into understandable writing that highlights the important features and addresses topics with clarity. However, which combination of these skills is most desired at a given time depends on current team fit and the role each hire would be stepping into.
For these reasons, it’s difficult to say with precision which skills I’d hope for more of among EA researchers. With those caveats, I’d still say a demonstration of these skills through producing high-quality work, be it academic or in blog posts, is in fact a useful proxy for the kinds of work we do at RP.
Thanks for the questions!
On (1), we see our work in WAW as currently doing three things: (a) foundational research (e.g., understanding moral value and sentience, and understanding well-being at various stages of life), (b) investigating plausible tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (c) field building and understanding (e.g., we are currently running polls to see how “weird” the public finds WAW interventions).
We generally defer to WAI on matters of direct outreach (both academic and general public) and do not prioritize that area as much as WAI and Animal Ethics do. It’s hard to say more on how our vision differs from WAI without them commenting, but we collaborate with them a lot and we are next scheduled to sync on plans and vision in early January.
On (2), it’s hard to predict exactly what additional restricted donations do, but in general we expect them to increase how much we spend in a cause in the long run by an amount similar to how much is donated. The reasons for this include: we budget on a fairly long-term basis, so we generally try to predict what we will spend in a space and then raise that much funding. If we don’t raise as much as we’d like, we would likely consider allocating our expenses differently; and if we raise more than we expected, we’d scale up our work in that cause area. Because our ability to work in spaces is influenced by how much we raise, raising more restricted funding in a space ought generally to lead to us doing more work in that space.
David’s post is here: Perceived Moral Value of Animals and Cortical Neuron Count
What do you think of this rephrasing of your original argument:
I suspect people rarely get deeply interested in the value of foreign aid unless they come in with an unusually strong initial intuition that being human is what matters, not being in my country… If you somehow could convince a research group, not selected for caring about non-Americans, to pursue this question in isolation, I’d predict they’d end up with far less foreign-aid-friendly results.
I think this argument is very bad, and I suspect you do too. You can rightfully point out that someone who starts at the 5th percentile before going into a foreign aid investigation and then determines foreign aid is much more valuable than the general population thinks provides, in some sense, stronger evidence than someone who instead started at the 95th percentile. However, that seems not super relevant. What’s relevant is whether it is defensible at all to norm a group’s conclusions to a population’s views on a question of values like this (that, or whether there is some disanalogy between this case and animals).
Generally, I think the typical American, when faced with real tradeoffs (and they actually are faced with these tradeoffs implicitly as part of a package vote), doesn’t value the lives of the global poor equally to the lives of their fellow Americans. More importantly, I don’t think you should norm where your values on global poverty end up after investigation back to what the typical American thinks. I think you should weigh the empirical and philosophical evidence about how to value the lives of the global poor directly, and not do much, if any, reference-class checking against other people’s views on the topic. The same argument holds for whether and how much we should value people 100 years from now, after accounting for empirical uncertainty.
Fundamentally, the question isn’t what people substantively do think (except for practical purposes); the question is what beliefs are defensible after weighing the evidence. I think it’s fine to be surprised by what RP’s moral weight work says about capacity for welfare, and I think there is still high uncertainty in this domain. I just don’t think either of our priors, or the general population’s priors, on the topic should be taken very seriously.
Jeff, are you saying you think “an intuition that a human year was worth about 100-1000 times more than a chicken year” is a starting point of “unusually pro-animal views”?
In some sense, this seems true relative to the views most humans imply by their actions. But, as Wayne pointed out above, the same critique could apply to, say, the typical American’s views about global health and development. Generally, it doesn’t seem to buy much to frame things relative to people who’ve never thought substantively about a given topic, and I don’t think you’d consider this a good critique of a foreign aid think tank looking into how much to value global health and development.
Maybe you are making a different point here?
Also, it would help if you were explicit about what you think a neutral baseline is. What would you consider more typical or standard views about animals from which to update? That moment-to-moment human experience is worth 10,000x that of a chicken, conditional on chickens being sentient? 1,000,000x? And, whatever your position, why do you think it is a more reasonable starting point?
Thanks for the questions!
If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
I think this depends on many factual beliefs you hold, including which groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (ignoring extreme possibilities, say, those with less than a 0.1% chance), I think farm and wild animals are plausible candidates for enduring some of the worst suffering.
Specifically, I’d say some of the worst persistent current suffering is plausibly in farmed chickens and fish, and thus work to reduce the worst aspects of those conditions is a decent bet to prevent extreme suffering. Similarly, wild animals likely experience the largest share of extreme suffering currently, because of their sheer numbers and the nature of life largely without interventions to prevent, say, starvation or extreme physical pain. For these reasons, work to improve conditions for wild animals could plausibly be a good investment.
Still restricted to the present, and outside the typical EA space altogether, I think it’s plausible much of the worst suffering in the world is committed during war crimes or torture under various authoritarian states. I do not know if there’s anything remotely tractable in this space or what good donation opportunities would be.
If you broaden consideration to include the future, a much wider set of creatures could plausibly experience extreme suffering, including digital minds running at higher speeds and/or with an intensity of valenced experience beyond what’s currently possible in biological creatures. Here, what you think is the best bet would again depend on many empirical beliefs. I would say only that I’m excited about our longtermism work and think we’ll meaningfully contribute to creating the kind of future that decreases the risks of these types of outcomes.
Thanks for the question. We have forthcoming work on ballot initiatives which will hopefully be published in January and other work that we plan to keep unpublished (though accessible to allies) for the foreseeable future.
In addition, we have some plans to investigate potentially high value policies for animal welfare.
On CE’s work, we communicate with them fairly regularly about their work and their plans, in addition to reading and considering the outputs of their work.
Thanks for the question and thanks for the compliment about our work! As to the impact of the work, from our Impact survey:
Invertebrate sentience was the second most commonly cited (13 mentions) piece of work that changed beliefs. It also prompted the second largest number of changed actions of all our work (alongside the EA Survey), including 1 donation influenced, 1 research inspiration, and 4 unspecified actions.
Informally, I could add that many people (probably >10) in the animal welfare space have personally told me our work on invertebrates changed their opinion about invertebrate sentience (though there is, of course, a chance these people were overemphasizing the work to me). A couple of academics have also privately told us they thought our work was worthwhile and useful to them. These people largely aren’t donors, though, and I doubt many of them have started to give to invertebrate charities.
That said, I think the impact of this project in particular is difficult to judge. The diffuse impact of possibly introducing or normalizing discussion of this topic is difficult to capture in surveys, particularly when the answers are largely anonymous, and the payoffs, even if people have been convinced to take the topic seriously, may not occur until there is an actionable intervention to support.
I don’t think it is true that the EA AW Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants with potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of PBM and cultivated meat relies on impacts 15-100 years away, and neither I nor, I believe, the other funders hold any intrinsic reason to discount or not consider other areas of animal welfare with long-term payoffs.
That said, as you suggest in (2), I do think it makes sense for the LTFF to focus more on thinking through and funding projects that assume AGI will come to exist. A hypothetical grant proposal focused on animal welfare but dependent on AGI would probably make sense for both funds to consider or consult each other on, and the details of the grant would determine whose domain we believe it ultimately falls under. We received applications at least somewhat along these lines in the prior grant round, and this is what happened.
Given the above, I think it’s fair to say we would consider grants with reasoning like in your post, but sometimes the ultimate decision for that type of grant may make more sense to be considered for funding by the LTFF.
On the question of what I think of the moral circle expansion type arguments for prioritizing animal welfare work within longtermism, I’ll speak for myself. I think you are right that the precise nature of how moral circles expand and whether such expansion is unidimensional or multidimensional is an important factor. In general, I don’t have super strong views on this issue though so take everything I say here to be stated with uncertainty.
I’m somewhat skeptical, to varying degrees, about the practical ability to test people’s attitudes about moral circle expansion reliably enough to gain the kind of confidence needed to determine whether that’s a more tractable way to influence the long run, and to determine, as you suggest it might, whether to prioritize clean meat research or advocacy against speciesism, which groups of animals to prioritize, or which subgroups of the public to target if attempting outreach. The reason for much of this skepticism (which you suggest as a possible limitation of the argument) is largely the uncertain transferability across domains and cultures, and the inherently wide error bars in understanding how significantly different facts of the world would affect responses to animal welfare (and everything else).
For example, supposing it were possible to develop cost-competitive clean meat in the next 30 years, I don’t know what impact that would have on human responses to wild animal welfare or insects, and I wouldn’t place much confidence, if any, in how people say they would respond to their hypothetical future selves facing that dilemma in 30 years (to say nothing of their ability to predict the demands of generations not yet born). Of course, reasons like this don’t apply to all of the work you suggested; say, surveys and experiments on the existing attitudes of those actively working in AI might tell us something about whether animals (and which animals, if any) would be considered by potential AI systems. Perhaps you could use this information to decide we need to ensure non-human-like minds are considered by those at elite AI firms.
I definitely would encourage people to send us any ideas that fall into this space, as I think it’s definitely worth considering seriously.
Hey Saulius,
I’m very sorry that you felt that way – that wasn’t our intention. We aren’t going to get into the details of your resignation in public, but as you mention in your follow-up comment, neither this incident nor our disagreement over WAW views was the reason for your resignation.
As you recall, you did publish your views on wild animal welfare publicly. Because RP leadership was not convinced by the reasoning in your piece, we rejected your request to publish it under the RP byline as an RP article representative of an RP position. This decision was based on the work itself; OP was not at all a factor involved in this decision. Moreover, we made no attempt to censor your views or prevent them from being shared (indeed I personally encouraged you to publish the piece if you wanted).
To add some additional context without getting into the details of this specific scenario, we can share some general principles about how we approach donor engagement.
We have ~40 researchers working across a variety of areas. Many of them have views about what we should do and what research should be done. By no means do we expect our staff to publicly or privately agree with the views of leadership, let alone with our donors. Still, we have a donor engagement policy outlining how we like to handle communication with donors.
One relevant dimension is this: if one of our researchers, especially while representing RP, is sending a funder something that plausibly implies one of the main funders of a department should seriously reduce or stop funding that department, we should know they are planning to do so, and roughly what is being said, before they do, so that we can be prepared. While we don’t want to be seen as censoring our researchers, we do think it’s important to approach these sorts of things with clarity and tact.
There are also times when we think it is important for RP to speak with a unified voice to our most important donors and represent a broader, coordinated consensus on what we think. And if minority views that RP leadership disagrees with are to be shared, this needs to be properly contextualized and coordinated so that we interact with our donors with full knowledge of what is being shared with them (for example, we don’t want to accidentally convey that the view of a single staff member represents RP’s overall position).
With regard to cause prioritization, funders don’t filter or factor into our views in any way. They haven’t been involved in setting what we do or don’t say in our cause prioritization work. Further, as far as I’m aware, OP hasn’t adopted the kind of approach we’ve suggested in any of our major cause prioritization work, whether on moral weights or in the CURVE sequence.