Thanks for the question, Edo!
We keep a large list of project ideas, and regularly add to it by asking others, including staff, funders, advisors, and organizations in the spaces we work in, for project ideas.
Hey Edo, thanks for the question!
We’ve had some experience working with volunteers. In the past, when we had less operational support than we do now, we found it challenging to manage and monitor volunteers, but we think it’s something we’re better placed to handle now and may explore again in the coming years, though we are generally hesitant about depending on free labor.
We’ve not really had experience publicly outsourcing questions to the EA community, but we regularly consult wider EA communities for input on questions we are working on. Finally, and I’m not sure this is what you meant, but we’ve also partnered with Metaculus on some forecasting questions.
Hey Josh, thanks for the question!
From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered.
At the operational level, we set targets for the percentage of time we want to spend on each cause area based on these factors, and we re-evaluate as our existing commitments, the data, and changes in our opinions about these matters warrant.
I think it’s going great! I think our combined skillset is a big pro when reviewing work and considering project ideas. In general, I think bouncing ideas off each other improves and sharpens our ideas. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization.
Additionally, Peter and I get along great and I enjoy working alongside him every day (well, digitally anyway, given we are remote).
Thanks for the question!
We hire for fairly specific roles, and the difference between those we do and don’t hire isn’t necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).
That said, we generally prioritize ability in writing, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counter-points, and meta-considerations on a topic; to produce quantitative models and do data analysis when appropriate (obviously this is more relevant in some roles than others); and to compile this information into understandable writing that highlights the important features and addresses topics with clarity. However, which combination of these skills is most desired at a given time depends on current team fit and the role each hire would be stepping into.
For these reasons, it’s difficult to say with precision which skills I’d hope for more of among EA researchers. With those caveats, I’d still say a demonstration of these skills through producing high-quality work, be it academic or in blog posts, is in fact a useful proxy for the kinds of work we do at RP.
Thanks for the questions!
On (1), we see our work in WAW as currently doing three things: (1) foundational research (e.g., understanding moral value and sentience, understanding well-being at various stages of life), (2) investigating plausible tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (3) field building and understanding (e.g., currently we are running polls to see how “weird” the public finds WAW interventions).
We generally defer to WAI on matters of direct outreach (both academic and general public) and do not prioritize that area as much as WAI and Animal Ethics do. It’s hard to say more on how our vision differs from WAI without them commenting, but we collaborate with them a lot and we are next scheduled to sync on plans and vision in early January.
On (2), it’s hard to predict exactly what additional restricted donations do, but in general we expect them to increase how much we spend in a cause over the long run by an amount similar to how much is donated. Reasons for this include: we budget on a fairly long-term basis, so we generally try to predict what we will spend in a space and then raise that much funding. If we don’t raise as much as we’d like, we’d likely consider allocating our expenses differently; and if we raise more than we expected, we’d scale up our work in that cause area. Because our ability to work in spaces is influenced by how much we raise, raising more restricted funding in a space generally ought to lead to us doing more work in that space.
Thanks for the question!
I think the short answer is this: what we think of doing projects in the “improving collective understanding” space depends on a number of factors, including the nature of the project, the probability of that general change in perspective leading to changed actions in the future, and how important it would be if that change occurred.
One very simplistic model you can use to think about possible research projects in this area is:
1. Big considerations (classically “crucial considerations”, e.g., moral weight, invertebrate sentience)
2. New charities/interventions (presenting new ideas or possibilities that can be taken up)
3. Immediate influence (analysis to shift ongoing or pending projects, donations, or interventions)
It’s far easier to tie work in categories (2) or (3) to changed behavior. By contrast, projects or possible research that falls into (1) can be very difficult to map to specific plausible changes ahead of time and, sometimes, even after the completion of the work. These projects can also be more boom-or-bust: the results of investigating them could have huge effects if we or others shift our beliefs, but they may be fairly unlikely to change beliefs at all. That said, I think these types of projects can be very valuable and we try to dedicate some of our time to doing them.
I think it’s fair to say these types of “improving some collective understanding of prioritization” projects have been a minority of the projects we’ve done and of those listed for the coming year. However, there are many caveats here, including but not limited to:
The nature of the project, our fit, and what others are working on have a big impact on which projects we take on. So even if, in theory, we thought a particular research idea was really worth pursuing, there are many factors that go into whether we take on a particular project.
These types of projects have historically taken longer to complete, so they may be smaller in number but a larger share of our overall work hours than counting projects would suggest at first glance.
Hey, I’m happy to see this on the forum! I think farmed shrimp interventions are a promising area and this report highlights some important considerations. I should note that Rethink Priorities has also been researching this topic for a while. I won’t go into detail, as I’m not leading this work and the person who is currently is on leave, but I think we’ve tentatively come to some different conclusions about the most promising next steps in this domain.
In the future, if anyone reading this is inclined to work on farmed shrimp, in addition to reviewing this report I’d hope you’d read over our forthcoming work and/or reach out to us about this area.
I think 1 and 2 should result in the exact same experiences (and hence same intensity) since the difference is just some neurons that didn’t do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference to me seems unphysical, because they didn’t do anything in 1 where they were present.
I’m unclear why you think proportion couldn’t matter in this scenario.
I’ve written a pseudo-program in Python below in which proportion does matter, removing neurons that don’t fire alters the experience, and the raw number of neurons involved is incidental to the outputs (10 out of 100 gets the same result as 100 out of 1,000) [assuming there is a set of neurons to be checked at all]. I don’t believe consciousness works this way in humans or other animals, but I don’t think anything about this is obviously incorrect given the constraints of your thought experiment.
One place where this might be incorrect is in checking whether a neuron is firing: this might be seen as violating the constraint on the inactive neurons actually being inactive. But the checking could be conceived of as a third group of neurons receiving input from this set (see the sketch after the program below). Even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
def pain_intensity(level):
    # Placeholder: stands in for whatever "experiencing pain at this intensity" amounts to.
    return level

def experience_pain(nociceptive_neurons_list):
    # nociceptive_neurons_list is a list of neurons represented by 0's and 1's,
    # where 1 means an individual neuron is firing and 0 means it is not.
    proportion_firing = proportion_of_neurons_firing(nociceptive_neurons_list)
    if proportion_firing < 0.3:
        return pain_intensity(1)
    elif 0.3 <= proportion_firing < 0.6:
        return pain_intensity(2)
    elif 0.6 <= proportion_firing < 1:
        return pain_intensity(5)
    elif proportion_firing == 1:
        return pain_intensity(10)
    else:
        return pain_intensity(0)

def proportion_of_neurons_firing(nociceptive_neurons_list):
    num_neurons_firing = 0
    for neuron in nociceptive_neurons_list:
        if neuron == 1:
            num_neurons_firing += 1  # add 1 for every neuron that is firing
    return num_neurons_firing / get_number_of_pain_neurons(nociceptive_neurons_list)  # the proportion firing

def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # total number of neurons in the set

pain_list_all_neurons = [0, 0, 0, 1, 1]
pain_list_only_firing = [1, 1]

experience_pain(pain_list_all_neurons)   # returns pain_intensity(2)
experience_pain(pain_list_only_firing)   # returns pain_intensity(10)
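To make the “third group of neurons” variant above concrete, here is a minimal sketch of it (the monitor-layer framing and all names here are my own illustration, not part of the original thought experiment): the checking is done by a separate population of monitor neurons receiving input from the nociceptive set, so the inactive nociceptive neurons themselves still do nothing.

def monitor_layer(nociceptive_neurons_list):
    # Each monitor neuron fires iff its paired nociceptive neuron fires.
    return [1 if neuron == 1 else 0 for neuron in nociceptive_neurons_list]

def experience_pain_via_monitors(nociceptive_neurons_list):
    # The proportion is read off the monitor layer; removing silent neurons
    # removes their paired monitors too, so the proportion (and hence the
    # output) still changes, without the silent neurons doing anything.
    return experience_pain(monitor_layer(nociceptive_neurons_list))

experience_pain_via_monitors([0, 0, 0, 1, 1])  # pain_intensity(2), as before
experience_pain_via_monitors([1, 1])           # pain_intensity(10)

On this variant, the proportion-sensitive computation happens entirely in neurons that receive input, which seems to respect the constraint that the inactive neurons themselves stay inactive.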
We in fact do (1) then (2). However, to continue your example, donations to animal work still end up going to animals. If it were the case, say, that we hit the animal total needed for 2020 before the overall total, additional animal donations would go to animal work for 2021.*
It is true in this scenario that in 2020 we’d end up spending less unrestricted funding on animals, but the total spent on animals that year wouldn’t change and the animal donations for 2020 would not then be spent on non-animal work.
*We would very much state publicly when we have no more room for further donations in general, and by cause area.
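To illustrate the accounting in that example, here is a minimal sketch of the rollover rule (the function and dollar amounts are hypothetical, purely for illustration): restricted animal donations fill the current year’s animal budget first, and any overflow funds animal work the following year rather than non-animal work.

def allocate_animal_donation(amount, target_2020, raised_2020):
    # Fill whatever room remains in the 2020 animal budget first.
    room_2020 = max(target_2020 - raised_2020, 0)
    to_2020 = min(amount, room_2020)
    # Overflow stays restricted to animal work, rolling into 2021.
    to_2021 = amount - to_2020
    return {"animal_2020": to_2020, "animal_2021": to_2021}

# Hypothetical numbers: a $500k animal target with $480k already raised.
# A further $50k restricted donation fills the remaining $20k for 2020,
# and the other $30k funds animal work in 2021, never non-animal work.
print(allocate_animal_donation(50_000, 500_000, 480_000))
# {'animal_2020': 20000, 'animal_2021': 30000}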
Internally, as part of Rethink Charity, we have fairly standard formal anti-harassment, discrimination, and reasonable accommodation policies. That is, we comply with all relevant anti-discrimination laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). We explicitly prohibit offensive behavior (e.g., derogatory comments toward colleagues of a specific gender or ethnicity).
We also provide a way for any of our staff to offer anonymous feedback and information to senior management (which can help assist someone in reporting a claim of harassment or discrimination).
Finally, I’d note that during our hiring round last year we actively sought out and promoted our job openings to a diverse pool of candidates, and we tracked the performance of our hiring on these metrics. We plan to continue this going forward.
Thanks for the question. We have forthcoming work on ballot initiatives which will hopefully be published in January and other work that we plan to keep unpublished (though accessible to allies) for the foreseeable future.
In addition, we have some plans to investigate potentially high value policies for animal welfare.
On CE’s work, we communicate with them fairly regularly about their work and their plans, in addition to reading and considering the outputs of their work.
I honestly don’t know. I’d probably be doing research at another EA charity, or potentially leading (or trying to lead) a slightly different EA charity that doesn’t currently exist. Generally, I have previously seriously considered working at other EA organizations but it’s been some time since I’ve seriously considered this topic.
Thanks for the question and thanks for the compliment about our work! As to the impact of the work, from our Impact survey:
Invertebrate sentience was the second most commonly cited piece of work (13 mentions) that changed beliefs. It also had the second-largest number of changed actions of all our work (alongside the EA Survey), including 1 donation influenced, 1 research inspiration, and 4 unspecified actions.
Informally, I could add that many people (probably >10) in the animal welfare space have personally told me they think our work on invertebrates changed their opinion about invertebrate sentience (though there is, of course, a chance these people were overemphasizing the work to me). A couple of academics have also privately told us they thought our work was worthwhile and useful to them. These people largely aren’t donors, though, and I doubt many of them have started to give to invertebrate charities.
That said, I think the impact of this project in particular is difficult to judge. The diffuse impact of possibly introducing or normalizing discussion of this topic is difficult to capture in surveys, particularly when the answers are largely anonymous, and even if people have been convinced to take these questions seriously, the payoffs may not occur until there is an actionable intervention to support.
We have raised half his salary for 2020 and 2021 on a grant explicitly for this purpose. If you’d like to talk more about this, I’d be happy for you to shoot me an email: marcus [at] rtcharity.org
Thanks for the question! We do research, informed by input from funders, organizations, and researchers, that we think will help funders make better grants and help direct the work organizations do toward higher-impact work.
So our plans for distribution vary by the audience in question. For funders and particular researchers, we make direct efforts to share our work with them. Additionally, we try to have regular discussions about our work and priorities with the relevant EA research communities (researchers themselves and org leaders). However, as we said recently in our impact and strategy update, we think we can do a better job of this type of communication going forward.
For the wider EA community, we haven’t undertaken significant efforts to drive more discussion on posts but this is something potentially worth considering. I’d say one driver of whether we’d actually decide to do this would be if we came to believe more work here would potentially increase the chances we hit the goals I mentioned above.
Thanks for the question! We do not view our work as necessarily focused on the West. To the extent our work so far has focused on such countries, it’s because that’s where we think our comparative advantage has centered, but as our team learns, and possibly grows, this won’t necessarily hold over time.
Thanks for the question! To echo Ozzie, I don’t think it’s fair to directly compare the quality of our work to the quality of GPI’s work given we work in overlapping but quite distinct domains with different aims and target audiences.
Additionally, we haven’t prioritized publishing in academic journals, though we have considered it for many projects. We don’t believe publishing in academic journals is necessarily the best path towards impact in the areas we’ve published in given our goals and don’t view it as our comparative advantage.
All this said, we don’t deliberately err more towards quantity over quality, but we do consider the time tradeoff of further research on a given topic during the planning and execution phases of a project (though I don’t think this is in any way unique to us within EA). We do try to publish more frequently because of our desire for (relatively) shorter feedback loops. I’d also say we think our work is high quality but I’ll let the work speak for itself.
Finally, I take no position on whether EA organizations in general ought to err more or less towards academic publications as I think it depends on a huge number of factors specific to the aims and staffs of each organization.
My ranges represent what I think is a reasonable position on the probability of each creature’s sentience given all current input and expected future input. Still, as I said:
...the range is still more of a guideline for my subjective impression than a declaration of what all agents would estimate given their engagement with the literature
I could have made a 90% subjective confidence interval, but I wasn’t confident enough that such an explicit framing, in forming or communicating my estimates, would be helpful.
Thanks for the questions!
I think this depends on many factual beliefs you hold, including which groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than a 0.1% chance of sentience), I think farmed and wild animals are plausible candidates for enduring some of the worst suffering.
Specifically, I’d say some of the worst persistent current suffering is plausibly experienced by farmed chickens and fish, and thus work to reduce the worst aspects of their conditions is a decent bet to prevent extreme suffering. Similarly, wild animals likely experience the largest share of extreme suffering currently, because of their sheer numbers and the nature of lives lived largely without interventions to prevent, say, the suffering of starvation or extreme physical pain. For these reasons, work to improve conditions for wild animals plausibly could be a good investment.
Still restricted to the present, and outside the typical EA space altogether, I think it’s plausible much of the worst suffering in the world is committed during war crimes or torture under various authoritarian states. I do not know if there’s anything remotely tractable in this space or what good donation opportunities would be.
If you broaden consideration to include the future, a much wider set of creatures plausibly could experience extreme suffering including digital minds running at higher speeds, and/or with increased intensity of valenced experience beyond what’s currently possible in biological creatures. Here, what you think is the best bet would depend on many empirical beliefs again. I would say, only, that I’m excited about our longtermism work and think we’ll meaningfully contribute to creating the kind of future that decreases the risks of these types of outcomes.