Centre for Exploratory Altruism Research (CEARCH)
Introduction
The Centre for Exploratory Altruism Research (CEARCH) emerged from the 2022 Charity Entrepreneurship Incubation Programme. In a nutshell, we do cause prioritization research, as well as subsequent outreach to update the EA and non-EA communities on our findings.
Exploratory Altruism
The Problem
There are many potential cause areas (e.g. improving global health, or reducing pandemic risk, or addressing long-term population decline), but we may not have identified what the most impactful causes are. This is the result of a lack of systematic cause prioritization research.
EA's three big causes (i.e. global health, animal welfare and AI risk) were not chosen by systematic research, but by historical happenstance (e.g. Peter Singer being a strong supporter of animal rights, or the Future of Humanity Institute influencing the early EA movement in Oxford).
Existing cause research is not always fully systematic; for lack of time, it does not always involve (a) searching for as many causes as possible (e.g. more than a thousand) and then (b) researching and evaluating all of them to narrow down to the top causes.
The search space for causes is vast, and existing EA research organizations agree that there is room for a new organization.
The upshot of insufficient cause prioritization research, and of not knowing the most impactful causes, is that we cannot direct our scarce resources accordingly. Consequently, global welfare is lower and the world worse off than it could be.
Our Solution
To solve this problem, CEARCH carries out:
A comprehensive search for causes.
Rigorous cause prioritization research, with (a) shallow research reviews done for all causes, (b) intermediate research reviews for more promising causes, and finally (c) deep research reviews for potential top causes.
Reasoning transparency and outreach to allow both the EA and non-EA communities to update on our findings and to support the most impactful causes available.
Our Vision
We hope to discover a Cause X every three years and significantly increase support for it.
Expected Impact
If you're interested in the expected impact of exploratory altruism, do take a look at our website (link), where we discuss our theory of change and the evidence base. Charity Entrepreneurship also has a detailed report out on exploratory altruism (link).
Team & Partners
The team currently comprises Joel Tan, the founder (link).
However, we're looking to hire additional researchers in the near future, so do reach out (link) if you're interested in working with us. Do also feel free to get in touch if you wish to discuss cause prioritization research/outreach, provide advice in general, or if you believe CEARCH can help you in any way.
Research Methodology
Research Process
Our research process is iterative:
Each cause is subject to an initial shallow research round of one week of desktop research.
If the cause's estimated cost-effectiveness is at least one order of magnitude greater than that of a GiveWell top charity, it passes to the intermediate research round of two weeks of desktop research and expert interviews.
Then, if the cause's estimated cost-effectiveness is still at least one order of magnitude greater than that of a GiveWell top charity, it passes to the deep research round of four weeks of desktop research, expert interviews, and potentially commissioned surveys and quantitative modelling.
The idea behind the threshold is straightforward: research at the shallower level tends to overestimate a cause's cost-effectiveness, so if a cause doesn't appear effective early on, it's probably not going to be a better-than-GiveWell bet, let alone a Cause X orders of magnitude more important than our current top causes. Consequently, it's likely a better use of time to move on to the next candidate cause than to spend more time on this particular cause.
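As a rough sketch of this gating logic (illustrative only: the GiveWell benchmark and the example estimates below are placeholder numbers, not CEARCH or GiveWell figures):

```python
# Hypothetical sketch of the round-gating logic described above.
# GIVEWELL_BENCHMARK is a placeholder, not an official GiveWell figure.

GIVEWELL_BENCHMARK = 600.0  # assumed DALYs per USD 100,000 for a GiveWell top charity

ROUNDS = {"shallow": "intermediate", "intermediate": "deep", "deep": "done"}

def next_round(current_round: str, estimated_dalys_per_100k: float) -> str:
    """Advance a cause only if its estimate is at least 10x the benchmark."""
    if estimated_dalys_per_100k < 10 * GIVEWELL_BENCHMARK:
        return "dropped"
    return ROUNDS[current_round]

print(next_round("shallow", 8_000))  # "intermediate": clears the 10x bar
print(next_round("shallow", 1_200))  # "dropped": promising, but under 10x
```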
Evaluative Framework
CEARCH attempts to identify a cause's marginal expected value (MEV):
MEV = t * Σₙ(p * m * s * c)
where, for each distinct benefit or cost n:
t = tractability, or proportion of problem solved per additional unit of resources spent
p = probability of benefit/cost
m = moral weight of benefit/cost accrued per individual
s = scale in terms of number of individuals benefited/harmed at any one point in time
c = persistence of the benefits/costs
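As a minimal illustration of how the formula is applied (with made-up parameter values, not figures from any of our analyses): the sum runs over each distinct benefit or cost, and costs can be entered with negative moral weights.

```python
# Illustrative MEV calculation: MEV = t * sum over effects of (p * m * s * c).
# All parameter values below are hypothetical.

def marginal_expected_value(t, effects):
    """t: tractability; effects: list of (p, m, s, c) tuples, one per benefit/cost."""
    return t * sum(p * m * s * c for (p, m, s, c) in effects)

effects = [
    (0.8, 0.05, 1_000_000, 10),  # a likely health benefit: probability, moral weight, scale, persistence
    (0.3, -0.01, 200_000, 5),    # a possible cost, entered with a negative moral weight
]
print(marginal_expected_value(t=1e-6, effects=effects))
```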
This can be viewed as an extension of the ITN framework, for this approach also takes into account the three ITN factors:
Importance: Factored in with p * m * s * c.
Tractability: Factored in with t.
Neglectedness: Factored in with (i) c, since the persistence of the benefits depends on how long the problem would have lasted and harmed people sans intervention, which in turn is a function of the extent to which the cause is neglected; and (ii) t, since tractability is a function of neglectedness to the extent that diminishing marginal returns apply.
However, the MEV framework has the following additional advantage:
Through c, it takes into account not just the decline (i.e. non-persistence) of a problem from active intervention (i.e. the neglectedness issue), but also decline from secular trends (e.g. economic growth reducing disease burden through better sanitation, nutrition, and greater access to healthcare).
In implementing the MEV framework, particular effort is made to brainstorm what benefits and costs there are, though in our experience the health effects tend to swamp the non-health effects.
For more details, refer to this comprehensive write-up on CEARCH's evaluative framework (link).
Research Findings
We recently finished conducting shallow research on nuclear war, fungal disease, and asteroid impact. To summarize our findings:
Nuclear War
Taking into account the expected benefits of denuclearization (i.e. fewer deaths and injuries from nuclear war), the expected costs (i.e. more deaths and injuries from conventional war due to weakened deterrence), and the tractability of lobbying for denuclearization, CEARCH finds the marginal expected value of lobbying for denuclearization to be 248 DALYs per USD 100,000, which is around 39% as cost-effective as giving to a GiveWell top charity.
For more details, refer to our cost-effectiveness analysis (link) on the matter as well as the accompanying research report (link).
Fungal Disease
Considering the expected benefits of eliminating fungal infections (i.e. fewer deaths, less morbidity and greater economic output) as well as the tractability of vaccine development, CEARCH finds the marginal expected value of vaccine development for fungal infections to be 1,104 DALYs per USD 100,000, which is around 1.7x as cost-effective as giving to a GiveWell top charity.
For more details, refer to our cost-effectiveness analysis (link) on the matter as well as the accompanying research report (link).
Asteroids
Factoring in the expected benefits of preventing asteroid impact events (i.e. fewer deaths and injuries) as well as the tractability of lobbying for asteroid defence, CEARCH finds the marginal expected value of such asteroid defence lobbying to be 1,352 DALYs per USD 100,000, which is around 2.1x as cost-effective as giving to a GiveWell top charity.
For more details, refer to our cost-effectiveness analysis (link) on the matter as well as the accompanying research report (link).
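As a quick back-of-the-envelope cross-check of the multipliers above (the GiveWell benchmark here is inferred from the nuclear war figures in this post, not taken from GiveWell directly):

```python
# Cross-check of the headline multipliers. The benchmark is backed out from
# 248 DALYs per USD 100,000 being ~39% of a GiveWell top charity; it is not
# an official GiveWell number.

implied_benchmark = 248 / 0.39    # ~636 DALYs per USD 100,000
print(1_104 / implied_benchmark)  # ~1.7x: fungal disease vaccine development
print(1_352 / implied_benchmark)  # ~2.1x: asteroid defence lobbying
```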
General Comments
The causes were selected purely out of interest, not because they were expected to be especially cost-effective. However, expectations at the outset were that, in terms of their cost-effectiveness, the causes would rank in the following way (in descending order):
Fungal diseases: Importance probably low compared to longtermist causes, though the problem is certain and there seem to be decently tractable solutions (e.g. advance market commitments).
Nuclear war: Change here is likely to be extremely intractable, while the per annum probabilities are fairly low if still meaningful.
Asteroid impact: High impact on occurrence but not neglected given DART, while the probability of occurrence is extremely low and one imagines that tractability isn't that great (effective but expensive).
The results (asteroid impact being the most cost-effective cause, followed by fungal disease, and then nuclear war) were hence moderately surprising. While we wouldn't over-update on such a small sample, we do think it's a data point against the value of intuition in selecting cause areas for initial cause prioritization research, and for making the effort to research as many causes as possible, even ones that do not seem especially important on the surface.
Going Forward
CEARCH will be publishing more detailed forum posts on nuclear war/fungal disease/asteroid impact, and will also continue doing research into additional causes, following the process and methodology outlined above. Comments and criticisms on our research methodology and on our specific research results are, of course, welcome.
Congratulations on launching your new organisation!
When I read your post I realised that I was confused by a few things:
(A) It seems like you think that there hasn't been enough optimisation pressure going into the causes that EA is currently focussed on (and possibly that "systematic research" is the only/best way to get sufficient levels of optimisation).
I think this is probably wrong for a few reasons:
1. There are quite a few examples of people switching between cause areas (e.g. Holden, Will M, Toby Ord moving from GHD to Longtermism). Also, organisations seem to have historically done a decent amount of pivoting (GiveWell → GiveWell Labs / Open Phil, 80k spinning out ACE, ...).
2. Finding Cause X has been a meme for a pretty long time, and I think looking for new causes/projects etc. has been pretty baked into EA since the start. I think we just haven't found better things because the things we currently have are very good according to some worldview.
3. My impression is that many EAs (particularly highly involved EAs) have done cause prioritisation themselves. Maybe not to the rigour that you would like, but many community members doing this work themselves, plus some aggregation from looking at what people end up doing, gives some data (although I agree it's not perfect). To some degree cause exploration happens by default in EA.
(B) I am also a bit confused about why the goal or proxy goal is to find a cause every 3 years. Is it 3 rather than 1 or 6 due to resource constraints, or is this number mostly determined by some a priori sense of how many causes there "should" be?
(C) Minor: You said that EA's big 3 cause areas are global health, animal welfare and AI risk. I am not sure what the natural way of carving up the cause area space is, but I'd guess that biosecurity should also be on this list. Maybe something pointing at meta EA, depending on what you think of as a "cause".
I think there are also good worldview-based explanations for why these causes should have been easy to discover and should remain among the main causes:
1. The interventions that are most cost-effective with respect to outcomes measured with RCTs (for humans) are GiveWell charity interventions. Also, for human welfare, your dollar tends to go further in developing countries, because wealthier countries spend more on health and consumption (individually and at the government level) and so already pick the lowest hanging fruit.
2. If you don't require RCTs or even formal rigorous studies, but still expect feedback on outcomes close to your outcomes of interest or remain averse to putting everything into a single one-shot (described in 3), you get high-leverage policy and R&D interventions beating GiveWell charities. Corporate and institutional farmed animal interventions will also beat GiveWell charities, if you also grant substantial moral weight to nonhuman animals.
3. If you aren't averse to allocating almost everything into shifting the distribution of a basically binary outcome like extinction (one-shotting) with very low probability, and you just take expected values through and weaken your standards of evidence even more (basically no direct feedback on the primary outcomes of interest), you get some x-risk and global catastrophic risk interventions beating GiveWell charities, and if you don't discount moral patients in the far future or don't care much about nonhuman animals, they can beat all animal interventions. AI risk stands out as by far the most likely and most neglected such risk to many in our community. (There are some subtleties I'm neglecting.)
Thanks a lot for the feedback!
(a) Agreed that there is a lot of research being done, and I think my main concern (and CE's too, I understand, though I won't speak for Joey and his team on this) is the issue of systematicity: causes can appear more or less important based on the specific research methodology employed, and so 1,000 causes evaluated by 1,000 people just doesn't deliver the same actionable information as 1,000 causes evaluated by a single organization employing a single methodology.
My main outstanding uncertainty at this point is just whether such an attempt at broad systematic research is really feasible given how much time research even at the shallow stage is taking.
I understand that GWWC is looking to do evaluation of evaluators (i.e. GiveWell, FP, CE, etc.), and in many ways, maybe that's far more feasible in terms of providing the EA community with systematic, comparative results: if you get a sense of how much more optimistic/pessimistic various evaluators are, you can penalize their individual cause/intervention prioritizations, and get a better sense of how disparate causes stack up against one another even if different methodologies/assumptions are used.
(b) The timeline for (hopefully) finding a Cause X is fairly arbitrary! I definitely don't have a good/strong sense of how long it'll take, so it's probably best to see the timeline as a kind of stretch goal meant to push the organization. I guess the other issue is how much more impactful we expect Cause X to be: the DCP global health interventions vary by a factor of around 10,000 in cost-effectiveness, and if you think that interventions within broad cause areas (i.e. global health vs violent conflict vs political reform vs economic policy) vary at least as much, then one might expect there to be some Cause X out there three to four orders of magnitude more impactful than top GiveWell stuff, but it's so hard to say.
(c) Wrote about the issue of cause classification in somewhat more detail in the response to Aidan below!
Very cool! A couple of q's:
sounds plausible to me, but curious why you think this.
On nuclear war, did you try to factor in the chance that a nuclear exchange could lead to a catastrophic collapse that leads to extinction?
(1) Theoretically, additional detail in your CEA means: (a) a more discrete and granular theory of change, which necessarily reduces the probability of success, and (b) trying to measure more flow-through effects/externalities, which, while typically positive, are more uncertain and also tend to be less important compared to the primary health effects measured. With the impact of (a) > (b), more research erodes the estimated cost-effectiveness (see the toy sketch after point (2) below).
(2) Empirically, and from past experience, this has been the case for various organizations, to my understanding. Eric Hausen has spoken about Charity Science Health's process (the more you look at something, the worse it seems), and GiveWell has written about this before, I believe (somewhere, might dig it up eventually!)
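To make (a) concrete, here's a toy example (hypothetical probabilities, not from any CEARCH CEA): keeping the original lumped step but modelling the extra steps that a more granular theory of change surfaces multiplies the chain's success probability down.

```python
# Toy illustration of point (a): a more granular theory of change multiplies in
# extra sub-steps, each with probability < 1, lowering the estimated chance the
# whole chain succeeds. All numbers are hypothetical.

coarse_toc = [0.7]              # one lumped "the lobbying works" step
granular_toc = [0.7, 0.9, 0.9]  # same step, plus two sub-steps surfaced by deeper modelling

def chain_probability(steps):
    prob = 1.0
    for p in steps:
        prob *= p
    return prob

print(chain_probability(coarse_toc))    # 0.7
print(chain_probability(granular_toc))  # ~0.57
```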
These were also two questions that jumped to mind for me as I read this post.
On the catastrophic collapse issue: no, didn't look at that! It wouldn't change the headline cost-effectiveness that much, but it might depend on your views on astronomical waste.
Is CEARCH pronounced āsearchā?
Yep! My fellow 2022 CE incubatees and I probably spent more time than was wise brainstorming cool-sounding names and backronyms. In hindsight, perhaps I should have just gone with Cause Research Advancement and Prioritization (CRAP)!
Whatever the answer, I don't think it can be prevented
Love to see it.
Because of the way you'd framed the problem (that people have rarely evaluated thousands of causes), I was expecting the "shallow" research round to be a lot shorter than a week. At that rate, if you wanted to do shallow research on 1000 causes a year, you'd need 20 researchers.
You're absolutely right that the shallow research part is fairly time-intensive, and not at all ideal. I had started out thinking one could get away with <=1 day's worth of research at the shallow stage, but I found that just wasn't sufficient for a high-confidence evaluation (taking into consideration the research, the construction of a CEA, the double-checking of all calculations, writing up a report, etc.). To put things in context, Open Phil takes a couple of weeks for their shallow research; bringing that down to 1 week already involves considerable sacrifice (not being able to get expert opinions beyond what is already published), and getting it further down to 1-3 days would be too detrimental to research quality, I think.
Aside from attempting to shorten the research process, ramping up the size of the research team would be the obvious solution, as you say, and it's what I'll be trying to pursue in the near term. Of course, funding constraints (at the organizational level) and general talent constraints (at the movement level) will probably limit us. Hence, I'm fairly enthusiastic about Akhil's and Leonie's Cause Innovation Bootcamp!
That makes a lot of sense. I find things often take what feels like a long time, even when you're trying to go fast.
Exciting stuff! Looking forward to seeing what you come up with. I agree that the movement has not been systematic enough on cause prioritisation.
One thing I'm curious about is where you draw the line on:
(a) Where one cause ends and the other begins / how to group causes:
For example, aren't fungal diseases, nuclear war and asteroids all sub-causes of global health, in that we only (or at least mainly) care about them insofar as they threaten global health? AI safety is the same (except that in addition to mattering because it threatens health, it also matters because it has the opportunity to bring about happiness).
(b) Where causes end and interventions begin:
You're measuring the promise of these cause areas in DALYs per $100k, which means you've started thinking about the solutions already. Is CEARCH doing intervention exploration too?
(a) It's definitely fairly arbitrary, but the way I find it useful to think about it is that causes are problems, and you can break them down into:
High-level cause area: The broadest possible classification, like (i) problems that primarily affect humans in the here and now; (ii) problems that affect non-human animals; (iii) problems that primarily affect humans in the long run; and (iv) meta problems to do with EA itself.
Cause Area: High-level cause domains (e.g. neartermist human problems) can then be broken down into various intermediate-level cause areas (e.g. global disease and poverty → global health → communicable diseases → vector-borne diseases → mosquito-borne diseases) until they reach the narrowest, individual cause level.
Cause: At the bottom, we have problems that are defined in the most narrow way possible (e.g. malaria).
In terms of what level cause prioritization research should focus on: I'm not sure there is a single optimal level. On the one hand, going narrow makes the actual research easier; on the other, you increase the amount of time needed to explore the search space, and also risk missing out on cross-cause solutions (e.g. vaccines for fungal diseases in general and not just, say, candidiasis).
(b) I think Michael Plant's thesis had a good framing of the issue, and at the risk of summarizing his work poorly, I think the main point is that if causes are problems then interventions are solutions, and since we ultimately care about solving problems in a way that does the most good, we can't really do cause prioritization research without also doing intervention evaluation.
The real challenge is identifying which solutions are the most effective, since at the shallow research stage we don't have the time to look into everything. I can't say I have a good answer to this challenge, but in practice I would just briefly research what solutions there are and choose what superficially seems like the most effective. On the public health front, where the data is better, my understanding is that vaccines are (maybe unsurprisingly) very cost-effective, and the same goes for gene drives.