The case of the missing cause prioritisation research

Introduction / summary

In 2011 I came across Giving What We Can, which shortly blossomed into effective altruism. Call me a geek if you like but I found it exciting, like really exciting. Here were people thinking super carefully about the most effective ways to have an impact, to create change, to build a better world. Suddenly a boundless opportunity to do vast amounts of good opened up before my eyes. I had only just got involved and, by giving to fund bednets, had already magnified my impact on the world 100 times.

And this was just the beginning. Obviously bednets were not the most effective charitable intervention, they were just the most effective we had found to date – with just a tiny amount of research. Imagine what topics could be explored next: the long-run effects of interventions, economic growth, political change, geopolitics, conflict studies, etc. We could work out how to compare charities in vastly different cause areas, or how to do good beyond donations (some people were already starting to talk about career choices). Some people said we should care about animals (or AI risk); I didn’t buy it (back then), but imagine – we could work out which causes different value sets lead to, and the best charities for each.

As far as I could tell the whole field of optimising for impact seemed vastly under-explored. This wasn’t too surprising – most people don’t seem to care that much about doing charitable giving well, and anyway it was only just coming to light how truly bad our intuitions are at making charitable choices (with the aid-scepticism movement of the early 2000s).

Looking back, I was optimistic. In some regards that optimism was well-placed. In terms of spreading ideas, my small group of geeky uni friends went on to create something remarkable: to shift £m if not £bn of donations to better causes, and to help thousands, maybe hundreds of thousands, of people make better career decisions. I am no longer surprised if a colleague, Tinder date or complete stranger has heard of effective altruism (EA) or gives money to AMF (a bednet charity).

However, in terms of the research I was so excited about – developing the field of how to do good – there has been minimal progress. After nearly a decade, bednets and AI research still seem to be at the top of everyone’s Christmas donations wish list. I think I assumed that someone had got this covered, that GPI or FHI or whoever would have answers, or would at least make progress on cause research sometime soon. But last month, whilst trying to review my career, I decided to look into this topic, and, oh boy, there just appears to be a massive gaping hole. I really don’t think it is happening.

I don’t particularly want to shift my career to do cause prioritisation research right now. So I am writing this piece in the hope that I can either have you, my dear reader, persuade me this work is not of utmost importance, or have me persuade you to do this work (so I don’t have to).

A. The importance of cause prioritisation research

What is your view on the effective altruism community and what it has achieved? What is the single most important idea to come out of the community? Feel free to take a moment to reflect. (Answers on a postcard, or comment).

It seems to me (predictably, given the introduction) that far and away the most valuable thing EA has done is the development and promotion of cause prioritisation as a concept. This idea seems (shockingly and unfortunately) unique to EA.[1] It underpins all EA thinking, guides where EA-aligned foundations give, and leads to people seriously considering novel causes such as animal welfare or longtermism.

This post mostly focuses on the current progress and neglectedness of this work over the past few years. But let us start with a quick recap of why cause prioritisation research might be important and tractable. The argument is nicely set out in Paul Christiano’s The Case for Cause Prioritization as the Best Cause (written 2013-14). To give a short summary, Paul says:

1. Some causes are significantly higher impact than others. We theoretically expect and empirically observe impact to be “heavy tailed”, with some causes being orders of magnitude more impactful than others (see also Prospecting for Gold, and the toy sketch after this list). We should not yet be confident in our top causes, and many of our current approaches to improving the world rely on highly speculative assumptions (e.g. about long-term effects). So if we could make progress on prioritisation we should expect to have a large positive impact.

2. It is reasonable to think that research would make progress because:

  • Very little research has been done on this so far.

  • The work that has been done suggests that progress is difficult but not impossible.

  • We can see research programs that could be useful (see some of my ideas below).

  • Human history reflects positively on our ability to build a collective understanding of a difficult subject and eventually make headway.

  • Even if difficult, we should at least try! We would learn why such research is hard and should keep going until we reach a point of diminishing returns.
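
To make point 1 concrete, here is a minimal sketch in Python of why prioritisation pays when impact is heavy-tailed. The lognormal shape and the sigma=2 spread are purely illustrative assumptions on my part, not estimates from Paul’s post or from any data:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy model: the cost-effectiveness of candidate causes is drawn from a
# lognormal distribution (a common stand-in for "heavy-tailed").
# sigma=2 is an illustrative assumption, not an estimate.
impacts = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)

print(f"median cause:     {np.median(impacts):10.1f}")
print(f"mean cause:       {impacts.mean():10.1f}")
print(f"top 1% threshold: {np.percentile(impacts, 99):10.1f}")
print(f"best cause found: {impacts.max():10.1f}")
```

Under these assumptions the best of 10,000 hypothetical causes comes out thousands of times better than the median one, so research that reliably moves resources even part way towards the tail is worth a lot.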

(Also, this week 80000 Hours published this: Why global priorities research is even more important than I thought.)

In short:

Cause prioritisation is hugely valuable to guide how we do good.

B. The case of the missing cause prioritisation research

Let me take you through my story, and set out some of the research gaps as I have experienced them.

Community building

From 2013 until 2017 I ran the EA community in London. I set myself the goal of building a vibrant, welcoming and cohesive community, and I like to think I did OK. But occasionally the intellectual framework was just not there. For a while I might say “we are a new community, we don’t yet have the answer to this”, but after a few years the excuse got thin. The research on specific cause areas got deeper, but the cause prioritisation research did not. In particular I struggled to provide materials to people whose thinking did not fall close to classical utilitarian lines.[2]

And it was damaging. It is damaging. More and more, as I look across the EA movement, I see that the people who join are not open-minded souls keen to understand what it means to do the most good, but people who are already focused on the causes we champion: global development or animal welfare or preventing extinction risk. Now I love my cause-committed compatriots, but I do think we are at risk of creating a community that is unwelcoming to the true explorers, a community that is intellectually entrenched and forever doomed to see only those three cause areas.

I think we need to do cause prioritisation from the point of view of different value sets and different cultures. This is important for building a good community, especially for spreading to other countries (as discussed here and here). This is also important for reaching truth. Different people with different life experiences will not only ask different questions, but have different hypotheses about what the answers might be.[3]

I could say more on this but honestly I think most of it is covered in the amazing post Objections to value alignment between EAs by CarlaZoeC, which I recommend you check out.

Parliament

One thing I notice is that, with few exceptions, the path to change for EA folk who want to improve the long-run future is research. They work at research institutions, design AI systems, fund research, support research. Those that do not do research seem to be trying to accumulate power or wealth or CV points in the vague hope that at some point the researchers will know what needs doing.

After community building I moved back into policy, and most recently have been building support for future generations in the UK Parliament. Not research. Not waiting. But creating change.

From this vantage point it doesn’t feel like the EA community has thought much about policy. For example, there is a huge focus on AI policy, but the justification for this is weak. Even if you fully believe the longtermist arguments that top programmers should work on AI alignment, it does not immediately follow that good policy people can have more long-term impact in AI policy than in policy on resilience, macroeconomics, institution design, nuclear non-proliferation, climate change, democracy promotion, political polarisation, etc, etc.

Most of the cause prioritisation research to date has focused on how to do good with money. But there is very little on how to do good if you have political capital, public status, media influence and so on. Trying to weigh up and compare all the different policy approaches I list above would be a mighty undertaking, and I do not expect answers soon, but it would be nice to see someone trying to take on the task rather than focusing solely on where to shift money.

My own values

Most recently I have been thinking about what career route to go down next, what my values are, and what has been written on cause prioritisation.

Looking around, it feels like there is a split down the middle of the EA community:[4]

  1. On the one hand you have the empiricists: those who believe that doing good is difficult, that common sense leads you astray, and that to create change we need hard data, ideally at least a few RCTs.

  2. On the other side are the theorists: those who believe you just need to think really hard, that to choose a cause we need expected value calculations, and that it matters not if the calculations are highly uncertain when the numbers tend to infinity.

Personally I find myself somewhat drawn to the uncharted middle ground. Call me indecisive if you like, but it appears to me that both ends of this spectrum are making errors in judgement. Certainly neither of the approaches above comes close to how well-run government institutions or large successful corporations make decisions.

(I also don’t think these two camps are as far apart as it first seems. If you look at the structural change and policy research GiveWell is interested in, it is not too far away from longtermist research suggestions on institutional change.)

I think this split provides a way of breaking down the work I would love to see:

Beyond RCTs – It would be lovely to see the ‘empiricists’ crew move beyond basic global health, to have them say “great, we have shown that you can, despite the challenges, identify interventions that work and compare them. Now let’s get a bit more complicated: do some more research, find other interventions, and consider long-run effects”. There could be research looking for strong empirical evidence on:

  • the second order or long run effects of existing interventions.

  • how to drive economic growth, policy change, structural changes, and so forth.

  • unexplored areas that could be highly impactful, such as access to painkillers or mental health. (There could be experimental hits-based giving.)

It honestly shocks me that the EA community has made so little progress in this space in a decade.

Beyond speculation – It would be great if the ‘theorists’ looked a bit more at making their claims credible. From my point of view, I can save a human life for ~£3000. I don’t want to let kids die needlessly if I can stop it. I personally think that the future is really important, but before I drop the ball on all the things I know will have an impact it would be nice to have:

  • Some evidence that we can reliably affect the future: What empirical evidence is there that we can reliably impact the long run trajectory of humanity and how have similar efforts gone in the past?

  • Cause and intervention prioritisation. What are the options – the causes and interventions to influence the long term – and which of these can be practically impacted, have feedback loops that can be used for judging success, and so forth? I would love to see more comparisons of causes like improving institutions, increasing economic growth, global conflict prevention, etc.

  • Less dodgy reasoning. I am not going to go into all the errors, groupthink, and mistakes here that I think EA longtermists often make. Let me give just one example: if you look at best practice in risk assessment methodologies[5] it looks very different from the naive expected value calculations used in EA – if someone tells me to dedicate my life to stopping global risks, it would be good if I were confident they actually understood risk mitigation. I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision making under deep uncertainty (or Knightian uncertainty) used in policy design, military decision making and climate science, but rarely discussed in EA (see the toy sketch below).
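
To illustrate – and only to illustrate – how these toolkits can disagree, here is a toy sketch in Python. All the payoffs, probabilities and the two stylised ‘interventions’ are invented; the point is just that a naive expected value ranking and a worst-case (maximin) ranking, one of the simpler tools in the deep-uncertainty literature, can pull in opposite directions:

```python
# Toy contrast between naive expected value and Wald's maximin criterion,
# one standard tool from decision making under deep uncertainty.
# All payoffs, states and probabilities below are invented for illustration.

payoffs = {
    # lives-saved-equivalents under three hypothetical states of the world
    "bednets":     {"good": 100, "middling": 100, "bad": 100},
    "speculative": {"good": 1_000_000, "middling": 10, "bad": 0},
}
guessed_probs = {"good": 0.01, "middling": 0.5, "bad": 0.49}

# Naive expected value: trust the guessed probabilities completely.
ev = {action: sum(p * states[s] for s, p in guessed_probs.items())
      for action, states in payoffs.items()}

# Maximin: ignore the guessed probabilities entirely and pick the action
# with the best worst-case outcome.
worst_case = {action: min(states.values()) for action, states in payoffs.items()}

print("expected value picks:", max(ev, key=ev.get))                  # speculative
print("maximin picks:       ", max(worst_case, key=worst_case.get))  # bednets
```

Neither rule is ‘correct’ – the point is that which one you reach for is itself a research question, and one the community has barely touched.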

In short:

You could categorise this research in a bunch of different ways, but if I had to make a list, the projects I would be super excited to see are:

  1. The basics: I think we could see progress just by doing investigations of a broad range of different potentially top causes and comparisons across causes. (The search for “cause X”).

  2. Consideration of different views and ethics and how this affects what causes might be most important.

  3. Consideration of how to prioritise depending on the type of power you have, be it money or political power or media influence or something else.

  4. Empirical cause selection beyond RCTs. The impact of system change and policy change in international development and more consideration of second order effects.

  5. Theoretical cause selection beyond speculation. Evidence of how to reason well despite uncertainty and more comparisons of different causes.

This research would ensure that we continue to learn how to do good, do not become entrenched in our ways, and take the actions that will have the biggest impact on the world.

C. Whodunnit?

So is anyone doing this? Let’s run through my list.

[Edit: disclaimer – I have looked through organisations’ plans, research agendas and so forth and done the best I can, but I did not invest time in talking to people at all the organisations in this space, so it is possible I have mischaracterised specific organisations compared to how they would describe themselves – apologies.]

1. The basics – partially happening – 5/10

Shallow investigations of how to do good within a few cause areas are being done by Open Philanthropy Project (OpenPhil) and to a lesser extent by Founders Pledge (FP). The main missing part is that there is little written that compares across these different causes or looks at how one might prioritise one cause over another (except for occasional mentions in the FP reports and the OpenPhil spreadsheets here and here).

More granular, but still high level intervention research is being done by Charity Entrepreneurship.

2. Different views – not happening – 0/10

No organisation is doing this. There is no systematic work in this space. The most that is going on is a few individuals or small groups that have taken up specific approaches (still largely hedonistic-utilitarianism-adjacent) and run with them, such as the Happier Lives Institute (HLI) or the Organisation for the Prevention of Intense Suffering (OPIS).

3. Policy and beyond – not happening – 2/10

No organisation is doing research into how to prioritise if you have political power or media influence or something other than money. 80000 Hours (80K) appeared to do some of this in the past but are now focusing on their priority paths. They have said that the details of what those paths are may change, but it is unclear whether such changes would come from their own research or from others’ research. Either way, the rough direction feels fairly set, so I do not expect much more high-level cause prioritisation research from them soon.

4. Beyond RCTs – not happening – 1/10

GiveWell keeps setting out plans to expand the scope of their research (see 2018 plans and 2019 plans) and, in their own words, they “failed to achieve this goal” (see 2018 review and 2019 review). When asked, they said: “We primarily attribute this to having a limited number of staff who were positioned to conduct this work, and those staff having many competing demands on their time … we are continuing to hire and expect this will enable us to make additional progress in new areas.” I am not super optimistic, given that their 2020 plan for new research is less ambitious than previous plans insofar as it focuses solely on public health.

Open Philanthropy are mostly deferring to GiveWell, although they express support for GiveWell’s unmaterialised plans to expand their research, and they are funding the Center for Global Development’s policy work. The only useful new research in this space seems to be a small amount of work from Founders Pledge; it is unclear to what extent they plan to do more work in this area.

5. Beyond speculation (practical longtermism) – partially happening – 6/10

The best source of research and experimentation in this space is again OpenPhil. They are experimenting with trying to influence policy related to the far future and doing research on topics relevant to longtermism. However, as already highlighted, it is unclear whether OpenPhil are comparing different causes against each other, rather than looking for giving opportunities within a variety of causes and seeing what they can fund and what the impact of that will be.

The Global Priorities Institute (GPI) are looking to improve the quality of thinking in this space. So far they have produced only philosophy papers. It is useful stuff, and valuable for building traction in academia, but personally I am pretty sceptical about humans solving philosophy any time soon and would rather have some answers within the next few decades.

There are a few others doing small amounts of research on specific topics, such as the Center on Long Term Risk (CLR) and the Future of Humanity Institute (FHI).

Overall, there seems to be a lot of longtermism research, but the amount that is going into what you could plausibly call cause prioritisation is small, and, with the possible but unclear exception of OpenPhil, progress in this space is minimal.

Now this is just one way of thinking through the work I would like to see, based on my subjective experience of navigating this community for the past decade, and I am sure it could be done differently. But overall I give the EA community a whopping 14/50 – 28% – for cause prioritisation research. Better than Titanic II (tagline: they said it couldn’t happen twice) but not quite as good as The Emoji Movie.

In short:

There is not nearly enough work in this space.

D. Why is this underinvested in and next steps

I think that this space needs new organisations (and/or existing organisations to significantly refocus in this direction). But before you swallow everything I have said hook, line and sinker, and head off to start a cause prioritisation organisation, I think we need to examine why this work might be underinvested in and what we can learn.

In the order I think they matter, some of the challenges are:

1. It is unclear what the theory of change would be for research organisations in this space.

Different organisations have different theories of change for research.

  • For a big funder (like Open Philanthropy) the theory of change is:
    do research → shift money.

  • For individual academics the theory of change is:
    do research → get published + have impact.

  • For organisations with a big audience (like 80000 Hours) the theory of change is:
    do research → influence audience.

But for a new organisation that solely focuses on doing the research it believes would be most useful for improving the world, it is unclear what the theory of change would be. Some options are:

  • Do research → build audience on quality of research → then influence audience

  • Do research + persuade other organisations to use your research → influence their audiences and money

These paths are valid but they involve a difficult extra step. Any organisation entering this space needs to be doing multiple things at once and needs to convince funders that it can create value from the research. For example, Let’s Fund has done some useful research but has struggled to demonstrate that it can turn research into money moved.

I do not have a magic solution to this. Ideally a new organisation in this space would have enough initial cause-neutral funding to allow a reasonable amount of research to be done to demonstrate effectiveness. One idea is to have some level of pre-commitment from a large funder (or from an organisation such as OpenPhil or 80K) that they would use the research. Another idea is to have good influencers on board from the start; for example, for policy research, having an ex-senior politician on board could help make the case that your research would be noticed – the Copenhagen Consensus seemed to start this way.

(Also, I have never worked in academia so there may be theories of change in the academic space that others could identify.)

2. It is difficult to compete with the existing organisations that are just not quite doing this.

I think one of the reasons why not enough has been done in this space is that organisations and individuals reach conclusions about what is most important for themselves (not necessarily in a way that is convincing to others) and then choose to focus on that.

For example, 80000 Hours have [edited: focused on specific] priority paths. The Future of Humanity Institute has focused heavily on AI, setting up the Centre for the Governance of AI. Even GiveWell used to have a broader remit before they focused in on global health. (There are of course advantages to focus: GiveWell’s focus led to them significantly improving their charity recommendations – they no longer recommend terrible approaches like microfinance – but it has limited exploration.)

I think people are hesitant to do something new if they think it is already being done, and funders want to know why the new thing is different. So the abundance of organisations that used to do cause prioritisation research, or that do research in some subcategory of it, deters other organisations from starting up.

My solution to this is to write this post to convince others that this work is not being done.

3. This work is not intractable but it is difficult.

This work is difficult. It is not like standard academic research, as it needs to pull in a vast variety of areas and topics, from ethics to economics to history to international relations. Finding polymaths who can compare across different interventions of different types is very difficult.

For example, the difficulty of finding good staff has clearly limited GiveWell’s ability to expand their research.

I suggest new organisations in this space might want to consider working differently, for example having a large budget for contracting top-quality research across different fields and a smaller number of paid staff.

I also suggest interdisciplinary input into drafting research agendas. (One economics student told me that, when reading the GPI research agenda, the economics parts read as if they were written by philosophers. Maybe this contributes to the lack of headway on their economics research plans.)

When drafting this post I began to wonder if such research is actually intractable. I think Paul’s arguments counter this somewhat, but the thing that gives me the most hope is that some of the best research in this space appears to be random posts from individuals on the EA Forum. For example: Growth and the case against randomista development; Reducing long-term risks from malevolent actors (part funded by CLR); Does climate change deserve more attention within EA; Increasing Access to Pain Relief in Developing Countries; and High Time For Drug Policy Reform. I am also impressed with new organisations such as the fledgling Happier Lives Institute, who are challenging the way we think about wellbeing. This makes me think there is likely a lot of tractable, important cause prioritisation research that could be done, and that the problem is a lack of effort, not tractability.

4. It is difficult to find cause-neutral funding.

I think funders like to choose their cause and stick with it, so there is a lack of cause-neutral funding.

For example, Rethink Priorities looked really exciting when it got started, with their co-founder expressing strong support for practical prioritisation research. But their research has mostly focused on animal welfare interventions, not on comparing between causes. They cite having to follow the funding as the main reason for this.

I think funders who have benefited from cause prioritisation research done to date should apportion a chunk of their future funding to support more such research.

In short:

There are a bunch of barriers to good cause prioritisation research. But I believe they can all be overcome, and they do not make a strong case that such research is intractable.

Conclusion

So there we have it, dear reader: my musings and thoughts on cause prioritisation, mixed in with a broad undercurrent of dissatisfaction with the EA community. Maybe I am just more jaded in my old age (early 30s), but I think I was more optimistic about the intellectual direction of the EA community when it had no power or influence nearly a decade ago. Intellectual progress in the field of doing good has been much slower than I hoped.

But I am an optimistic fellow. I do think we can make progress. There has been just enough traction to give me hope. It just needs a bit more effort, a bit more searching.

So, my request to you: either disagree with me, tell me that sufficient progress is happening, or change how you act in some small way. Be a bit more uncertain, a bit more willing to donate to, fund, or go into cause prioritisation research. And if you work in an EA org, please stop focusing so much on the cause areas you each believe are most important, and increase the amount of cause-neutral work and funding that you do.

I am considering starting a new organisation in this space with a focus on policy interventions. If you want to be involved or have ideas, or have some reason to think this is not actually a good use of my time, then comment below or message me.

And do comment. I want your thoughts, big or small. Most of my recent posts on this forum have received minimal comments.

Did you read the post by CarlaZoeC that I linked to above? I hope not, because they write better than me, so I am going to end by stealing their conclusion:

“EA is not your average activist group in the market-place of ideas on how to live. It has announced far greater ambitions: to research humanity’s future, to reduce sentient suffering and to navigate towards a stable world.”

“But if the ambition is great, the intellectual standards must match it. … Humanity lacks clarity on the nature of the Good, what constitutes a mature civilization or how to use technology. In contrast, EA appears to have suspiciously concrete answers.”

“I wish EA would more visibly respect the uncertainty they deal in. Indeed, some EAs are exemplary—some wear uncertainty like a badge of honour.… For them, EA is a quest, an attempt to approach big questions of valuable futures, existential risk and the good life, rather than implementing an answer. I wish this would be the norm. I wish all would enjoy and commit to the search, instead of pledging allegiance to preliminary answers. … [it is like this that we] have the best chance of succeeding in the EA quest.”

FOOTNOTES

[1] This is based on my experience of diving into a range of activism spaces, charity projects and other assorted communities of people trying to do good. It is very rare for people to think strategically about what to focus on to do the most good. GiveWell also make the case, in this post, that charitable foundations tend not to think this way.

[2] This experience did lead me to start an EA London charity evaluation giving circle for people who had strong moral intuitions that equality and justice were of value. Write up here.

[3] This sentence is a quote from the discussion about the value of diversity in the most recent 80K podcast. But for more on this I also recommend checking out In Defence of Epistemic Modesty.

[4] I accept this is somewhat caricatured, but I maintain that many people in EA fall close to these archetypes. (Except for the effective animal activism folk who nicely bridge this gap, maybe I should just go join them.)

[5] Look out for my upcoming report with CSER on this topic