The case of the missing cause prioritisation research
Introduction / summary
In 2011 I came across Giving What We Can, which shortly blossomed into effective altruism. Call me a geek if you like but I found it exciting, like really exciting. Here were people thinking super carefully about the most effective ways to have an impact, to create change, to build a better world. Suddenly a boundless opportunity to do vast amounts of good opened up before my eyes. I had only just got involved and, by giving to fund bednets, had already magnified my impact on the world 100 times.
And this was just the beginning. Obviously bednets were not the most effective charitable intervention; they were just the most effective we had found to date, with just a tiny amount of research. Imagine what topics could be explored next: the long-run effects of interventions, economic growth, political change, geopolitics, conflict studies, etc. We could work out how to compare charities across vastly different cause areas, or how to do good beyond donations (some people were already starting to talk about career choices). Some people said we should care about animals (or AI risk); I didn't buy it (back then), but imagine, we could work out which value sets lead to which causes and the best charities for each.
As far as I could tell the whole field of optimising for impact seemed vastly under-explored. This wasn't too surprising: most people don't seem to care that much about doing charitable giving well, and anyway it was only just coming to light how truly bad our intuitions were at making charitable choices (with the early-2000s aid skepticism movement).
Looking back, I was optimistic. Yet in some regards that optimism was well placed. In terms of spreading ideas, my small group of geeky uni friends went on to create something remarkable: to shift £millions, if not £billions, of donations to better causes, and to help thousands, maybe hundreds of thousands, of people make better career decisions. I am no longer surprised if a colleague, Tinder date or complete stranger has heard of effective altruism (EA) or gives money to AMF (a bednet charity).
However, in terms of the research I was so excited about, of developing the field of how to do good, there has been minimal progress. After nearly a decade, bednets and AI research still seem to be at the top of everyone's Christmas donations wish list. I think I assumed that someone had this covered, that GPI or FHI or whoever would have answers, or at least progress on cause research, sometime soon. But last month, whilst trying to review my career, I decided to look into this topic, and, oh boy, there just appears to be a massive gaping hole. I really don't think it is happening.
I don't particularly want to shift my career to do cause prioritisation research right now. So I am writing this piece in the hope that I can either have you, my dear reader, persuade me this work is not of utmost importance, or have me persuade you to do this work (so I don't have to).
A. The importance of cause prioritisation research
What is your view on the effective altruism community and what it has achieved? What is the single most important idea to come out of the community? Feel free to take a moment to reflect. (Answers on a postcard, or comment).
It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development and promotion of cause prioritisation as a concept. This idea seems (shockingly and unfortunately) unique to EA.[1] It underpins all EA thinking, guides where EA-aligned foundations give and leads to people seriously considering novel causes such as animal welfare or longtermism.
This post mostly focuses on the progress and neglectedness of this work over the past few years. But let us start with a quick recap of why cause prioritisation research might be important and tractable. The argument is nicely set out in Paul Christiano's The Case for Cause Prioritization as the Best Cause (written 2013-14). To give a short summary, Paul says:
1. Some causes are significantly higher impact than others. We theoretically expect and empirically observe impact to be "heavy-tailed", with some causes being orders of magnitude more impactful (see also Prospecting for Gold). We should not yet be confident in our top causes, and many of our current approaches to improving the world rely on highly speculative assumptions (e.g. about long-term effects). So if we could make progress on prioritisation we should expect to have a large positive impact.
2. It is reasonable to think that research would make progress because:
Very little research has been done on this so far.
The work that has been done suggests that progress is difficult but not impossible.
We can see research programs that could be useful (see some of my ideas below).
Human history reflects positively on our ability to build a collective understanding of a difficult subject and eventually make headway.
Even if difficult, we should at least try! We would learn why such research is hard and should keep going until we reach a point of diminishing returns.
(Also this week 80000 Hours has just written this: Why global priorities research is even more important than I thought)
In short:
Cause prioritisation is hugely valuable to guide how we do good.
B. The case of the missing cause prioritisation research
Let me take you through my story, and set out some of the research gaps as I have experienced them.
Community building
From 2013 until 2017 I ran the EA community in London. I set myself the goal of building a vibrant, welcoming and cohesive community, and I like to think I did OK. But occasionally the intellectual framework was just not there. For a while I might say "we are a new community, we don't yet have the answer to this", but after a few years the excuse got thin. The research on specific cause areas got deeper, but the cause prioritisation research did not. In particular I struggled to provide materials to people whose thinking did not fall close to classical utilitarian lines.[2]
And it was damaging. It is damaging. More and more, as I look across the EA movement, I see that the people who join are not open-minded souls keen to understand what it means to do the most good, but people who are already focused on the causes we champion: global development or animal welfare or preventing extinction risk. Now I love my cause-committed compatriots, but I do think we are at risk of creating a community that is unwelcoming to the true explorers, a community that is intellectually entrenched and forever doomed to see only those three cause areas.
I think we need to do cause prioritisation from the point of view of different value sets and different cultures. This is important for building a good community, especially for spreading to other countries (as discussed here and here). This is also important for reaching truth. Different people with different life experiences will not only ask different questions, but have different hypotheses about what the answers might be.[3]
I could say more on this, but honestly I think most of it is covered in the amazing post Objections to value alignment between EAs by CarlaZoeC, which I recommend you check out.
Parliament
One thing I notice is that, with few exceptions, the path to change for EA folk who want to improve the long-run future is research. They work at research institutions, design AI systems, fund research, support research. Those that do not do research seem to be trying to accumulate power or wealth or CV points in the vague hope that at some point the researchers will know what needs doing.
After community building I moved back into policy, and most recently have found myself building support for future generations in the UK Parliament. Not research. Not waiting. But creating change.
From this vantage point it doesn't feel like the EA community has thought much about policy. For example, there is a huge focus on AI policy, but the justification for this is weak. Even if you fully believe the longtermist arguments that top programmers should work on AI alignment, it does not immediately follow that good policy people can have more long-term impact in AI policy compared to policy on resilience, macroeconomics, institution design, nuclear non-proliferation, climate change, democracy promotion, political polarisation, etc, etc.
Most of the cause prioritisation research has been focused on how to do good with money. But there is very little on how to do good if you have political capital, public status, media influence and so on. Trying to weigh up and compare all the different policy approaches I list above would be a mighty undertaking and I do not expect answers soon, but it would be nice to see someone trying to take on the task, and not focusing solely on where to shift money.
My own values
Most recently I have been thinking about what career route to go down next, what my values are, and what has been written on cause prioritisation.
Looking around, it feels like there is a split down the middle of the EA community:[4]
On the one hand you have the empiricals: those who believe that doing good is difficult, that common sense leads you astray, and that to create change we need hard data, ideally at least a few RCTs.
On the other side are the theorists: those who believe you just need to think really hard, that to choose a cause we need expected value calculations, and that it matters not if the calculations are highly uncertain so long as the numbers tend to infinity.
Personally I find myself somewhat drawn to the uncharted middle ground. Call me indecisive if you like but it appears to me that both ends of this spectrum are making errors in judgement. Certainly neither of the approaches above comes close to how well-run government institutions or large successful corporations make decisions.
(I also don't think these two camps are as far apart as it first seems. If you look at the structural change and policy research GiveWell is interested in, it is not too far away from longtermist research suggestions on institutional change.)
I think this split provides a way of breaking down the work I would love to see:
Beyond RCTs – It would be lovely to see the "empiricals" crew move beyond basic global health, to have them say "great, we have shown that you can, despite the challenges, identify interventions that work and compare them. Now let's get a bit more complicated, do some more research, find other interventions, consider long-run effects and so on". There could be research looking for strong empirical evidence on:
the second order or long run effects of existing interventions.
how to drive economic growth, policy change, structural changes, and so forth.
unexplored areas that could be highly impactful, such as access to painkillers or mental health. (There could be experimental hits-based giving.)
It honestly shocks me that the EA community has made so little progress in this space in a decade.
Beyond speculation – It would be great if the "theorists" looked a bit more at making their claims more credible. From my point of view, I could save a human life for ~£3,000. I don't want to let kids die needlessly if I can stop it. I personally think that the future is really important, but before I drop the ball on all the things I know will have an impact it would be nice to have:
Some evidence that we can reliably affect the future: What empirical evidence is there that we can reliably impact the long run trajectory of humanity and how have similar efforts gone in the past?
Cause and intervention prioritization. What are the options, the causes and interventions to influence the long-term, which of these can be practically impacted, have feedback loops that can be used for judging success, and so forth? I would love to see more comparisons of causes like improving institutions, increasing economic growth, global conflict prevention, etc.
Less dodgy reasoning. I am not going to go into all the errors, groupthink, and mistakes that I think EA longtermists often make here. Let me give just one example: if you look at best practice in risk assessment methodologies[5] it looks very different from the naive expected value calculations used in EA. If someone tells me to dedicate my life to stopping global risks, it would be good if I was confident they actually understood risk mitigation. I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision making under deep uncertainty (or Knightian uncertainty) used in policy design, military decision making and climate science but rarely discussed in EA (a toy illustration follows below).
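To make that contrast concrete, here is a minimal sketch with invented numbers. It compares a naive expected-value ranking against a worst-case (maximin) rule, one of several decision rules from the deep-uncertainty literature; the interventions, payoffs and probabilities are all hypothetical.

```python
# Minimal sketch (invented numbers): naive expected value vs a worst-case
# (maximin) rule, one of several rules used when scenario probabilities
# are too speculative to trust.

payoffs = {
    "intervention_A": [10, 10, 10],   # robustly decent in every scenario
    "intervention_B": [-5, -5, 50],   # a long shot that only pays off in scenario 3
}
guessed_probs = [0.3, 0.3, 0.4]       # speculative scenario probabilities

def expected_value(outcomes):
    return sum(p * x for p, x in zip(guessed_probs, outcomes))

def worst_case(outcomes):
    return min(outcomes)

ev_pick = max(payoffs, key=lambda k: expected_value(payoffs[k]))
robust_pick = max(payoffs, key=lambda k: worst_case(payoffs[k]))
print("Naive expected value picks:", ev_pick)      # B (EV 17 vs A's 10)
print("Worst-case (maximin) picks:", robust_pick)  # A (worst case 10 vs B's -5)
```

The point is not that the robust rule is right; it is that the two rules can disagree, and it would be good to understand why before betting a career on the expected-value answer.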
In short:
You could categorise this research in a bunch of different ways but if I had to make a list the projects I would be super excited to see are:
The basics: I think we could see progress just by doing investigations of a broad range of different potentially top causes and comparisons across causes. (The search for "cause X").
Consideration of different views and ethics and how this affects what causes might be most important.
Consideration of how to prioritise depending on the type of power you have, be it money or political power or media influence or something else.
Empirical cause selection beyond RCTs. The impact of system change and policy change in international development and more consideration of second order effects.
Theoretical cause selection beyond speculation. Evidence of how to reason well despite uncertainty and more comparisons of different causes.
This research would ensure that we continue to learn how to do good, do not become entrenched in our ways, and take the actions that will have the biggest impact on the world.
C. Whodunnit?
So is anyone doing this? Let's run through my list.
[Edit: disclaimer, I have looked through organisations' plans, research agendas and so forth and done the best I can, but I did not invest time in talking to people at all the organisations in this space, so it is possible I may have mischaracterised specific organisations compared to how they would describe themselves. Apologies.]
1. The basics – partially happening – 5/10
Shallow investigations of how to do good within a few cause areas are being done by Open Philanthropy Project (OpenPhil) and to a lesser extent by Founders Pledge (FP). The main missing part is that there is little written that compares across these different causes or looks at how one might prioritise one cause over another (except for occasional mentions in the FP reports and the OpenPhil spreadsheets here and here).
More granular, but still high level intervention research is being done by Charity Entrepreneurship.
2. Different views – not happening – 0/10
No organisation is doing this. There is no systematic work in this space. The most that is going on is a few individuals or small groups that have taken up specific approaches (still largely hedonistic utilitarianism adjacent) and run with it, such as the Happier Lives Institute (HLI) or the Organisation for the Prevention of Intense Suffering (OPIS).
3. Policy and beyond – not happening – 2/10
No organisation is doing research into how to prioritise if you have political power or media influence or something other than money. 80000 Hours (80K) appeared to do some of this in the past but are now focusing on their priority paths. They have said that the details of what those paths are may change. It is unclear if such changes indicate that they will do more research themselves or if they expect to change in light of others' research. Either way the rough direction feels fairly set, so I do not expect much more high-level cause prioritisation research from them soon.
4. Beyond RCTs – not happening – 1/10
GiveWell keeps setting out plans to expand the scope of their research (see 2018 plans and 2019 plans) and, in their own words, they "failed to achieve this goal" (see 2018 review and 2019 review). When asked they said that "We primarily attribute this to having a limited number of staff who were positioned to conduct this work, and those staff having many competing demands on their time … we are continuing to hire and expect this will enable us to make additional progress in new areas." I am not super optimistic, given their 2020 plan for new research is less ambitious than previously insofar as it focuses solely on public health.
Open Philanthropy are mostly deferring to GiveWell, although they express support for GiveWell's unmaterialised plans to expand their research, and they are funding the Center for Global Development's policy work. The only useful new research in this space seems to be a small amount of work from Founders Pledge; it is unclear to what extent they plan to do more work in this area.
5. Beyond speculation (practical longtermism) – partially happening – 6/10
The best source of research and experimentation in this space is again OpenPhil. They are experimenting with trying to influence policy related to the far future and doing research on topics relevant to longtermism. However, as already highlighted, it is unclear how OpenPhil are comparing different causes, as opposed to looking out for giving opportunities across a variety of causes and seeing what they can fund and what the impact of that will be.
The Global Priorities Institute (GPI) are looking to improve the quality of thinking in this space. They have so far produced only philosophy papers. It is useful stuff and valuable for building traction in academia, but personally I am pretty sceptical about humans solving philosophy soon and would rather have some answers within the next few decades.
There are a few others doing small amounts of research on specific topics such as Center on Long Term Risk (CLR) and Future of Humanity Institute (FHI).
Overall there seems to be a lot of longtermism research, but the amount going into what you could plausibly call cause prioritisation is small and, with the possible but unclear exception of OpenPhil, progress in this space is minimal.
Now, this is just one way of thinking through the work I would like to see, based on my subjective experience of navigating this community for the past decade, and I am sure it could be done differently. But overall I give the EA community a whopping 28% for cause prioritisation research. Better than Titanic II (tagline: they said it couldn't happen twice) but not quite as good as The Emoji Movie.
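(A quick check of the arithmetic, assuming the headline figure is simply the five scores above, each out of ten, combined:)

```python
# Assuming the 28% is just the five section scores above combined:
scores = {"the basics": 5, "different views": 0, "policy and beyond": 2,
          "beyond RCTs": 1, "beyond speculation": 6}
total, possible = sum(scores.values()), 10 * len(scores)
print(f"{total}/{possible} = {total / possible:.0%}")  # 14/50 = 28%
```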
In short:
There is not nearly enough work in this space.
D. Why is this underinvested in and next steps
I think that this space needs new organisations (and/or existing organisations to significantly refocus in this direction). But before you swallow everything I have said hook, line and sinker and head off to start a cause prioritisation organisation, I think we need to examine why this work might be underinvested in and what we can learn.
In what I think is their order of importance, some of the challenges are:
1. It is unclear what the theory of change would be for research organisations in this space.
Different organisations have different theories of change for research.
For a big funder (like Open Philanthropy) the theory of change is: do research → shift money.
For individual academics the theory of change is: do research → get published + have impact.
For organisations with a big audience (like 80000 Hours) the theory of change is: do research → influence audience.
But for a new organisation that focuses solely on doing the research it believes would be most useful for improving the world, it is unclear what the theory of change would be. Some options are:
Do research → build audience on quality of research → then influence audience
Do research + persuade other organisations to use your research → influence their audiences and money
These paths are valid but they have a difficult extra step. Any organisation entering this space needs to be doing multiple things at once and needs to convince funders that they can create value from the research. For example, Let's Fund has done some useful research but struggled to demonstrate that they can turn research into money moved.
I do not have a magic solution to this. Ideally a new organisation in this space would have enough initial cause-neutral funding to allow a reasonable amount of research to be done to demonstrate effectiveness. One idea is to have some level of pre-commitment from a large funder (or from an organisation such as OpenPhil or 80K) that they would use the research. Another idea is to have good influencers on board at the start; for example, for policy research, having an ex-senior politician on board could help make the case that your research would be noticed (the Copenhagen Consensus seemed to start this way).
(Also, I have never worked in academia so there may be theories of change in the academic space that others could identify.)
2. It is difficult to compete with the existing organisations that are just not quite doing this.
I think one of the reasons why not enough has been done in this space is that organisations and individuals reach conclusions about what is most important for themselves (not necessarily in a way that is convincing to others) and then choose to focus on that.
For example, 80000 Hours have [edited: focused on specific] priority paths. The Future of Humanity Institute has focused heavily on AI, setting up the Centre for the Governance of AI. Even GiveWell used to have a broader remit before they focused in on global health. (There are of course advantages to focus. For example, GiveWell's focus led to them significantly improving their charity recommendations, so they no longer recommend terrible approaches like microfinance, but it has limited exploration.)
I think that people are hesitant to do something new if they think it is already being done, and funders want to know why the new thing is different. So the abundance of organisations that used to do cause prioritisation research, or that do research in a subcategory of it, deters other organisations from starting up.
My solution to this is to write this post to convince others that this work is not being done.
3. This work is not intractable but it is difficult
This work is difficult. It is not like standard academic research as it needs to pull in a vast variety of different areas and topics, from ethics, to economics, to history, to international relations. Finding polymaths to compare across different interventions of different types is very difficult.
For example, difficulty finding good staff has clearly limited GiveWell's ability to expand their research.
I suggest new organisations in this space might want to consider working differently, for example having a large budget for contracting top-quality research across different fields and a smaller number of paid staff.
I also suggest interdisciplinary input into drafting research agendas. (One economics student told me that when reading the GPI research agenda, the economics parts read as if they were written by philosophers. Maybe this contributes to the lack of headway on their economics research plans.)
When drafting this post I began to wonder if such research is actually intractable. I think Paul's arguments counter this somewhat, but the thing that gives me the most hope is that some of the best research in this space appears to be random posts from individuals on the EA Forum. For example: Growth and the case against randomista development, Reducing long-term risks from malevolent actors (part funded by CLR), Does climate change deserve more attention within EA, Increasing Access to Pain Relief in Developing Countries, and High Time For Drug Policy Reform. I am also impressed with new organisations such as the fledgling Happier Lives Institute, who are challenging the way we think about wellbeing. This makes me think there is likely a lot of tractable, important cause prioritisation research that could be done, and the problem is a lack of effort, not tractability.
4. It is difficult to find cause neutral funding.
I think funders like to choose their cause and stick with it so there is a lack of cause neutral funding.
For example Rethink Priorities looked really exciting when it got started with their co-founder expressing strong support for practical prioritisation research. But their research has mostly focused on animal welfare interventions, not on comparing between causes. They cite having to follow the funding as the main reason for this.
I think funders who have benefited from cause prioritisation research done to date should apportion a chunk of their future funding to support more such research.
In short:
There are a bunch of barriers to good cause prioritisation research. But I believe they can all be overcome, and they do not make a strong case that such research is intractable.
Conclusion
So there we have it, dear reader: my musings and thoughts on cause prioritisation, mixed in with a broad undercurrent of dissatisfaction with the EA community. Maybe I am just more jaded in my old age (early 30s), but I think I was more optimistic about the intellectual direction of the EA community when it had no power or influence nearly a decade ago. Intellectual progress in the field of doing good has been much slower than I hoped.
But I am an optimistic fellow. I do think we can make progress. There has been just enough traction to give me hope. It just needs a bit more effort, a bit more searching.
So my request to you: either disagree with me, tell me that sufficient progress is happening, or change how you act in some small way. Be a bit more uncertain, a bit more willing to donate to fund cause prioritisation research, or to go into it yourself. And if you work at an EA org, please stop focusing so much on the cause areas you each believe are most important and increase the amount of cause-neutral work and funding that you do.
I am considering starting a new organisation in this space with a focus on policy interventions. If you want to be involved or have ideas, or have some reason to think this is not actually a good use of my time, then comment below or message me.
And do comment. I want your thoughts big or small. Most of my recent posts on this forum had minimal comments.
Did you read the post by CarlaZoeC that I linked to above? I hope not, because they write better than me, so I am going to end by stealing their conclusion:
"EA is not your average activist group on the market-place on ideas on how to live. It has announced far greater ambitions: to research humanity's future, to reduce sentient suffering and to navigate towards a stable world"
"But if the ambition is great, the intellectual standards must match it. … Humanity lacks clarity on the nature of the Good, what constitutes a mature civilization or how to use technology. In contrast, EA appears to have suspiciously concrete answers."
"I wish EA would more visibly respect the uncertainty they deal in. Indeed, some EAs are exemplary – some wear uncertainty like a badge of honour. … For them, EA is a quest, an attempt to approach big questions of valuable futures, existential risk and the good life, rather than implementing an answer. I wish this would be the norm. I wish all would enjoy and commit to the search, instead of pledging allegiance to preliminary answers. … [it is like that that we] have the best chance of succeeding in the EA quest."
FOOTNOTES
[1] This is based on my experience of diving into a range of activism spaces, charity projects and other assorted communities of people trying to do good. It is very rare for people to think strategically about what to focus on to do the most good. GiveWell also make the case that charitable foundations tend not to think this way in this post.
[2] This experience did lead me to start an EA London charity evaluation giving circle for people who had strong moral intuitions that equality and justice were of value. Write up here.
[3] This sentence is a quote from the discussion about the value of diversity in the most recent 80K podcast. But for more on this I also recommend checking out In Defence of Epistemic Modesty.
[4] I accept this is somewhat caricatured, but I maintain that many people in EA fall close to these archetypes. (Except for the effective animal activism folk who nicely bridge this gap, maybe I should just go join them.)
[5] Look out for my upcoming report with CSER on this topic
Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that's predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.
One point I'd like to raise, though: I don't know what you're looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about "cause prioritization". So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.
In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn't exist is that, when people try to start doing this, they typically conclude it isn't actually the most helpful way to shed light on which cause EA actors should focus on.
I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent all the levers you're considering and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a "naïve" government's approach to prioritizing between, say, increasing this year's GDP and decreasing this year's carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the lowering-emissions side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean
less consumption this year,
less climate damage next year,
less accumulated capital next year with which to mitigate climate damage,
more of an incentive for people next year to allow more emissions,
more predictable weather and therefore easier production next year,
…but this might mean more (or less) emissions next year,
…and so on.
It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an "integrated assessment model". What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you're, say, a government in 2020, you just solve for the policy (the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider) that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I'd say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
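To give a feel for the shape of this exercise, here is a deliberately crude sketch: a made-up two-period "economy" with a single lever (how much of this year's output to divert into cutting emissions), solved by grid search. Every functional form and parameter is invented for illustration; it is a toy in the spirit of an IAM, not an actual one and not from the papers mentioned below.

```python
import math

# Toy two-period model: one lever (the share of this year's output diverted
# from consumption to cutting emissions), an invented damage function, and a
# discounted-welfare objective. All numbers are made up for illustration.
def welfare(abatement_share, output=100.0, leverage=3.0, damage_coeff=0.3, discount=0.97):
    consumption_now = output * (1 - abatement_share)               # less consumption this year
    emissions = output * max(0.0, 1 - leverage * abatement_share)  # abatement cuts emissions
    consumption_next = output - damage_coeff * emissions           # climate damage next year
    return math.log(consumption_now) + discount * math.log(consumption_next)

# "Solve" for the lever setting that maximizes the objective function.
candidates = [i / 100 for i in range(100)]
best = max(candidates, key=welfare)
print(f"Illustrative optimal abatement share: {best:.2f}")
```

A real IAM has many more levers and moving parts, but the workflow is the same: write down the world in low resolution, then optimise over the levers.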
If the project of "writing down stylized models of the world and solving for the optimal thing for EAs to do in them" counts as cause prioritization, I'd say two projects I've had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner's paper on existential risk and growth. Anyway, I don't mean to plug these projects in particular, I just want to make the case that they're examples of a class of work that is being done to some extent and that should count as prioritization research.
…And examples of what GPI will hopefully soon be fostering more of, for whatever that's worth! It's all philosophy so far, I know, but my paper and Leo's are going on the GPI website once they're just a bit more polished. And we've just hired two econ postdocs I'm really excited about, so we'll see what they come up with.
Hey Phil. I'm someone who is very interested in the work of GPI and am impressed by what I have seen so far. I'm looking forward to seeing what the new economists get up to!
I had a look at Leopold's paper a while back, have listened to you on the 80K podcast and have watched a few of GPI's videos including Christian Tarsney's one on the epistemic challenge to longtermism. I notice that in a lot of this research, key results are highly sensitive to the value of certain parameters. My memory is slightly hazy on specifics but I think in Christian's paper the validity of longtermism depends largely on the existence and frequency of exogenous nullifying events (ENEs) that can essentially wipe out any trajectory change efforts that came before (apologies if I'm not being perfectly accurate here).
I am wondering if empirical estimation of key parameters is a gap in current cause prioritisation research. Because the value of these parameters is so important in determining results from the models, it seems very high-value to more accurately estimate these parameters. Do you know if anyone is actually doing this? Is anyone for example looking into the nature of ENEs? Is this something new economists at GPI might engage in? If this type of research isn't suitable for GPI, does GPI need closer links to other research institutions that are interested in carrying out more empirical research?
Thanks! I agree that people in EA (including Christian, Leopold, and myself) have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don't think this is what either of the current new economists will engage in anytime soon, unfortunately. But I don't think it would be outside a GPI economist's remit, especially once we've grown.
OK that's good to hear. It probably makes sense to spend some time laying a solid theoretical base to build on. I'm aware of how new GPI still is so I'm looking forward to seeing how things progress!
Hi, Thank you for this really helpful comment. It was really interesting to read about how you work on cause prioritisation research and use IAMs. Glad that GPI will be expanding.
I think this is one of the most important things we can be doing. Maybe even the most important since it covers such a wide area and so much government policy is so far from optimal.
I don't think that's right. I've written about what it means for a system to do "the optimal thing", and the answer cannot be that a single policy maximizes your objective function:
Unless by "policy" you mean "the entirety of what government does", then yes. But given that you're going to consider one area at a time, and you're "only including all the levers you're considering", you could reach a local optimum rather than a truly ideal end state. The way I like to think about it is "How would a system for prisons (for example) be in the best possible future?" This is not necessarily going to be the system that does the greatest good at the margin when constrained to the domain you're considering (though it often is). Rather than thinking about a system maximizing your objective function, it's better to think of systems as satisfying goals that are aligned with your objective function.
I wonder if we could create an open source library of IAMs for researchers and EAs to use and audit.
At a glance, Salesforce's AI Economist seems like an attempted implementation of an IAM.
Thanks for the post! Much of it resonated with me.
A few quick thoughts:
1. I could see some reads of this being something like, "EA researchers are doing a bad job and should feel bad." I wouldn't agree with this (mainly the latter bit) and assume the author wouldn't either. Lots of EAs I know seem to be doing about the best that they know of and have a lot of challenges they are working to overcome.
2. I've had some similar frustrations over the last few years. I think that there is a fair bit of obvious cause prioritization research to be done that's getting relatively little attention. I'm not as confident as you seem to be about this, but agree it seems to be an issue.
3. I would categorize many of the issues as systemic across different sectors. I think significant effort in these areas would require bold efforts with significant human and financial capital, and such clusters are rare. Right now the funding situation is still quite messy for ventures outside the core OpenPhil cause areas.
I could see an academic initiative taking some of them on, but that would be a significant undertaking from at least one senior academic who may have to take a major risk to do so. Right now we have a few senior academics who led/created the existing main academic/EA clusters, and these projects were very tied to the circumstances of the senior people.
If you want a job in Academia, it's risky to do things outside the common tracks, and if you want one outside of Academia, it's often riskier. One in-between is making new small nonprofits. This is also a significant undertaking however. The funding situation for small ongoing efforts is currently quite messy; these are often too small for OpenPhil but too big for EA Funds.
4. One reason why funding is messy is that it's thought that groups doing a bad job at these topics could be net negative. Thus, few people are trusted to lead important research in new areas that are core to EA. This could probably be improved with significantly more vetting, but this takes a lot of time. Now that I think about it, OpenPhil has very intensive vetting for their hires, and these are just hires; after they are hired they get managers and can be closely worked with. If a funder funds a totally new research initiative, they will have a vastly lower amount of control (or understanding) over it than organizations do over their employees. Right now we don't have organizations around who can do near hiring-level amounts of funding for small initiatives; perhaps we should, though.
5. We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding. Right now a whole lot of great ones are focused on AI (this often requires many years of grad school or training) and Animals. My impression is that on the margin, moving some people from these fields to other fields (cause prioritization or experimental new things) could be good, though a big change to several individuals.
6. It seems really difficult to convince committed researchers to change fields. They often have taken years to develop expertise, connections, and citations, so changing that completely is very costly. An alternative is to focus on young, new people, but those people take a while to mature as researchers.
In EA we just don't have many "great generic researchers" whom we can reassign from one topic to something very different on short notice. More of this seems great to me, but it's tricky to set up and attract talent for.
7. I think it's possible that older/experienced researchers don't want to change careers, and new ones aren't trusted with funding. Looking back I'm quite happy that Elie and Holden started GiveWell without feeling like they needed to work in an existing org for 4 years first. I'm not sure what to do here, but would like to see more bets on smart young people.
8. I think there are several interesting "gaps" in EA and am sure that most others would agree. Solving them is quite challenging; it could require a mix of coordination, effort, networking, and thinking. I'd love to see some senior people try to do work like this full-time. In general I'd love to see more "EA researcher/funding coordination"; that seems like the root of a lot of our problems.
9. I think Rethink Priorities has a pretty great model and could be well suited to these kinds of problems. My impression is funding has been a bottleneck for them. I think that Peter may respond to this, so he can speak to it directly. If there are funders out there who are excited to fund any of the kinds of work described in this article, I'd suggest reaching out to Rethink Priorities and seeing if they could facilitate that. They would be my best bet for that kind of arrangement at the moment.
10. Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I'm working on), but they will take some time, and obviously aren't direct work on the issue.
Thank you Ozzie. Very, very helpful. To respond:
1. EA researchers are doing a great job. Much kudos to them. Fully agree with you on that. I think this is mostly a coordination issue.
3. Agree a messy funding situation is a problem. Not so sure there is that big a gap between groups funded by EA Funds and groups funded by OpenPhil.
4. Maybe we should worry less about "groups doing a bad job at these topics could be net negative". I am not a big donor so find this hard to judge well. Also I am all for funding well-evidenced projects (see my skepticism below about funding "smart young people"). But I am not convinced that we should be that worried that research on this will lead to harm, except in a few very specific cases. Poor research will likely just be ignored. Also most foundations vet staff more carefully than they vet the projects they fund.
5-6. Agree research leaders are rare (hopefully this inspires them). Disagree that junior researchers are rare. You said: "We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding" and "It seems really difficult to convince committed researchers to change fields". Very good points. That said, I think Rethink Priorities have been positively surprised at how many very high-quality applicants they have had for research roles. So maybe the junior researchers are there. My hope is that this post inspires some people to set up more organisations working in this space.
7. Not so sure about "more bets on smart young people". Not sure I agree. I tend to prefer giving to or hiring people with experience or evidence of traction. But I don't have a strong view and would change my mind if there was good evidence on this. There might also be ways to test less experienced people before funding them, like through a "Charity Entrepreneurship"-type fellowship scheme.
8. I'd love to have more of your views on what "EA researcher/funding coordination" looks like, as I could maybe make it happen. I am a Trustee of EA London. EA London is already doing a lot of global coordination of EA work (especially under COVID). I have been thinking and talking to David (EA London staff) about scaling this up, hiring a second person, etc. If you have a clear vision of what this might look like or what it could add, I would consider pushing more on this.
9. Rethink Priorities is OK. I have donated to them in the past but might stop, as I am not sure they are making much headway on the issues listed here. Peter said: "I think we definitely do 'Beyond speculation (practical longtermism)' … So far we've mainly been favoring within-cause intervention prioritization".
10. Good luck with your work on forecasting efforts.
Thanks for the response!
Quick responses:
4. I haven't investigated this much myself; I was relaying what I know from donors (I don't donate myself). I've heard a few times that OpenPhil and some of the donors behind EA Funds are quite worried about negative effects. My impression is that the reason for some of this is simple, but there are some more complicated reasons that go into the thinking here that haven't been written up fully. I think Oliver Habryka has a bunch of views here.
5-6. I didn't mean to imply that junior researchers are "rare", just that they are limited in number (which is obvious). My impression is that there's currently a bottleneck in giving the very junior researchers experience and reputability, which is unfortunate. This is evidenced by Rethink's round. I think there may be a fair amount of variation in these researchers though; only a few are really the kinds who could pioneer a new area (this requires a lot of skills and special career risks).
7. I'm also really unsure about this. Though to be fair, I'm unsure about a lot of things. To be clear though, I think that there are probably rather few people this would be a good fit for.
I'm really curious just how impressive the original EA founders were compared to all the new EAs. There are way more young EAs now than there were in the early days, so theoretically we should expect that some will be in many ways more competent than the original EA founders, minus the experience of course.
Part of me wonders: if we don't see a few obvious candidates for young EA researchers as influential as the founders were in the next few years, maybe something is going quite wrong. My guess is that we should aim to resemble other groups that are very meritocratic in terms of general leadership and research.
8. Happy to discuss in person. They would take a while to organize and write up.
The very simple thing here is that to me, we really could use "funding work" of all types. OpenPhil still employs a very limited headcount given their resources, and EA Funds is mostly made up of volunteers. Distributing money well is a lot of work, and there currently aren't many resources going into this.
One big challenge is that not many people are trusted to do this work, in part because of the expected negative impacts of funding bad things. So there's a small group trusted to do this work, and a smaller subset of them interested in spending time doing it.
I would love to see more groups help coordinate, especially if they could be accepted by the major donors and community. I think there's a high bar here, but if you can be over it, it can be very valuable.
I'd also recommend talking to the team at EA Funds, which is currently growing.
9. This could be worth discussing further. RP is still quite early and developing. If you have suggestions about how it could improve, I'd be excited to have discussions on that. I could imagine us helping change it in positive directions going forward.
10. Thanks!
Excellent comment.
Do you have a list of the top research areas you'd like to see that aren't getting done?
I agree. Forecasting is a common good to many causes, so you'd expect it not to be neglected. But in practice, it seems the only people working on forecasting are EA or EA-adjacent (I'd count Tetlock as adjacent). Recently I've had many empirical questions about the future that I thought could use good forecasts; e.g., for this essay I wrote, I made some Metaculus questions and used those to help inform the essay. It would be really helpful if it were easier to get good forecasts.
Oh boy. I've had a bunch of things in the back of my mind. Some of this is kind of personal (specific to my own high-level beliefs, so it wouldn't apply to many others).
I'm a longtermist and believe that most of the expected value will happen in the far future. Because of that, many of the existing global poverty, animal welfare, and criminal justice reform interventions don't seem particularly exciting to me. I'm unsure what to think of AI Risk, but "unsure" is much, much better than "seems highly unlikely." I think it's safe to have some great people here, but currently get the impression that a huge number of EAs are getting into this field, and this seems like too many to me on the margin.
What I'm getting at is: when you exclude most of poverty, animal welfare, criminal justice reform, and AI, there's not a huge amount getting worked on in EA at the moment.
I think I don't quite buy the argument that the only long-term interventions to consider are ones that address X-risks in the next ~30 years, nor the argument that the only interventions worth considering are ones that address X-risks. I think it's fairly likely (>20%) that sentient life will survive for at least billions of years, and that there may be a fair amount of lock-in, so changing the trajectory of things could be great.
I like the idea of building "resilience" instead of going after specific causes. For instance, if we spend all of our attention on bio risks, AI risks, and nuclear risks, it's possible that something else weird will cause catastrophe in 15 years. So experimenting with broad interventions that seem "good no matter what" seems interesting. For example, if we could have effective government infrastructure, or general disaster response, or a more powerful EA movement, those would all be generally useful things.
I like Phil's work (above comment) and think it should get more attention, quickly. Figuring out and implementing an actual plan that optimizes for the long-term future seems like a ton of work to me.
I really would like to see more "weird stuff." Ten years ago many of the original EA ideas seemed bizarre, like treating AI risk as highly important. I would hope that with 10-100x as many people, we'd have another few multiples of weird but exciting ideas. I'm seeing a few of them now but would like more.
Better estimation, high-level investigation, prioritization, data infrastructure, etc. seem great to me.
Maybe one way to put it would be something like, imagine clusters of ideas as unique as those of Center on Long-Term Risk, Qualia Computing, the Center for Election Science, etc. I want to see a lot more clusters like these.
Some quick ideas:
- Political action for all long term things still seems very neglected and new to me, as mentioned in this post.
- A lot of the prioritization work, even of the "Let's just estimate a lot of things to get expected values" variety.
- I'd like to see research into ways AI could make the world much better/safer; the most exciting part to me is how it could help us reason in better ways, pre-AGI, and what that could lead to.
- Most EA organizations wouldn't upset anyone (they are net positives for everyone), but many things we may want would. For instance, political action, or potential action to prevent bio or AI companies from doing specific things. I could imagine groups like "slightly secretive strategic agencies" that go around doing valuable things having a lot of possible benefit (but of course significant downsides if done poorly).
- This is close to me, but I'm curious if open-source technologies could be exciting philanthropic investments. I think the donation to Roam may have gone extremely well, and am continually impressed and surprised by how little money there is in incredible but very early or experimental efforts online. Ideally this kind of work would include getting lots of money from non-EAs.
- In general, trying to encourage EA-style thinking in non-EA ventures could be great. There's tons of philanthropic money being spent outside EA. The top few tech billionaires just dramatically increased their net worths in the last few months; many will likely spend those fortunes eventually.
- I really care about growing the size and improving the average experience of the EA community. I think there's a ton of work to be done here of many shapes and forms.
- I think many important problems that feel like they should be tackled in Academia aren't, due to various systemic reasons. If we could produce researchers who do "the useful things, very well", either in Academia or outside, that could be valuable, even in seemingly unrelated fields like anthropology, political science, or targeted medicine (fixing RSI, for instance). "Elephant in the Brain"-style work comes to mind.
- On that note, having 1-2 community members do nothing but work on RSI, back, and related physical health problems for EAs/rationalists could be highly worthwhile at this point. We already have a few specific psychologists and a productivity coach. Maybe eventually there could be 10-40+ people running a mini-industry of services tailored to these communities.
- Unlikely idea: insect farms. Breed and experiment with insects or other small animals in ways that seem to produce the most well-being for the lowest cost. Almost definitely not that productive, but good for diversification, and possibly reasonably cheap to try for a few years.
- Much better EA funding infrastructure, in part for long-term funding.
- Investigation and action to reform/improve the UN and other global leadership structures.
- I'm curious about using extensive Facebook ads, memes, YouTube sponsorship, and similar, both to encourage effective altruism and to encourage ideas we think are net valuable. These things can be highly scalable.
Also, I'd be curious to hear suggestions from you and others here.
This is a really good comment.
I would like to see more of this, and I would also like to see people be less uniformly critical of this sort of work. I've written a few things like this, and I inevitably get a few comments along the lines of, "This estimate isn't actually accurate, you can't know the true expected value, this research is a waste of time." In my experience I get much more strongly negative comments when I write anything quantitative than when I don't. But I might just be noticing that type of criticism more than other types.
The rate of institutional value drift is something like 0.5%. Halving this would be extremely beneficial for anyone who wants to invest their money for future generations. It seems likely that if we put more effort into designing stable institutions, we could create EA investment funds that last for much longer.
The rate of individual value drift is even higher, something around 5%. That's really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?
Some other neglected problems (with some shameless references to my own writings):
I like GPI's research agenda. Right now there are only about half a dozen people working on these problems.
What is the correct "philosophy of priors"? The choice of prior distribution heavily affects how we should behave in areas of high uncertainty. For example, see Will MacAskill's post and Toby Ord's reply. (Edit: see also this relevant post.)
With a simple model, I calculated that improving our estimate of the discount rate could matter more than any particular cause. The rationale is that we should spend our resources at some optimal rate, which is largely determined by the philanthropic discount rate. Moving our spending schedule slightly closer to the optimal rate substantially increases expected utility. This is just based on a simple model, but I'd like to see more work on this. (A rough illustrative sketch follows this list.)
In the conclusion of the same essay, I gave a list of relevant ideas for potential top causes with my rough guesses on their importance/neglectedness/tractability. The ideas not mentioned so far are: improving the ability of individuals to delegate their income to value-stable institutions; and making expropriation and value drift less threatening by spreading altruistic funds more evenly across actors and countries.
IMO there are some relatively straightforward ways that EAs could invest better, which I wrote about here. Improving EAs' investments could be pretty valuable, especially for "give later"-leaning EAs.
Reducing the long-term probability of extinction, rather than just the probability over the next few decades. (I'm currently writing something about this.)
If you accept that improving the long-term value of the future is more important than reducing x-risk, is there anything you should do now, or should you mainly invest to give later? Does movement building count as investing? What about cause prioritization research? When is it better to work on movement building/cause prioritization rather than simply investing your money in financial assets?
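(Editorial note: this is a rough, purely illustrative sketch of the discount-rate point above, not the commenter's actual model. It assumes a fund growing at a fixed rate, log utility of each year's spending, and a fixed horizon, and grid-searches for the spending fraction that maximizes discounted utility. All specific numbers are invented; the point is only that the optimal spending rate, and the utility lost by acting on a mis-estimated rate, are quite sensitive to the discount rate.)

```python
# Toy sketch only: a fund grows at rate g, spends a fraction s of its wealth each
# year for T years, gets log utility from each year's spending, and discounts that
# utility at rate d. All numbers are illustrative assumptions.
import numpy as np

def discounted_utility(s, g=0.05, d=0.02, T=200, wealth=1.0):
    """Total discounted log-utility of spending fraction s per year."""
    total = 0.0
    for t in range(T):
        spend = s * wealth
        total += np.log(spend) / (1 + d) ** t
        wealth = (wealth - spend) * (1 + g)
    return total

def optimal_s(d):
    """Spending fraction that maximises discounted utility for discount rate d."""
    grid = np.linspace(0.001, 0.2, 400)
    return grid[np.argmax([discounted_utility(s, d=d) for s in grid])]

for d in (0.01, 0.02, 0.04):
    print(f"discount rate {d:.0%}: optimal spending rate ~ {optimal_s(d):.1%}")

# Utility shortfall from using the 4%-optimal spending rate in a 2% world:
shortfall = discounted_utility(optimal_s(0.02), d=0.02) - discounted_utility(optimal_s(0.04), d=0.02)
print(f"utility lost by mis-estimating the discount rate: {shortfall:.2f}")
```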
I haven't seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kind of estimate at all. At the extreme end are people who don't even make clear statements; they just speak in vague metaphors or business jargon that are easy to defend but don't actually convey any information. Needless to say, I think this is an anti-pattern. I'd be curious if anyone reading this would argue otherwise.
It seems to me like some modeling here would be highly useful, though it can get kind of awkward. I imagine many decent attempts would include numbers like "total expected benefit of one member". Our culture often finds some of these calculations too "cold and calculating." It could be worth it for someone to do a decent job at some of this, and just publicly write up the main takeaways.
I find the ideas you presented quite interesting and reasonable, and I'd love to see more work along those lines.
I think it would depend a lot on how we operationalise the stance you're arguing in favour of.
Overall, at the margin, I'm in favour of:
less use of vague-yet-defensible language
EAs/people in general making and using more explicit, quantitative estimates (including probability estimates)
(I'm in favour of these things both in general and when it comes to cause prioritisation work.)
But I'm somewhat tentative/moderate in those views. For the sake of conversation, I'll skip stating the arguments in favour of those views, and just focus on the arguments against (or the arguments for tentativeness/moderation).
Essentially, as I outlined in this post (which I know you already read and left useful comments on), I think making, using, and publishing quantitative estimates might sometimes:
Cost more time and effort than alternative approaches (such as more qualitative, "all-things-considered" assessments/discussions)
Exclude some of the estimators' knowledge (which could've been leveraged by alternative approaches)
Cause overconfidence and/or cause underestimations of the value of information
Succumb to the optimizer's curse
Cause anchoring
Cause reputational issues
(These downsides won't always occur, can sometimes occur more strongly if we use approaches other than quantitative estimates, and can be outweighed by the benefits of quantitative estimates. But here I'm just focusing on "arguments against".)
As a result:
I don't think we should always aim for or require quantitative estimates (including in cause prioritisation work)
I think it may often be wise to combine use of quantitative estimates, formal models, etc. with more intuitive / all-things-considered / "black-box" approaches (see also)
I definitely think some statements/work from EAs and rationalists have used quantitative estimates in an overconfident way (sometimes wildly so), and/or have been treated by others as more certain than they are
It's plausible to me that this overconfidence problem has not merely co-occurred or correlated with use of quantitative estimates, but that it tends to be exacerbated by that
But I'm not at all certain of that. Using quantitative estimates can sometimes help us see our uncertainty, critique people's stances, have reality clearly prove us wrong (well, poorly calibrated), etc.
Relatedly, I think people using quantitative estimates should be very careful to remember how uncertain they are and to communicate this clearly
But I'd say the same for most qualitative work in domains like longtermism
It's plausible to me that the anchoring and/or reputational issues of making one's quantitative estimates public outweigh the benefits of doing so (relative to just making more qualitative conclusions and considerations public)
But I'm not at all certain of that (as demonstrated by me making this database)
And I think this'll depend a lot on how well thought-out one's estimates are, how well one can communicate uncertainty, what one's target audiences are, etc.
And it could still be worth making the estimates and not communicating them, or communicating them less publicly
I don't think this position strongly contrasts with your or Michael's positions. And indeed I'm a fan of what I've seen of both your work, and overall I favour more work like that. But these do seem like nuances/caveats worth noting.
Nice post. I think I agree with all of that.
I'm not advocating for "poorly done quantitative estimates." I think anyone reasonable would admit that it's possible to bungle them.
I'm definitely not happy with a local optimum of "not having estimates". It's possible that "having a few estimates" can be worse, but I imagine we'll want to get to the point of "having lots of estimates, and being mature enough to handle them" at some point, so that's the direction to aim for.
I think the "local vs global optima" framing is an interesting way of looking at it.
That reminds me of some of my thinking when I was trying to work out whether it'd be net positive to make that database of existential risk estimates (vs it being net negative due to anchoring, reputational issues for EA/longtermists, etc.). In particular, a big part of my reasoning was something like:
With your comment in mind, I'd now add:
Reminds me of the thing where corporations don't want to implement internal prediction markets because implementing a market isn't in the self-interest of any individual decision-maker.
Yeah, I think there are similar incentives at play in both cases.
I think this is a good point. A three-factor model of community building comes to mind as a prior post that had to tackle and communicate about this sort of tricky thing, and that did a good job of that, in my opinion. That post might be useful reading for other people who have to tackle and communicate about this sort of tricky issue in future. (E.g., I quoted it in a recent post of mine.)
The most relevant parts of that post are the section on "Elitism vs. egalitarianism", and the following paragraph:
Thanks!
The basic model is really easy. Total number of community members at time t is e^((r − v)t), where r is the movement growth rate and v is the value drift rate. So if the value of the EA community is proportional to the number of members, then increasing r by some number of percentage points is exactly as good as decreasing v by the same amount. It's less obvious how to model the tractability of changing r and v.
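(Editorial note: a small numerical check of the model as stated, added for illustration. The 10% growth rate is invented; the 5% drift rate matches the individual figure mentioned earlier in the thread.)

```python
# Community size at time t is exp((r - v) * t), so a percentage-point change in the
# growth rate r is worth exactly as much as the same change in the drift rate v.
import numpy as np

def community_size(r, v, t):
    """Members at year t, normalised to 1 member at t = 0."""
    return np.exp((r - v) * t)

t = 20  # years
print(community_size(r=0.100, v=0.050, t=t))  # baseline: 5% drift
print(community_size(r=0.100, v=0.025, t=t))  # halve the drift rate
print(community_size(r=0.125, v=0.050, t=t))  # raise growth by the same 2.5 points -> identical result
```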
I liked this comment.
Do you mean "If you accept that improving the long-term value of the future is more important than reducing extinction risk" (as distinct from existential risk more broadly, which already includes other ways of improving the value of the future)?
Or "If you accept that improving the long-term value of the future is more important than reducing the risk of existential catastrophe in the relatively near future?"
Or something else (e.g., about smaller trajectory changes)?
I meant to distinguish between long-term efforts and reducing x-risk in the relatively near future (the second case on your list); sorry that was unclear.
Here's a list I came up with from thinking about this for ~30 minutes:
Better ways of measuring what matters
Better neuroimaging tech to parse out the neurological basis of desirable & undesirable subjective states
Better measures of subjective well-being
Help EAs see more clearly, unpack + resolve personal traumas, and boost their efficacy + motivation
Emotional healing as a prerequisite to rationality
CFAR, OAK, Leverage, etc.
Plus building methods to audit which projects are working, which are failing, which are stagnating
Perhaps also a data collection project that vacuums up outcomes from the object-level projects?
Strengthen EA community ties / our sense of fellowship
More honesty about how weird effective research methods can be
More acknowledgement of the interdependent causal complex that gives rise to good research (e.g. Alex Flint's introduction here)
More Ben Franklin-esque Juntos
Import more of Silicon Valley's "pay it forward" culture
Less reputation management / more psychological safety
Less sniping
OAK, Bay Area group houses, EA Hotel
Again, building out (non-dominating) ways to audit & collect data from the object-level projects
Less scrupulosity
Ties into the above but deserves its own bullet given how our collective psychology skews
Compassionate fighting against the thought-pattern Scott Alexander describes here
Make EA sexier
Market to retail donors / the broader public (e.g. Future Perfect, e.g. 80k, e.g. GiveWell running ads on Vox podcasts)
Market to impact investors (e.g. Lionheart) and big philanthropy
Cultivating more "I want to be like that" energy
Seems easy to walk back if it isn't working, because so many interest groups are competing for mindshare
Support EA physical health
Propagate effective treatments for RSI & back problems, as above
Take the mind-body connection seriously
Propagate best practices for nutrition, sleep, exercise; make the case that attending to these is prerequisite to having impact (rather than trading off against having impact)
Advance our frontier of knowledge
e.g. GPI's research agenda, e.g. the stuff Michael Dickens laid out in his comment
More work on how to solve coordination problems
More work on governance (e.g. Vitalik's stuff, e.g. the stuff Palladium is exploring)
Fund many moonshots / speculative projects
Fund projects that can be walked back if they aren't working out (which is most projects, though some tech projects may be hard to reverse)
Worry less about brand management
That's an interesting list, especially for 30 minutes :) (Makes me wonder what you or others could do with more time.)
Much of it is focused on EA community stuff. I kind of wonder if funders are extra resistant to some of this because it seems like they're just "giving money to their friends", which, in some ways, they are. I could see some of it feeling odd and looking bad, but I think if done well it could be highly effective.
Many religious and ethnic groups spend a lot of attention helping each other, and it seems to have very positive effects. Right now EA (and the subcommunities I know of in EA) seem fairly far from that still.
https://www.nationalgeographic.com/culture/2018/09/south-asia-america-motels-immigration/
A semi-related point on that topic: I've noticed that for many intelligent EAs, it feels like EA is a competition, not a collaboration. Individuals at social events will be trying to one-up each other with their cleverness. I'm sure I've contributed to this. I've noticed myself becoming jealous when I hear of others who are similar to me in some ways doing well, which really should make no sense at all. I think in the anonymous surveys 80K did a while back, a bunch of people complained that there was a lot of signaling going on and that status was a big deal.
Many companies and open source projects live or die depending on their cultural health. Investments in the cultural health of EA may be difficult to measure, but pay off heavily in the long run.
Thanks!
100% agree that cultural health is very important, and that EA is under-investing in it. (The "we don't want to just give money to our friends" point resonates, and other scrupulosity-related stuff is probably at play here as well.)
Thank you for talking about this!
I've noticed similar patterns in my own mind, especially around how I engage with this Forum. (I've been stepping back from it more this year because I've noticed that a lot of my engagement wasn't coming from a loving place.)
These dynamics may not make any sense, but there are deep biological & psychological forces giving rise to them. [insert Robin Hanson's "everything you do is signaling" rant here]
Right. Last year concerns about status generated a lot of heat on the Forum (1, 2, 3), but as far as I know nothing has really changed since then, perhaps other than more folks acknowledging that status is a thing.
(Status seems closely related to scrupulosity & to EA being vetting-constrained; I haven't unpacked this yet.)
(A bunch of those ideas seem interesting, but I'll just comment on the one where I have something to say.)
This does seem to me like it makes it easy to walk back efforts to make EA sexier, but it doesn't seem like it makes it easy to do it again later in a different way (without the odds of success being impaired by the first attempt).
Essentially:
I think we could make EA relatively small/non-prominent/whatever again if we wanted to
But it also seems plausible to me that EA can only make "one big first impression", and that that'll colour a lot of people's perceptions of EA if it tries to make a splash again later (even perhaps 10-30 years later).
Put another way:
They might stop thinking about EA if we stop actively reminding them
But then if we start competing for their attention again later they'll be like "Wait, aren't those the people who [whatever impression they got of us the first time]?"
Posts that informed my thinking here:
Hard-to-reverse decisions destroy option value (which I see you also referenced yourself)
The fidelity model of spreading ideas
How valuable is movement growth?
Why not to rush to translate effective altruism into other languages
Your list reminds me of this thread: What EA Forum posts do you want someone to write?
I think I've become a bit convinced that incentive and coordination problems are so severe that many "common goods" are surprisingly neglected. The history of the slow development and proliferation of Bayesian techniques in general (up to around 20 years ago maybe, but even now I think the foundations can be improved a lot) seems quite awful.
Also, at this point, I feel quite strongly about much of the EA community: it feels like we've gathered up many of the most [intelligent + pragmatic + agentic + high-level-optimizing] people in the world. As such I think we can compete and do a good job in many areas we may choose to focus on. So it could be that we could move up from "absolutely, incredibly neglected" to "just somewhat neglected", which could open up a whole bunch of fields.
It seems like I routinely learn about some smart and insightful person through non-EA channels and then later find out they're involved in EA or at least subscribe to EA principles. The most recent example for me is Gordon Irlam, whom I originally learned about through his writings on portfolio selection.
I've been thinking a lot about the lack of non-EA interest or focus on forecasting and related tools. I was very surprised when I made Guesstimate that there was excitement from several people, but not much excitement from most businesses or governments.
I think that forecasting of the GJP sort is still highly niche. Almost no one knows of it or understands the value. You can look at this as similar to specific advances in, say, type theory or information theory.
The really smart groups that have an interest in improving their long-term judgement seem to be financial institutions and the like. These are both highly secretive and not interested in spending extra effort helping outside groups.
So to really advance a field like judgemental forecasting would require a combination of expertise, funding, and interest in helping the broad public, and this is a highly unusual combination. I imagine that if IARPA hadn't been around in time to be both interested in and able to fund GJP's efforts, much less would have happened there. I'd also personally guess that IARPA's funding of it was around 1/3rd or maybe 1/20th as efficient, in terms of global benefit, as it would have been if OpenPhil had organized a more directed effort.
This makes me think that there are probably many other very specific technology and research efforts that would also be exciting for us to focus on, but we don't have the expertise to recognize them. We may have gotten lucky with forecasting/estimation tech, as that was something we had to get close to anyway for other reasons.
Also worth noting that the managing director of IARPA's forecasting program was Jason Matheny, who previously founded New Harvest (which does cultured meat research, and was the first such org AFAIK) and did x-risk research at FHI.
Yep, and a few others at IARPA who worked around the forecasting stuff were also EAs or close.
Thanks for this, it's pretty interesting to get your perspective as someone who's been (I presume) heavily engaged in the community for some time. I thought your other post on the All-Party Parliamentary Group for Future Generations was awesome, by the way.
You asked for comments including "small" thoughts, so here are some from me, for what they're worth. These are my current views, which I could easily see changing if I were to think about this more, etc.
I think I basically agree that there doesn't seem to have been much progress in cause prioritisation in, say, the last five years, compared to what you might have hoped for.
(Mainly written to clarify my own thoughts:) It seems like you can do cause prioritisation work either by comparing different causes, or by investigating a particular cause (especially a cause that's relatively unknown or poorly investigated), or by doing more "foundational" things like asking "what is moral value anyway?", "how should we compare options under uncertainty?", etc.
My impression is that the effective altruism community has invested a significant amount of resources into cause prioritisation research, and that the relative lack of progress is because it's hard:
The Global Priorities Institute is basically doing cause prioritisation (as far as I know, and by the vague definition of cause prioritisation I have in my head) - maybe it's more on the foundational / academic field-building side (i.e. fleshing out and formally writing up existing arguments), but my impression is that it's mostly stuff that seems worth working through to work out how to do the most good
I think you could give the cause prioritisation label to some of the work from the Future of Humanity Institute's macrostrategy team(?)
Open Philanthropy Project spends a lot of their resources doing some version of this, as you noted
Rethink Priorities is basically doing this (though I might agree with you that it would be better if they were able to compare across causes rather than investigating a particular cause)
I'd consider work on forecasting / understanding AI progress, as is done by e.g. AI Impacts, as cause prioritisation
The above (which is probably far from comprehensive) seems like a decent fraction of the resources of the "longtermist" part of the community (the part I'm familiar with). I suppose I lean towards wanting a larger fraction of resources allocated to cause prioritisation, but I don't think it's that obvious either way. Anyway, regardless of whether the right fraction of resources has been spent on this, I think it's just very hard, and that this explains a lot of what you're describing.
Maybe one reason there's not much work comparing causes in particular is that there's so much uncertainty, which makes it very difficult to do well enough that the output is valuable. In particular:
people don't agree on empirical issues that can radically alter the relative importance of different causes (e.g. AI timelines)
people don't agree on "the correct moral theory" / whatever the ultimate objective is / what you ~call "different views"
Edit: reading the above, you could probably get the impression that I think you're wrong to "raise the alarm" about the need for more / different cause prioritisation, but I don't think that at all. I think I'm pretty sympathetic to most of what you wrote.
I agree that the cause prioritisation work we need to do now is far harder than the work we were doing ten years ago. I think AI Impacts provides an interesting illustration of that: it was initially set up essentially as a cause prioritisation org. But in doing that work it became clear that whereas in comparing between different global development interventions there was a large published literature to build on, when trying to compare work on AI to other areas, and to compare interventions within AI safety, there was far less to go on. That led to the conclusion that the work they should do first was to get a better grasp on questions like "how fast will AI likely develop, and how discontinuously?".
I think another thing going on is that the stakes have become higher. When Giving What We Can first started publishing recommendations, e.g. comparing between donating to education or deworming, we only had ~30 members. That's a lot of money over people's lifetimes, but it's nowhere near the resources the EA movement now commands. The huge increase in resources to allocate makes it more worth doing the foundational work that groups like AI Impacts do, and also the theoretical work GPI does. I think that makes it look like there's less work being done, because there are way fewer actionable results per hour spent.
Hi Ben. Thank you for this. This is exactly what I like: people replying with their impressions of the post, even if rough, so that I get some idea of how people feel and whether this resonates. So thank you.
- -
That said, I disagree with your claim.
You say "I think it's just very hard and that this explains a lot of what you're describing".
I think it may well be difficult but it is mostly not happening due to underinvestment and lack of coordination in this space. Hence raising a flag.
I make this case above by comparing what I would see as good coverage of the space with what is actually happening, so I don't have much to add here, except that it is interesting that others see it differently.
I note a few counterexamples to the idea that it is not done because it is hard (even in the "longtermist" area), such as: 80K's stated reason for doing less in this space is that they have reached a conclusion (priority paths) that they are happy with; GPI was only created recently (its research agenda is from 2019); Rethink Priorities is following funding; AI strategy is also difficult but is progressing much quicker; etc.
- -
Overall, I don't have a strong view on this, and maybe you are correct. But this is something that could be looked into more. In particular, I have mostly dug into research on websites, but if I (or anyone) had more time it would be great to talk to people who have worked on this and see if it is difficult or underinvested in (or both). I also think you could, with a bit of time, somewhat address this question by writing a research agenda and looking for potential low-hanging research fruit in this domain.
Hey Sam, just a very quick comment that the post you link to wasn't meant to imply we intend to do less prioritisation research than before.
The 50/30/20 split we mention there was for how we intend to split delivery efforts across different target audiences, rather than on research vs. delivery. And also note that this means ~50% of effort is going into non-priority paths, which will include new potential priorities & career paths (such as the lists we posted recently).
As Rob notes in another comment, we still intend to spend ~10% of team time on research, similar to the past, and more total time because the team is larger. This would include looking into whether we should add new priority paths or problem areas.
Hi Ben,
Thank you for flagging this - it is super amazing to hear, and I'm very excited by that.
I looked at a lot of organisations and tried to extrapolate what they will be doing in this space from the public information rather than reaching out, so it is great to see comments saying that research along these lines will be happening, and sorry for anything mischaracterised.
This comment below is also relevant: https://forum.effectivealtruism.org/posts/MSYhEatxkEfg46j3D/the-case-of-the-missing-cause-prioritisation-research?commentId=RGX9f6PXvWkBvCEoK
Thank you for writing this!
I think your analysis can be specifically useful for people who want to contribute and feel like they're not sure where to look for neglected areas in EA.
I'll add a small comment regarding "It is difficult to compete with the existing organisations that are just not quite doing this":
My experience with orgs in the EA community is that pretty much everyone is incredibly cooperative and genuinely happy to see others fill in the gaps that they're leaving.
I've been in talks with 80,000 Hours and a few other orgs about an initiative in the careers space for a while now. Everyone we've talked to was both open about what they're doing (and what they aren't doing) and ridiculously helpful with advice and support.
I think if someone is serious about trying to fill a gap in the EA body of work, it's important to understand from adjacent orgs how big / real this gap is and whether they have comments about your approach to it. And while I can see why someone would be worried, I think if you approach it with the right attitude, the "competition" would have far more benefits than harms.
Thank you for this comment. I fully agree with this and would say that my experience of the EA community is a very positive one: EA organisations work very well together and are very willing to share ideas, talk, and support one another. I am sure there would be much support for anyone trying to fill these gaps.
Thanks for making this post, I think this sort of discussion is very important.
I disagree with this. Here's an alternative framing:
EA's big ethical ideas are 1) reviving strong, active, personal moral duties, 2) longtermism, and 3) some practical implications of welfarism that academic philosophy has largely overlooked (e.g. the moral importance of wild animal suffering, mental health, simulated consciousnesses, etc.).
I don't think EA has had many big empirical ideas (by which I mean ideas about how the world works, not just ideas involving experimentation and observation). We've adopted some views about AI from rationalists (imo without building on them much so far, although that's changing), some views about futurism from transhumanists, and some views about global development from economists. Of course there are a lot of people in those groups who are also EAs, but it doesn't feel like many of these ideas have been developed "under the banner of EA".
When I think about successes of "traditional" cause prioritisation within EA, I mostly think of things in the former category, e.g. the things I listed above as "practical implications of welfarism". But I think that longtermism in some sense screens off this type of cause prioritisation. For longtermists, surprising applications of ethical principles aren't as valuable, because by default we shouldn't expect them to influence humanity's trajectory, and because we're mainly using a maxipok strategy.
Instead, from a longtermist perspective, I expect that the biggest breakthroughs in cause prioritisation will come from understanding the future better, and from identifying levers of large-scale influence that others aren't already fighting over. AI safety would be the canonical example; the post on reducing the influence of malevolent actors is another good example. However, we should expect this to be significantly harder than the types of cause prioritisation I discussed above. Finding new ways to be altruistic is very neglected. But lots of people want to understand and control the future of the world, and it's not clear how distinct doing this selfishly is from doing this altruistically. Also, futurism is really hard.
So I think a sufficient solution to the case of the missing cause prioritisation research is: more EAs are longtermists than before, longtermist cause prioritisation is much harder than other cause prioritisation, and it doesn't play to EA's strengths as much. Although I do think it's possible, and I plan to put up a post on this soon.
Aiming for maxipok doesn't mean not influencing the trajectory (if the counterfactual is catastrophe); it's just much harder to measure impact. If measuring impact is hard, de-risking becomes more important, because of path-dependency. If we build out one or two particular longtermist cause areas really strongly with lots of certainty, they'll have a lot of momentum (orgs and stuff), and if we find out later that they are having negative impact or no impact (or worse, this happens and we just never find out), that will be bad.
I agree longtermist cause prioritisation is harder, even though I didn't really think your reasons were very well articulated (in particular I don't understand why you're comparing altruism with understanding & controlling the future; it seems like apples and oranges to me, and surely it's the intersection of X and altruism with the market gap), but I don't think it's less valuable.
Thanks for writing the post! I think we need a lot more strategy research, cause prioritization being one of the most important types, and that is why we founded Convergence Analysis (theory of change and strategy, our site, and our publications). Within our focus of x-risk reduction we do cause prioritization, describe how to do strategy research, and have been working to fill the EA information hazard policy gap. We are mostly focused on strategy research as a whole which lays the groundwork for cause prioritization. Here are some of our articles:
Heuristics for cause prioritization and assessing interventions
A case for strategy research
How to find interventions and measures
Components of strategy research (one of which is cause prioritization and it is dependent upon the others)
A research agenda for longtermists
We're a small and relatively new group, and we'd like to see more people and groups do this type of research, and for this field to get more support and grow. There is a vast amount to do and immense opportunity to do good with this type of research.
I'll give a +1 for Convergence. I've known the team for a while and worked with Justin a few years back. It's a bit on the theoretical side of prioritization, but that sort of thinking often does lead to more immediate value.
My impression is also that more funding could be quite useful to them, if anyone reading this is considering it.
Hey Sam - being a small organisation, 80,000 Hours has only ever had fairly limited staff time for cause priorities research.
But I wouldn't say we're doing less of it than before, and we haven't decided to cut it. For instance, see Arden Koehler's recent posts about Ideas for high impact careers beyond our priority paths and Global issues beyond 80,000 Hours' current priorities.
We aim to put ~10% of team time into underlying research, where one topic is trying to figure out which problems and paths go into each priority level. We also have podcast episodes on newer problems from time to time.
All that said, I am sympathetic to the idea that as a community we are underinvesting in cause priorities research.
Super great to hear that 10% of 80,000 Hours team time will go into underlying research. (Also apologies for getting things wrong; I was generalising from what I could find online about what 80K plans to work on, and have edited the post.) If you have more info on what this research might look into, do let me know.
- -
That there is an exploit-explore tradeoff: continuing to do cause prioritisation research needs to be weighed against focusing on specific cause areas.
I imply in my post that EA organisations have jumped too quickly into exploit. (I mention 80K and FHI, but I am judging from an outside view so might be wrong.) I think this is a hard case to make, especially to anyone who is more certain than me about which causes matter (which may be most EA folk). That said, there are other reasons for continuing to explore: to create a diverse community, epistemic humility, game-theoretic reasons (better if everyone explores a bit more), to counter optimism bias, etc.
Not sure I am explaining this well. I guess I am saying that I still think the high-level point I was making stands: that EA organisations seem to move towards exploit quicker than I would like. But do let me know if you disagree.
I don't share your optimistic view of research. You write:
That's because cause prioritization research is extremely difficult, not because no one has thought to do this.
Survivorship bias: what about all of the difficult subjects where we couldn't make any progress and gave up?
No, we should try if the expected returns are better than the next alternative. What if we've already hit diminishing returns?
More generally, research isn't magic. Hiring a researcher and having them work 9-5 is no guarantee of solving a problem. You write:
Isn't it obvious that allocating researcher hours to these questions would be a waste of money? Almost by definition, we can't have good evidence that we can impact the long-run (i.e. centuries) trajectory of humanity, because we haven't been collecting data for that long. And making complex decisions under high uncertainty will always be incredibly difficult; in the best-case scenario, more research might yield small improvements in decision-making.
Hi Michael. Thank you for your points. It is good to hear opposing views. I have never worked in pure research so I find it hard to judge, and somewhat parroted Paul's post. You may well be correct about the difficulty of research.
Let me try to draw from my own experience to elucidate why I may be jumping to different intuitive conclusions on this question.
My experience of research is from policy development. I think 2/3 of policy development is super easy and 1/3 is super difficult. The super easy stuff is just looking at the world, seeing if there are answers already out there, and implementing them, for example on US police reform or UK tax policy or technology regulatory policy. We mostly know how to do these things well; we just need some incentive to implement best practice. The super difficult stuff is the foundational work, where a new problem emerges and no existing solutions abound, e.g. financial stability policy.
Now when I look at a question such as the one you quote, of "much better research into how to make complex decisions despite high uncertainty", it seems to me to be a mix, but with definite areas that fall more towards the easy side. There appear to be a number of fields and domains with best practice that would be highly relevant to EAs making the best decisions despite high uncertainty, but that rarely seem to make it into EA circles. For example: Enterprise Risk Management, economic models of Knightian uncertainty, organisational design, policy development toolkits, Robust Decision Making.
Maybe these have all been used and/or considered not relevant (I don't work at GPI etc., I don't know). But my life experience to date leaves me with an intuition that there is still low-hanging research fruit just around the next corner. This is not a well-reasoned argument or a strong case, simply me sharing where I come from and how I see the challenges and the path forward.
Thanks for the reply. I'm a jaded PhD student, but I am open to updating towards research-optimism.
I would distinguish research from implementation of research. I agree that there seems to be low-hanging fruit in implementing best practices, but I think implementation can be a super difficult problem in its own right. (See the state capacity literature.)
This is a great post; thanks a lot for writing it. I work at GPI, so I want to add a bit of context on a couple of points, and add some of my own thoughts. Standard disclaimer that these are my personal views and not those of GPI, though.
First, on GPI's research agenda, and our progress in econ:
"(One economics student told me that when reading the GPI research agenda, the economics parts read like it was written by philosophers. Maybe this contributes to the lack of headway on their economics research plans.)"
I think this is accurate and a reflection of how the research agenda was written and has evolved. For what it's worth, we're currently working on refreshing the research agenda to reflect some of the "exploration research" we've done in economics in the past ~18 months; we should have an updated version in the next few months. More generally, we've had very little econ research capacity to date beyond pre-doctoral researchers (very junior in academic terms). This will improve very shortly (as Phil notes in a previous comment, we've hired two postdocs to start in the next month), but as others have noted, high-quality academic work is hard and takes quite a lot of time, so this may not result in a step change in actionable econ research coming out of GPI in the short run, which leads on to my second comment...
Second, on theories of change: your point D1 is really important. We've actively discussed various "theories of change" internally at GPI and how these should affect our strategy. A decent part of this discussion depends on what others are doing in EA and how we think GPI fits into the overall EA movement portfolio. Even within the (relatively narrow) scope of doing academic GP research in econ and philosophy, possible theories of change for GPI include (but are not limited to!) prioritising building up academic credibility for long-run influence, prioritising research that is more actionable for EAs/philanthropists and policymakers, prioritising influencing policymakers / the general public, or prioritising influencing the next generation through higher education. These are not mutually exclusive, but placing different emphasis on one or the other may imply a different strategy. We are still very young, and so far we have mostly been focused on laying foundations for the first of these, and have made much more progress on this in philosophy than econ, though I expect things will evolve in the next few years. Personally, I don't think we'll be able to effectively target all of the possible theories of change, and I'd love to see more people and groups working on these.
For what it's worth, Rethink Priorities' research on sentience and capacity for welfare can be used to inform how we prioritize between interventions for nonhuman animals and interventions for humans. Charity Entrepreneurship has also done research comparing animal welfare under different conditions for different species, including humans, and Founders Pledge has done a sensitivity analysis comparing the Humane League and AMF.
For what it's worth, Christian Tarsney from GPI has looked at other aggregative views:
Average Utilitarianism Implies Solipsistic Egoism. Summary: average utilitarianism and rank-discounted utilitarianism reduce to egoism due to the possibility of solipsism. This might also apply to variable value theories, depending on the factors. See also the earlier The average utilitarian's solipsism wager by Caspar Oesterheld.
Non-additive axiologies in large worlds. Summary: with large background (e.g. unaffected) populations, average utilitarianism and some kinds of egalitarian and prioritarian theories reduce to additive theories, i.e. basically utilitarianism. Geometric rank-discounted utilitarianism reduces to maximin instead. (That being said, this doesn't imply we should maximize expected total utility, since it doesn't rule out risk-aversion.)
So, if your population axiology is representable by a single (continuous and impartial) real-valued function of utilities for finite populations (so excluding some person-affecting views), it seems hard to avoid totalism.
Also, I think such views (or utilitarianism) but with deontological constraints are covered by existing interventions; you can just pick among the recommended ones that don't violate any constraints, and I expect that most don't.
Suffering-focused ethics was also already mentioned.
Still, these are only slight variations of total utilitarianism or even special cases.
Some other works and authors exploring other views and their relationship to EA or EA concepts:
Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term" (EA Forum post)
Phil Torres (overview of focus, publications, popular media writing, EA Forum account), who works on x-risks but, I think, believes in virtue ethics, and is critical of total utilitarianism, longtermism and EA's neglect of social justice.
Roger Crisp and Theron Pummer, "Effective Justice", discussing "Effective Justice, a possible social movement that would encourage promoting justice most effectively, given limited resources"
Open Phil works on causes that don't receive that much attention within the rest of EA.
Johann Frick, "On the Survival of Humanity" (pdf), discussing the "final value of humanity", separate from the (aggregate) value of individuals.
Hilary Greaves and William MacAskill, "The case for strong longtermism" (discusses risk-aversion in section 4.2)
GPI's other research on decision theory and cluelessness (deep uncertainty, Knightian uncertainty), offering and analyzing alternatives and adjustments to Bayesian expected value maximization, which is usually assumed in EA. I think they're aiming for a more epistemically justified approach, and based on this paper and this paper, it seems like there aren't any very satisfactory approaches.
Some less formal writing:
John Halstead, "The asymmetry and the far future"
Gregory Lewis, "The person-affecting value of existential risk reduction"
Alex HT, "If you value future people, why do you consider near term effects?", and the discussion there
And there are of course critiques of EA, especially by leftists, by animal rights advocates (for our welfarism) and for neglecting large scale systemic change.
On how risk- and uncertainty-aversion should arguably affect EA decisions, there was also this talk hosted by GPI, by Lara Buchak.
(I'm mentioning it because it seems relevant, not necessarily because I agreed with the talk or with the basic idea that we should take intrinsic risk- or uncertainty-aversion seriously.)
Thanks for this list! I appreciate the Effective Justice paper because it: (1) articulates a deontological version of effective altruism and (2) shows how one could integrate the ideas of EA and justice. I've been trying to do the second thing for a while, although as a pure consequentialist I focus more on distributive justice, so this paper is inspiring for me.
Tangent:
What do you mean by this? Isn't risk aversion just a fact about the utility function? You can maximize expected utility no matter how the utility function is shaped.
Ah, we use utility in two ways: the social welfare function whose expected value you maximize, and the welfares of individuals on which your social welfare function depends. You can be a risk-averse utilitarian, for example, with a social welfare function like f(∑ᵢ uᵢ), where the uᵢ are the individual utilities/welfares and f: ℝ → ℝ is nondecreasing and concave.
Hm, I've never seen the use of f like that. Can you point to an example?
An example function f, or an example where someone actually recommended or used a particular function f?
I don't know of any of the latter, but using an increasing and bounded f has come up in some discussions about infinite ethics (although it couldn't be concave towards −∞). I discuss bounded utility functions here.
An example function is 1 − e^(−x). See this link for a graph. It's strictly increasing and strictly concave everywhere, and bounded above, but not below.
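(Editorial note: a toy numerical illustration of the difference this transform makes, with invented numbers, not something from the thread. Both options below have the same expected total utility, but the risk-averse welfare function E[f(∑ᵢ uᵢ)] with f(x) = 1 − e^(−x) prefers the safe one.)

```python
import numpy as np

def f(x):
    """Increasing, concave, bounded-above transform of total utility."""
    return 1 - np.exp(-x)

# Each option is a list of (probability, total utility across individuals).
safe   = [(1.0, 10.0)]
gamble = [(0.5, 0.0), (0.5, 20.0)]

def expected_total(option):
    return sum(p * u for p, u in option)

def expected_welfare(option):
    return sum(p * f(u) for p, u in option)

print(expected_total(safe), expected_total(gamble))      # 10.0 vs 10.0: a risk-neutral total view is indifferent
print(expected_welfare(safe), expected_welfare(gamble))  # ~1.00 vs ~0.50: the risk-averse view prefers `safe`
```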
Yes, I meant an example of someone using f in this way. It doesn't seem to be standard in welfare economics.
Quick reaction:
I. I did spend a considerable amount of time thinking about prioritisation (broadly understood).
My experience so far is
some of the foundations / low-hanging sensible fruits were discovered
when moving beyond that, I often run into questions which are some sort of "crucial consideration" for prioritisation research, but the research/understanding is often just not there.
often, work on these "gaps" seems more interesting and tractable than trying to do some sort of "let's try to ignore this gap and move on" move
A few examples, where in some cases I got as far as writing something:
Nonlinear perception of happiness - if you try to add utility across time-person-moments, it's plausible you should log-transform it (or non-linearly transform it). Sums and exponentiation do not commute, so this is plausibly a crucial consideration for the part of utilitarian calculations trying to be based on some sort of empirical observation like "pain is bad". (A toy illustration follows this list.)
Multi-agent minds and predictive processing - while this is framed as being about AI alignment, the super-short version of why it is relevant for prioritisation is: theories of human values depend on what mathematical structures you use to represent those values. If your prioritisation depends on your values, this is possibly important.
Another example could be the style of thought explained in Eliezer's "Inadequate Equilibria". While you may not count it as "prioritisation research", I'm happy to argue the content is crucially important for prioritisation work on institutional change or policy work. I spent some time thinking about "how to overcome inadequate equilibria", which leads to topics from game theory, complex systems, etc.
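(Editorial note: a toy illustration of the "sums and exponentiation do not commute" bullet above. The numbers are entirely invented, and "a happiness report behaves like the log of the underlying quantity" is just the assumption the example needs; the only point is that the ranking of two hypothetical interventions can flip depending on whether you sum raw reports or the transformed values.)

```python
import numpy as np

# (report before, report after) for each affected person
intervention_A = [(2, 3)] * 100  # many people, each gaining 1 report point
intervention_B = [(7, 9)] * 5    # a few people near the top of the scale

def gain_in_reports(people):
    return sum(after - before for before, after in people)

def gain_in_underlying(people):
    # treat the report as log(underlying), so underlying = exp(report)
    return sum(np.exp(after) - np.exp(before) for before, after in people)

print(gain_in_reports(intervention_A), gain_in_reports(intervention_B))        # 100 vs 10: A looks better
print(gain_in_underlying(intervention_A), gain_in_underlying(intervention_B))  # ~1,270 vs ~35,030: B looks better
```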
II. My guess is there are more people who work in a similar mode: trying to basically "build as good a world model as you can", diving into the problems you run into, and at the end prioritising informally based on such a model. Typically I would expect such a model to be in parts implicit / some sort of multi-model ensemble / ...
While this may not create visible outcomes labeled as prioritisation, I think it's an important part of what's happening now.
Thanks for writing this up! I think you're raising many interesting points, especially about a greater focus on policy and going "beyond speculation".
However, I'm more optimistic than you are about the degree of work invested in cause prioritisation, and the ensuing progress we've seen over the last years. See this recent comment of mine - I'd be curious if you find those examples convincing.
Also, speaking as someone who is working on this myself, there is quite a bit of research on s-risks and cause prioritisation from a suffering-focused perspective, which is one form of "different views" - though perhaps this is not what you had in mind. (I think it might be good to clarify in more detail what sort of work you want to see, because the term "cause prioritisation research" may mean very different things to different people.)
Hi Tobias, thank you for the comment. Yes, I am very glad for CLR etc. and all the s-risk research.
An interesting thing I noted when reading through your recent comment is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, and pushing back on the idea that we need to be focusing on AI risk right now. They suggest that to date the community has perhaps moved too quickly towards a specific cause area (AI / immediate x-risk mitigation) rather than continuing to explore.
I don't really know what to make of that. Do your examples weaken the point I am making or strengthen it? Is this evidence that useful research is happening, or is this evidence that we as a community under-invest in exploration?
Maybe there is no universal answer to this question and it depends on the individual reader and how your examples affect their current assumptions and priors about the world.
Yeah, I would perhaps say that the community has historically been too narrowly focused on a small number of causes. But I think this has been improving for a while, and we're now close to the right balance. (There is also a risk of being too broad, by calling too many causes important and not prioritising enough.)
The post Tobias was commenting on requested "novel major" insights specifically. This guarantees that the examples provided would be ones that broadened EA, expanded its horizons, and pushed back on whatever priorities EA had before 2015. So I don't think we should read anything into the fact that a high proportion of the examples were of that kind, rather than e.g. refinements of existing ideas or object-level work within particular cause areas (since the question excluded such things).
(That said, I do think that the number and nature of examples we can come up with in answering that question is relevant to how useful further cause prioritisation research would be. In particular, the fact that commenters came up with some examples rather than 0 examples seems to be evidence that some cause prioritisation research occurred and was useful over the last 5 years. And the fact they came up with relatively few examples is evidence that relatively little such research occurred or was useful. And this could perhaps inform our predictions about the future.)
I'm doing a series of recordings of EA Forum posts on my "found in the struce" podcast, also delving into the links and adding my own comments.
I've just done an episode on the present post HERE.
I also did one on Ben Todd's post HERE.
Next I'll do one on the comments section of this post, I think.
Let me know your thoughts, and if it's useful. I think you can also engage directly with the Anchor app by leaving a voice response or something.
I agree wholeheartedly with this! Strong upvote from me.
I agree that cause prioritization research in EA focuses almost entirely on utilitarian and longtermist views. There's substantial diversity of ethical theories within this space, but I bet that most of the world's population are not longtermist utilitarians. I'd like to see more research trying to apply cause prioritization to non-utilitarian worldviews, such as ones that emphasize distributive justice.
Fully agree, but I think it's ironic (in a good way) that your proposed solution is "more global priorities research." When I see some of 80K's more recent advice, I think, "Dude, I already sank 4 years of college into studying CS and training to be a software engineer, and now you expect me to shift into research or public policy jobs?" Now, I know they don't expect everyone to follow their priority paths, and I'm strongly thinking about shifting into AI safety or data science anyway. But I often feel discouraged because my skill set doesn't match what the community thinks it needs most.
I wouldn't know how to assess this claim, but this is a very good point. I'm glad you're writing a paper about this.
Finally, I love the style of humor you use in this post.
Hi evelynciara, thank you so much for your positivity and for complimenting my writing.
Also, do not feel discouraged. It is super unclear exactly what the community needs, and we should each be doing what we can with the skills we have and see what form that takes.
Thanks very much for writing this up, Sam. Two points from my perspective at the Happier Lives Institute, which you kindly mention and which is a new entrant to cause prioritisation work.
First, you say this on theories of change:
I think this nails the difficulty for new cause prioritisation research (where "new" means "not being done by an existing EA organisation"). The existing organisations are the "gatekeepers" for resources, but doing novel cause prioritisation work requires, of necessity, doing work those organisations themselves consider low-priority (otherwise they would do it themselves). This creates a tension: funders often want potential entrants to show they have "buy-in" from existing orgs. But the more novel the project, the less "buy-in" it will have, and so the less chance it gets off the ground. I confess I don't have a solution for this, other than that, if funders want to see new research, they need to be prepared to back it themselves.
Second, you say you'd like to see research on
I'm pleased to say HLI is working on both those areas - see our April update.
I agree that setting up new orgs is really challenging. But I think this maybe oversells the difficulty of getting buy-in from existing orgs, in a way that might unduly put people off trying to set up new projects.
My main experience with this is setting up the Global Priorities Institute. GPI does fairly different work from other EA orgs (though there is some overlap with FHI), and is much more foundational/theoretical than typical ones. You might expect that to get extra pushback from EAs, given that the theory of change is of necessity less direct than for orgs like Open Phil. I was in the fortunate position of already working with CEA, which of course made things easier. And getting funding from Open Phil was definitely a long process. But I actually found it really helpful. The kinds of docs etc. they asked for were ones that it was useful for us to produce (for example, pinning down our vision going forward, including milestones that would indicate we were or weren't on track), and their comments on our strategy and work were helpful for improving them.
I think some things that helped, and that others might find useful, were:
Doing a bunch of consultation early on in the process. That improved the idea and the project from the start, and (I expect) meant that others who I hoped would support the project had a better sense of what it was trying to achieve, and that we would be open and responsive to their feedback. This latter seems like it could go some way to allaying peopleâs worries about new projects, by giving people a sense that if they see a project going wrong in a way they think could end up net negative, the people running it will be keen to hear that and to pivot.
For docs I sent people asking for input, spending time to make sure they were as concise and clear as possible. I find this pretty challenging, and definitely more time-consuming than writing longer docs. But it really increases people's willingness to give comments. I also think it can improve their understanding of the project (because they get a better snapshot for a given reading time) and therefore the usefulness of their comments.
Linked to the above, asking for help from people in a really targeted way: trying to find the people who would be most helpful for answering specific questions and improving specific aspects of the project, and then making concrete asks that made clear why they in particular would be helpful for answering them. Using that approach, I was surprised how helpful total strangers were (though this may be partly because academics are used to collaborating with strangers, so are particularly helpful). I think that also had useful knock-on effects, because others were happy we were getting (and acting on!) advice from experts.
Something I still find hard, but am trying to do more in my current role, is getting input and advice from people who are sceptical as well as those who are broadly supportive. It seems useful to try to really flesh out the strongest versions of concerns about your project, and how to mitigate them. It also seems likely to increase buy-in for your project, because it shows you're keen to consider different worldviews and to act on concerns rather than minimise them.
Sorry for digging up this old post, but it was mentioned in the January 2021 EA Forum Prize report published today, and that is how I got here.
This comment assumes that Cause Prioritization (CP) is a cause area that requires people with width (having worked across different cause areas) rather than depth (having worked on a single cause area) of knowledge. That is, they need to know something about several cause areas instead of deeply understanding one of them. I would love to hear from CP researchers or others who would disagree.
Maybe CP is an excellent path for some people in mid/late career. I think there could be some people in the middle of their career who have width rather than depth of knowledge. I might be wrong, but it feels like the current advice for mid-career folks from 80,000 Hours (see this 80,000 Hours podcast episode discussion, for example) focuses on people with skill depth alone. Further, I also think 80,000 Hours may actually be creating people who have skill width by encouraging people to experiment with working on different cause areas until they find the best personal fit. What if we could tell them: "Experimented a lot? Have a lot of width? Try CP!"
I also feel like it would be difficult for people early in their career to rationalize working on CP. Personally, as someone early in my career, I feel like I don't fully understand even one of the cause areas of interest to EAs properly. How can I then hope to understand several of them, find those not yet known, and on top of that prioritize them all? Now, there is good reason to believe EA is a relatively young movement (the majority aged between 25 and 34), and since young people can't rationalize working on CP, we are seeing relatively little research on this.
Maybe as EAs grow older, CP research will eventually gain steam. Maybe their depth could also give them some width. At a later stage, current EAs working on a specific cause area could feel, "Having done specialized work all these years, I am beginning to see some ways I can generalize this stuff. Maybe this generalization is the next big impactful thing I can do," and then get into CP. Maybe some EAs have already realized this and have even planned their careers so that they can do CP at a later stage. So this whole thing could just be a matter of time. But that doesn't mean we should not worry: what if, at the stage when EAs want to generalize, we don't have the structures in place for them to pursue it?
I think GPI is doing research on this, under the heading of cluelessness. See, for example:
Andreas Mogensen, "Maximal Cluelessness". Publication, GPI page (pdf), EA Forum post.
David Thorstad and Andreas Mogensen, "Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making".
I think you did a really good job nailing the emotional tenor of this post and I think it's great.
I think the EA animal space is going beyond RCTs out of necessity, since RCTs have been hard to come by other than for diet-change interventions (and although their quality was previously quite poor, it has been better recently). Humane League Labs is researching the causal effects of corporate campaigns from observational data.
And you've already pointed out OPIS and the Happier Lives Institute, but HLI was incubated by Charity Entrepreneurship, which I think is generally looking beyond RCTs. They just put out their next round of recommended charities to incubate.
Would it be possible for you to share a link to this, or at least the name of the report, so that I can find it?
https://www.cser.ac.uk/media/uploads/files/Risk_Management_in_the_UK_Final1.pdf
See also other related work:
https://forum.effectivealtruism.org/posts/wyHjpcCxuqFzzRgtX/a-practical-guide-to-long-term-planning-and-suggestions-for
https://www.longtermresilience.org/futureproof (pp. 31-42)
https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other (old)
> I like the idea of building "resilience" instead of going after specific causes.
> For instance, if we spend all of our attention on bio risks, AI risks, and nuclear risks, it's possible that something else weird will cause catastrophe in 15 years.
> So experimenting with broad interventions that seem "good no matter what" seems interesting. For example, if we could have effective government infrastructure, or general disaster response, or a more powerful EA movement, those would all be generally useful things.
Thanks for writing this post! :-)
Two points:
i. On how we think about cause prioritization, and what comes before
It's not quite clear to me what this means. But it seems related to a broader point that I think is generally under-appreciated, or at least rarely acknowledged, namely that cause prioritization is highly value-relative.
The causes and interventions that are optimal relative to one value system are unlikely to be optimal relative to another value system (which isn't to say that there aren't some causes and interventions that are robustly good on many different value systems, as there plausibly are, and identifying novel such causes and interventions would be a great win for everyone; but then it is also commensurately difficult to identify new such causes and have much confidence in them, given both our great empirical uncertainty and the necessarily tight constraints).
I think it makes sense that people do cause prioritization based on the values, or the rough class of values, that they find most plausible. Provided, of course, that those values have been reflected on quite carefully in the first place, and scrutinized in light of the strongest counterarguments and alternative views on offer.
This is where I see a somewhat mysterious gap in EA, more fundamental and even more gaping than the cause prioritization gap highlighted here: there is surprisingly little reflection on and discussion of values (something I also noted in this post, along with some speculations as to what might explain this gap).
After all, cause prioritization depends crucially on the fundamental values based on which one is trying to prioritize (a crude illustration), so this is, in a sense, the very first step on the path toward thoroughly reasoned cause prioritization.
ii. On the apparent lack of progress
As hinted in Zoe's post, it seems that much (most?) cutting-edge cause prioritization research is found in non-public documents these days, which makes it appear as though there is much less research than there in fact is.
This is admittedly problematic in that it makes it difficult to get good critiques of the research in question, especially from skeptical outsiders, and it also makes it difficult for outsiders to know what in fact animates the priorities of different EA agents and orgs. It may well be that it is best to keep most research secret, all things considered, but I think it's worth being transparent about the fact that there is a lot that is non-public, and that this does pose problems, in various ways, including epistemically.
This post, which I found interesting and useful, feels relevant to your first point. A relevant excerpt:
(I added two line breaks and changed where the diagram was, compared to the original text.)
(That post was written on behalf of my former employer, but not by me, and before I was aware of them.)
Great post! I laid down a variety of comments and suggestions within your post using hypothes.is, if you want to check them out (you need to install the browser add-in and get a free account to see them).
I prefer to comment within the text rather than here at the bottom, cutting and pasting quotes. Has anyone else here tried hypothes.is?
(By the way, I'm an academic economist. I don't have any stake in hypothes.is. I just like it.)
I fully agree with this!
Some ideas going through my mind, not too well refined:
Reading this post, I came to think of this old joke:
So, how could this be applied to cause prioritisation? For one, I think the area where the keys could be lost is quite large.
My second thought is that "How do we prioritise what to do, to achieve the most good?" sounds to me partly like an existential question, a bit like "What is the meaning of life?" Perhaps this goes back a bit to the dropped keys, with the GP research being done focusing on the visible area of what can be done concretely. Trying to answer the question of global priorities without a grand narrative of what the globe is to become seems incomplete to me.
Insofar as the EA movement wants to answer the concrete question of how to create change according to one's values, rather than discussing values as such, I would expect the different branches to remain interested in their respective agendas and not in how to compare them to one another. That would be counterproductive.
Also, despite EA's philosophical roots, I think perhaps not enough different parts of philosophy are being used. For example, if value and meaning are created by ourselves, what implications does that have for GPR? Has the subconscious been considered when it comes to increasing well-being? To me, the EA movement seems to operate within a humanistic, individualistic or similar worldview, and if a new grand narrative, like that outlined in Homo Deus or Digital Libido, were to emerge while the EA movement stayed in the old paradigm, it could very well end up looking to outsiders as if the primary question of concern is akin to how many angels can dance on the point of a needle.
Thank you very much for writing this up. However, I am not sure I understand your point, i.e. what you are referring to in:
"3. Policy and beyond - not happening - 2/10". Are you referring to your explanation within the subsection on The Parliament? If so, this would make sense to me.
Yes, that is correct. I have made some edits to clarify.