A lot of EA-orientated research doesn’t seem sufficiently focused on impact

Cross-posted from: https://open.substack.com/pub/gamingthesystem/p/a-lot-of-ea-orientated-research-doesnt?r=9079y&utm_campaign=post&utm_medium=web

NB: This post would be clearer if I gave specific examples, but I'm not going to call out specific organisations or individuals, to avoid making this post unnecessarily antagonistic.

Summary: On the margin, more resources should go towards action-guiding research rather than abstract research areas that don't have a clear path to impact. More resources should also go towards communicating that research to decision-makers and ensuring that it actually gets used.

Doing research that improves the world is really hard. Collectively, as a movement, I think EA does this better than any other group. However, too many person-hours are going into research that doesn't seem appropriately focused on actually causing positive change in the world. The period soon after the initial ChatGPT launch probably wasn't the right time for governments to regulate AI, but given the amount of funding that has gone into AI governance research, it seems like a bad sign that there were few (if any) viable AI governance proposals ready for policymakers to take off the shelf and implement.

Research aimed at doing good falls into two buckets (or somewhere in between):

  1. Fundamental research that improves our understanding of how to think about a problem or how to prioritise between cause areas

  2. Action-guiding research that analyses which path forward is best and comes up with a proposal

Feedback loops between research and impact are poor, so there is a risk of falling prey to motivated reasoning, because fundamental research can be more appealing for a couple of reasons:

  1. Culturally, EA seems to reward people for doing work that seems very clever and complicated, and sometimes this can be a not-terrible proxy for important research. But it isn't the same as doing work that actually moves the needle on the issues that matter. Academic research is far worse on this front, rewarding researchers for writing papers that sound clever (hence why so much academic writing is unnecessarily unintelligible), but EA shouldn't fall into the trap of conflating complexity with impact.

  2. People also enjoy discussing interesting ideas, and EAs in particular enjoy discussing abstract concepts. But intellectually stimulating work is not the same as impactful research, even if the research is looking into an important area.

Given that action-guiding research has a clearer path to impact, arguably the bar for choosing fundamental research over action-guiding research should be pretty high. If it's unlikely that a decision-maker would look at the findings of a piece of research and change their actions as a result, there should be a very strong alternative reason why the research is worthwhile. There is also a difference between research that you think should change the behaviour of decision-makers and research that will actually influence them. It might be clear to you that your work on some obscure form of decision theory has implications for the actions that key decision-makers should take, but if there is a negligible chance of them seeing that research or taking it on board, then it has very little value.

This is fine if your research's theory of change doesn't rely on convincing the relevant people (e.g. policymakers), but most research does rely on important people actually reading the findings, understanding them, and being convinced to take a different action from the one they would have taken otherwise. This is especially true in areas like AI governance, where implementing the findings requires governments to act.

Doing this successfully doesn't just rely on doing action-guiding research; you also have to communicate it to the relevant people. Some groups do this very well; others do not. Fighting for the attention of politicians might not be glamorous work, but if you want legislative change, it is what you have to do. It therefore seems odd to spend such a high proportion of time on research and then put little effort into making it actionable for policymakers and communicating it to them.

Some counterarguments in favour of fundamental research:

  1. We are so far away from having recommendations for decision-makers that we need to do fundamental research first, which will then let us work towards more action-guiding recommendations in the future. This is necessary in some areas, but the longer the causal chain to impact, the more you should discount the likelihood of it playing out.

  2. Fundamental research is more neglected in some areas, so you can have more impact by covering new ground than by competing for the attention of decision-makers. The counter-counterpoint is that there are plenty of areas where there just isn't much good action-guiding research, so there is a wealth of neglected, action-relevant research questions to choose from.

  3. Fundamental research takes longer to pay off, but it can become relevant in the future, and by that point someone who has focused on the area will be the expert who gets called upon by decision-makers. This is a fair justification, but in these cases you should still prefer a research area that is likely to become mainstream.

Putting more resources into fundamental research made sense when EA cause areas were niche and weird, although even then I think funding and talent were more skewed towards fundamental research than was optimal. Now that multiple cause areas have become more mainstream, decision-makers are more likely to be receptive to research findings in these areas.

EA think tanks seem to be becoming more savvy, gradually moving towards action-guiding research and focusing on communicating it to decision-makers, especially in AI governance. There is some inertia here, though, and I would argue groups have been too slow to respond. If you can't clearly articulate why someone would look at your research and take a different set of actions as a result, you probably shouldn't be doing it.