In your past experiences, what are the biggest barriers to getting your research in front of governmental organisations? (ex: official development aid grantmakers or policy-makers)
Biggest barriers in getting them to act on it?
I would break this down into a) the methods for getting research in front of government orgs and b) the types of research that gets put in front of them.
In general I think we (me for sure) haven’t been optimising for this enough to even know the barriers (unknown unknowns). I think historically we’ve been mostly focused on foundations and direct work groups, and less on government and academia. This is changing so I expect us to learn a lot more going forward.
As for known unknowns in the methods, I still don’t know who to actually send my research to in various government agencies, what contact method they respond best to (email, personal contact, public consultations, cold calling, constituency office hours?), or what format they respond best to (a one-page PDF with graphs, a video, bullet points, an in-person meeting?), though this public guide Emily Grundy made on UK submissions while at RP has helped me. Anecdotally, it seems remarkably easy to get in front of some: I know of one small animal advocacy organisation that managed to get a meeting with the Prime Minister of their country, and I myself have had one-on-one meetings with more than two dozen members of the UK and Irish parliaments and United Nations & European Union bureaucrats (non-RP work) with relative ease (e.g. an email with a prestigious-sounding letterhead).
My assumption is that government orgs are swamped with requests and petitions from NGOs, industry, peers, and constituents. So we need some way to stand out from the crowd, like representing a core constituency of theirs, being recommended to them by someone they deem credible (such as an already-established NGO), being affiliated with an already-credible institution (like a prestigious university), or proving to them that we can provide policy expertise and legislative intelligence better than most others can.
On b), I think I have a better sense of what content would be more likely to get in front of them. Niel Bowerman had some good insights on this in 2014, and the “legislative subsidy” approach Matthew Yglesias favours in the US context seems useful. There was an interesting study from Nakajima (2021) (twitter thread) which looked at what kinds of research evidence policymakers prefer (bigger samples; external validity that extends to the populations in their jurisdictions; no preference between observational and experimental designs), so I think we can explore whether the topics on our research agenda fit within those designs.

Update: wanted to add in this post from Zach Groff:
Happily, evidence does seem to affect policy, but in a diffuse and indirect way. Researcher Carol Weiss finds that large majorities (65%–89%) of policymakers report being influenced by research in their work, and roughly half of them strongly (Weiss 1980; Weiss 1977). It’s rare that policymakers pick up a study and implement an intervention directly. Instead, officials gradually work evidence into their worldviews as part of what Weiss calls a process of “enlightenment” (Weiss 1995). Evidence also influences policy in more political but potentially still benign ways: by justifying existing policies, warning of problems, suggesting new policies, or making policymakers appear self-critical (Weiss 1995; Weiss 1979; Weiss 1977).
There are a few methods that seem to successfully promote evidence-based policy in health care, education, and government settings where they have been tested. The top interventions are:
2a) Education—Workshops, courses, mentorship, and review processes change decision-makers’ behavior with regard to science in a few studies (Coburn et al. 2009; Matias 2017; Forman-Hoffman et al. 2017; Chinman et al. 2017; Hodder et al. 2017).
2b) Organizational structural changes—If an organization has evidence built into its structure, such as having a research division and hotline, encouraging and reviewing employees based on their engagement with research, and providing funding based on explicit evidence, this seems to improve the use of evidence in the organization (Coburn and Turner 2011; Coburn 2003; Coburn et al. 2009; Weiss 1980; Weiss 1995; Wilson et al. 2017; Salbach et al. 2017; Forman-Hoffman et al. 2017; Chinman et al. 2017; Hodder et al. 2017).
A few other methods for promoting research-backed policies seem promising based on a bit less evidence:
2c) Increasing awareness of evidence-based policy—Sending employees reminders or newsletters seems to increase research-based medicine, based on two high-quality review papers (Murthy et al. 2012; Grimshaw et al. 2012). Similarly, an all-around advocacy campaign to promote evidence-based practices among practitioners achieved substantial changes in one randomized controlled trial (Schneider et al. 2017).
2d) Access—Merely giving people evidence on effectiveness does not generally affect behavior, but when combined with efforts to motivate use of the evidence, providing access to research does improve evidence-based practice (Chinman et al. 2017; Wilson et al. 2017).
2e) External motivation and professional identities—Two recent RCTs and a number of reviews and qualitative studies find that rewarding people for using evidence and building professional standards around using research are helpful (Chinman et al. 2017; Schneider et al. 2017; Hodder et al. 2017; Forman-Hoffman et al. 2017; Weiss et al. 2005; Weiss 1995; Wilson et al. 2017; Weiss 1980; Weiss 1977; Matias 2017; Coburn 2005; Coburn 2003).
Interestingly, a few methods to promote evidence-based practices that policymakers and researchers often promote do not have much support in the literature. The first is building collaboration between policymakers and researchers, and the second is creating more research in line with policymakers’ needs. One of the highest-quality write-ups on evidence-based policy, Langer et al. (2016), finds that collaboration only works if it is deliberately structured to build policymakers’ and researchers’ skills. As for making research more practical for policymakers, when policymakers and researchers work together to produce research that is more relevant to policy, it seems to have little impact. This may be because, as noted in point (1), research influences policy in important but indirect ways, so making it more direct may not help much.
There is surprisingly and disappointingly little research on policymakers’ cognition and judgment in general. The best research, from Philip Tetlock (1985; 1994; 2005; 2010; 2014; 2016) and Barbara Mellers (2015), is familiar to the effective altruism community; it gives little information on how decision-makers respond to scientific evidence, but suggests that they are not very accurate at making predictions in general. Other research indicates that extremists are particularly prone to overconfidence and oversimplification, and conservatives somewhat more prone to these errors than liberals (Ortoleva and Snowberg 2015; Blomberg and Harrington 2000; Kahan 2017; Tetlock 1984; Tetlock 2000). Otherwise, a little research suggests that policymakers in general are susceptible to the same cognitive biases that affect everyone, particularly loss aversion, which may make policymakers irrationally unwilling to end ineffective programs or start proven but novel ones (Levy 2003; McDermott 2004). On the whole, little psychological research studies how policymakers react to new information.
If anyone reading this works at a governmental organization, we’d love to chat!
@Neil_Dullaghan we should chat.
Thank you for the well-researched response :-) Excited to maybe ask again in a year and see any changes in your practical lessons!