Help, Please: Integrating EA Ideas into Large Research Organization
Greetings!
I’m an Economist (read: low-level analyst) at RTI International, a non-profit research institute whose mission and vision (pasted below) seem very EA-aligned. In fact, I joined RTI because of this language. However, I’ve been here now for over two years, and am growing increasingly convinced that our actions don’t align with our words. We may say we’re addressing the world’s most critical problems, but much of our research and recommendations seem to just sit on dusty shelves or in untouched directories of governmental agencies once we pass our research off to our clients. Or worse, sometimes we work with clients whose missions seem antithetical to our “make the world a better place” ideals (e.g., the U.S. Department of Defense, ExxonMobil).
I’m currently volunteering my time to try to better align RTI’s actions with their mission and vision in order to significantly increase RTI’s level of positive impact.
The short of my request: do you all have any connections to folks who might be helpful to talk to as I work to integrate EA ideas and metrics into this large research organization? Are you one of those people yourself? (If so, I would love to talk.) I would additionally be interested in any resources you think might be helpful.
I’ve provided much more context below. Thank you for any thoughts, connections, or recommendations you’re able to send my way!
Warmest of wishes,
Lauren Zitney (she/her/hers)
------------------
RTI Mission and Vision:
Mission: To improve the human condition by turning knowledge into practice
Vision: We address the world’s most critical problems with science-based solutions in pursuit of a better future. Through innovation and technology, we deliver exemplary outcomes for our partners. We support one another in an environment grounded in integrity and respect.
---
Potential Labor Hour Contributions to High-Impact Causes:
In 2020, RTI employed 5,881 people.
Short/medium term goal (within 5-10 years): I think it is somewhere between 10% and 60% likely that RTI could contribute 903,321.6 labor hours per year to high-impact causes. (Meeting this goal would probably take 5-10 years, but then should be sustainable every year after.)
This means that every year (after the 5-10 year achievement window), RTI could contribute the labor-hour equivalent of 11.3 people each spending an entire 80,000-hour career in high-impact problem areas.
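For what it’s worth, the headline figure decomposes cleanly under one plausible set of assumptions. Here is a sketch of the arithmetic; the 1,920 hours/year and 8%-of-labor inputs are my guesses at how the figure might have been derived, and only the 5,881 headcount, the 903,321.6 hours, and the 80,000-hour career length come from the post itself:

```python
# Guessed decomposition of the 903,321.6 labor-hour goal. The
# hours_per_year and high_impact_share values are assumptions, not
# figures stated in the post.
employees = 5881                    # RTI headcount in 2020 (from the post)
hours_per_year = 1920               # assumed: 48 weeks x 40 hours
high_impact_share = 0.08            # assumed share of total labor

annual_high_impact_hours = employees * hours_per_year * high_impact_share
print(annual_high_impact_hours)     # 903321.6, matching the post

career_hours = 80_000               # one "80,000 Hours"-style career
careers_equivalent = annual_high_impact_hours / career_hours
print(round(careers_equivalent, 1)) # 11.3
```

If those inputs are roughly right, the goal amounts to redirecting about 8% of RTI’s total labor, which may be a more intuitive way to frame it for management than the raw hour count.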
If RTI achieves this goal, that would increase the likelihood that RTI could contribute even more high-impact hours than the short/medium term goal. The biggest part of the challenge is going to be finding funding for high-impact work. But if we can scale it, I think RTI would be very receptive to doing as much high-impact work as we can find funding for.
Here is a more detailed breakdown of that prediction:
---
If you’re curious about RTI’s financial scale:
RTI’s Revenue:
Fiscal Year 2018: $957 Million
Fiscal Year 2019: $963 Million
Fiscal Year 2020: $912 Million – I think this drop in revenue was due, at least in part, to COVID-19
Average Revenue over Past 3 years: $944 Million
---
These are specific areas I’d love help thinking through:
Evaluating the level of criticality of projects within our portfolio: I would love thoughts on how granular one should be when evaluating a given problem’s level of criticality. (For now, I’m using scale + neglectedness + tractability as my working definition of “critical.”)
RTI is currently very interested in pursuing “climate change” work because the Biden administration seems to be poised to spend a lot of money on “climate change.” But, it’s really hard to evaluate how critical climate change work is without being more specific. (Are we talking about carbon tax policies? Energy production? Accommodating expected climate-caused migration? etc...)
But at the same time, if we get too granular, then trying to evaluate RTI’s portfolio by level of criticality starts feeling very time-consuming and unwieldy. (When we talk about energy production, do we mean solar, hydro-electric, nuclear? And then within each of those, are we talking about making those types of energy production more efficient, or are we trying to subsidize them to make them more easily accessible, or are we doing public relations campaigns to try to make them more politically popular?)
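One way to keep the granularity problem manageable is to score coarsely first and only decompose the highest scorers into sub-areas. Here is a minimal sketch of such a pass using the scale/tractability/neglectedness framing; the intervention names, the 1-5 scores, and the multiplicative combination are all illustrative assumptions, not actual evaluations:

```python
# Sketch of a coarse criticality-scoring pass over candidate
# interventions. Every name and every 1-5 score below is a placeholder,
# not a real assessment.

def criticality(scale, tractability, neglectedness):
    """Combine the three factors multiplicatively (one common convention)."""
    return scale * tractability * neglectedness

# Hypothetical climate sub-areas with guessed scores:
candidates = {
    "carbon tax policy analysis": (4, 3, 2),
    "grid-scale energy storage": (5, 2, 2),
    "climate-migration planning": (3, 3, 4),
}

# Rank from most to least critical under these guessed inputs.
ranked = sorted(candidates.items(),
                key=lambda kv: criticality(*kv[1]),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: {criticality(*scores)}")
```

The point of keeping the first pass this coarse is that only the top-ranked areas need the expensive, fine-grained follow-up analysis (solar vs. hydro vs. nuclear, efficiency vs. subsidy vs. public relations), so the portfolio review stays tractable.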
Impact Evaluation: How can we measure the impact of any given project or deliverable to ensure our work has a life after the research phase ends?
RTI’s work spans a wide variety of fields. (International Development, Medicine, Tobacco Control, and Education are just a few examples…) Based on my experience, and the experiences of colleagues, our research is very rarely turned into practice. But, if we can quantify the types of projects and clients where our research is or is not impactful, then there may be steps we can take to work with clients more likely to generate impact, or to design deliverables to encourage implementation.
We often use “number of peer-reviewed journal articles published” as a metric for how impactful we are. However, most peer-reviewed research is published behind paywalls and is cited by only a few people within a niche subject area. (Case in point: Suleski, 2009.)
Maybe we should measure media attention per published article. It would be great if getting media attention for our research was the rule, rather than the exception. Or if it already is the rule, it’d be great to know that.
Impact Strategy: There are probably some very easy ways we can increase the impact of the work we’re already doing.
For instance, we could write better. Here, Helen Sword, author of Stylish Academic Writing, talks about how awful most academic writing is and how we can make it better. I imagine a rubric based on Helen Sword’s work could be really valuable both for researchers writing academic literature and for editors within RTI.
We could also invest a bit of time in distribution campaigns. (e.g., print research along with a summary of key takeaways and send them to policymakers we think may be interested)
Institutional Decision-Making: I’m aware of flash-forecasting as a norm for organizational communication.
Are there people who could teach me how to pilot flash-forecasting on a small scale? Or who may be able to give me some tips and tricks based on their experience trying to integrate it into their organizations?
Are there other similar ideas that could improve institutional decision-making that I should be learning or championing within my organization?
Randomized Controlled Trials: The most compelling evidence I could have for convincing RTI to adopt some of these ideas is scientific research.
Are there studies that have tried to measure the effect of any of the above interventions?
If not, it seems like RTI might be a good place to start. Are there any organizations that might be interested in funding this type of research?
For instance, I think a Randomized Controlled Trial measuring the impact of flash-forecasting on the amount of fruitful high-risk organizational spending could be really interesting. Or maybe measuring the citation rates of academics who use a writing-style-guide rubric while preparing academic articles vs. academics who don’t.
Hi Lauren and welcome to the Forum! I and a few of my colleagues at Rethink Priorities (a research consultancy maybe not too far away from RTI, except ~all of our clients are EAs, and we’re much smaller) are informally interested in this.
I am somewhat confused about this post and would be interested in asking some clarifying questions. Apologies in advance if the questions seem overly blunt/aggressive. Please read my willingness to ask these questions as a signal that I’m interested in your success (rather than as negativity because of the tone).
You mention you’re pretty junior at the org. What’s your degree of actual power or institutional buy-in at RTI to change the way decisions are made?
You say “I’m currently volunteering my time to try to better align RTI’s actions with their mission and vision in order to significantly increase RTI’s level of positive impact.” Does this mean that your manager/other higher-ups at the company are actively supportive of your work? Or is this more of a personal passion project without much/any institutional buy-in (whether official or informal)?
I’m interested in this because I think it’s very hard for junior people to change high-level institutional decisions unless they have an unusual degree of soft power/institutional buy-in or are unusually intrapreneurial.
Which is not to say this is impossible, tbc, or not worth trying even if there’s a low probability of success.
Why are clients willing to pay your institution if they’re not willing to use the outputs of your research?
This part is the most bizarre to me. Given the market dynamics involved, I’m surprised that your research does not change decisions.
One claim/critique I’ve heard about strategy or management consultancies is that they aren’t hired to discover true things or help make new decisions, but to justify existing decisions (eg by giving the stamp of approval and legitimacy to help solve internal principal-agent problems, or for PR reasons).
But you say “Based on my experience, and the experiences of colleagues, our research is very rarely turned into practice” Which sounds like the opposite problem!
High-quality research not changing decisions is certainly one of my larger fears about work at RP.
My proposed solution (in progress) for RP is to charge EA clients more, especially for work we’re inside-view less excited about, since presumably clients are less willing to pay large sums of money for research if they don’t think the research will plausibly affect their behavior.
To the extent you face similar dynamics, one possible solution is for you guys to also charge much more. I can imagine many companies/government agencies being willing to pay (say) $50/hour for random research outputs that may not affect real decisions much, but to be laser-focused on questions that actually matter (to them) if you’re charging >$500/hour.
(This may have unfortunate implications for job security)
(I think it’s very unlikely you can pull off such a large institutional change tbc)
What institutional incentives or individual incentives does your institution have to change to be aligned with their stated mission/vision? To quote one of my favorite blog posts about management, “Real values aren’t what you talk about, they’re what you do when times get tough.” To the extent that the real crux is that the real values of your institution are just pretty far away from the stated mission, I’m curious what your theory of change/story of winning looks like.
Are there ways you can start small and demonstrate impact to your organization via eg changing the decision-making process of a small team and expanding outwards? This seems more viable to me than trying to get a large bureaucracy to change en masse (even with small marginal changes).
Hi Linch,
My apologies for the delayed response.
I appreciate your questions, and I didn’t find the tone off-putting at all! Please read my frank tone as honesty and (attempted) clarity rather than a sign that I’m ungrateful for your input. :)
Your thoughts raised some new questions for me. Here are some responses, but for what it’s worth, I would not categorize all of them as “answers.”
I’d like to start with your final point because I think it will help contextualize both my original post and the rest of my thoughts that follow.
Linch re: starting small and expanding outwards into the organization --
Lauren: In short, yes! That’s my only plan for implementation. I plan to pilot a few of these ideas in small(ish) groups of willing people, and then bring the results to management. At this point, I would also enumerate all of the reasons I can think of for why implementing them more broadly would be helpful from a business perspective. (For example, there is an initiative fairly high-up in the organization centered on employee retention. This tells me RTI is nervous about our retention rates. Based on anecdotal evidence, I think better aligning our actions with our mission/vision would be really helpful in keeping our folks fulfilled.)
Now onto the rest of your questions, in the order you posed them:
Linch: You mention you’re pretty junior at the org. What’s your degree of actual power or institutional buy-in at RTI to change the way decisions are made?
Lauren: I don’t have a lot of power at the moment. However, I have reasons to believe my voice will be heard if I come with compelling evidence. (Some evidence for my beliefs: we have basically a “club of young, entry-level professionals” who have successfully pitched and implemented medium/large sized programs that have been enthusiastically supported by upper management.)
Linch: You say “I’m currently volunteering my time[…]” Does this mean that your manager/other higher-ups at the company are actively supportive of your work? Or is this more of a personal passion project without much/any institutional buy-in (whether official or informal)?
Lauren: My direct manager is supportive, although I’m not getting paid for this work. He’s also skeptical anything will change. That said, he’s not particularly “high-up” in the grand scheme of things, and I haven’t talked him through my actual plans. I’ve basically only told him “I’ve got problems with the way we run things,” and he’s been like “That’s fair. Go be loud about it, if you want.” That said, I’m in the process of implementing one idea designed to measure the impact of our work, and I’ve received really enthusiastic responses from high-up people who matter a lot to the success and viability of the project’s pilot.
Linch: I’m interested in this because I think it’s very hard for junior people to change high-level institutional decisions unless they have an unusual degree of soft power/institutional buy-in or are unusually intrapreneurial.
Lauren: It makes sense to be skeptical! I’m very excited about this project, at large (and am therefore very motivated). I also think I have a decent personality for politics/building consensus. So, I’m hoping to fall into the unusually intrapreneurial bucket (given the caveat that I’m definitely starting small).
Linch: Which is not to say this is impossible, tbc, or not worth trying even if there’s a low probability of success.
Lauren: I think there’s between a 5%-95% chance that all of this crumbles. (That is to say, I’m very unsure how much confidence I should have in this endeavor.) However, I think I will learn a lot about the challenges the EA community may face in mapping their ideas and metrics onto non-EA organizations, and I think I have a lot to learn from failing. (Even though success is obviously the much preferred outcome.)
Linch: Why are clients willing to pay your institution if they’re not willing to use the outputs of your research?
Lauren: I ask myself this question all the time.
Linch: This part is the most bizarre to me. Given the market dynamics involved, I’m surprised that your research does not change decisions.
Lauren: I may have been misleading in my original post. My current hypothesis is something like “our research does not change most decisions.” My real answer is, “I have a lot of thoughts on why it might not be impacting decisions, but we don’t collect that data right now, so I can’t speak with any authority. But I’m currently working on piloting a way to measure the impact of our work.” Some possible reasons we’re not impacting decisions:
“bureaucratic check boxes”: e.g., maybe the CDC is required, annually, to evaluate a public relations campaign they do. While this, in theory, impacts budgeting decisions, it mostly just justifies a line item so that they can back up their spending with research if anyone asks.
No political will: maybe the agency loves the research, but they don’t have the power to pass the relevant law
Lack of institutional power: this seems to be especially problematic for some of our international work
Lack of resources: maybe a great idea just doesn’t have the funding, or folks don’t have the bandwidth to act on the science
Technical issues: sometimes we build tools and they don’t seem to get used. Perhaps this is a technical expertise problem on the client-side; perhaps this is us making bad products; or maybe it’s some of each.
There are also cases where we do have impact: we were involved in producing a cool new medicine that treats “extensively drug-resistant tuberculosis”, and RTI research helped inform a new FDA regulation for cigarette packaging. (However, even within that, we’re pretty sure the FDA is going to get sued by the tobacco industry, so the regulation will, at best, be delayed for a while.)
Maybe my hypothesis is wrong: it’s important to note that my hypothesis is based completely on personal experience, and stories from colleagues. Perhaps we’ll measure it and find that we’re super impactful. I just don’t think that’s true.
Linch: One claim/critique I’ve heard about strategy or management consultancies is that they aren’t hired to discover true things or help make new decisions, but to justify existing decisions (eg by giving the stamp of approval and legitimacy to help solve internal principal-agent problems, or for PR reasons).
Lauren: Yeah, this might be most similar to the “bureaucratic check boxes” issue I note above.
Your questions are making me realize the word “change” may be really important in the hypothesis I stated above. Namely, is research worth doing if it doesn’t change anything, if it doesn’t “move the needle”? Or should we just seek to “impact” or “inform” decisions? The latter options seem like a lower bar. I’m just not currently convinced that we should spend our resources and brain power on checking boxes for people when we aren’t really changing decisions.
Perhaps we should be really enthusiastic about evaluating programs the first [insert reasonable number of] times to really understand the impact of an intervention. However, maybe after 5 years of fairly stable numbers, we should deprioritize that kind of work, or recommend it only be done once every two years?
Linch: But you say “Based on my experience, and the experiences of colleagues, our research is very rarely turned into practice” Which sounds like the opposite problem!
Lauren: I think a response here would be redundant, given my two previous responses.
Linch: High-quality research not changing decisions is certainly one of my larger fears about work at RP.
Lauren: I don’t think “turning knowledge into practice” is currently a strength of humanity’s. I think there’s a ton of great research that comes out of universities that very few people read, let alone use or implement. And I think implementing research (outside of private industry) is somewhat rare.
These opinions are all based on conversations with professors, so there might be evidence showing this is totally false. But the sentiment seems salient for professors all the way up to Harvard-level prestige. (Michael Porter of Harvard Business School expressed it in the beginning of a book he co-authored called The Politics Industry.)
“Research not impacting policy” is a larger question I think about a lot, and I don’t think RTI would be able to “fix” the problem—at least not in the short/medium term. But I think RTI may be able to better understand why knowledge is hard to turn into practice, and we may be able to come up with ways to lessen the gap between research and policy.
Linch: My proposed solution (in progress) for RP is to charge EA clients more, especially for work we’re inside-view less excited about, since presumably clients are less willing to pay large sums of money for research if they don’t think the research will plausibly affect their behavior.
Lauren: This point makes sense, but my understanding is that RTI is already quite expensive.
Your question makes me curious about what “pot of money” organizations use to pay for RTI research. For instance, does the FDA just have a “research and development” fund that they use to pay us? Is it because they have an annual “evaluate our advertisements” fund? How do the agencies justify the money they currently spend on our research? This could be an easy/medium question to answer for people more familiar with governmental funding processes and norms.
Linch: To the extent you face similar dynamics, one possible solution is for you guys to also charge much more. I can imagine many companies/government agencies being willing to pay (say) $50/hour for random research outputs that may not affect real decisions much, but to be laser-focused on questions that actually matter (to them) if you’re charging >$500/hour.
Linch: (This may have unfortunate implications for job security)
Linch: (I think it’s very unlikely you can pull off such a large institutional change tbc)
Lauren: All great thoughts. I think I would want to get a better understanding of why our research doesn’t really change (or “meaningfully impact”? I need to think about what word makes the most sense there…) decisions before suggesting anything like this. I don’t think I would be successful with this kind of suggestion without having a lot of solid evidence.
Linch: What institutional incentives or individual incentives does your institution have to change to be aligned with their stated mission/vision? To quote one of my favorite blog posts about management, “Real values aren’t what you talk about, they’re what you do when times get tough.” To the extent that the real crux is that the real values of your institution are just pretty far away from the stated mission, I’m curious what your theory of change/story of winning looks like.
Lauren: The strongest evidence I have on this point is that, in the summer of 2020, RTI embarked on a long-term, evidence-based mission to pursue racial justice both within our organization, and within the research we do. We seem to be investing a fair amount of time, money, and bandwidth (especially at the upper management levels). This mission has also been pretty constantly discussed throughout the year, and management just released a new set of specific changes they’re making throughout the organization to improve on metrics they’ve set for themselves. (And they really do seem to be done in good faith, and based in evidence, rather than “just fool people into thinking we care about this.”) This makes me think that when someone or something makes RTI leadership see a gap between our mission/vision and our actions, there is substantial drive to change our behavior.
In addition to considering what you can do as one person, I recommend using a classic EA community building move and trying to multiply yourself.
If there are ~6,000 people at your organization, it seems very likely that you aren’t the only person there who is somewhat familiar with effective altruism, and interested in applying it to RTI’s work.
Have you tried doing anything to search for other people at RTI with this interest? Some ideas for doing so:
Posting on a casual Slack channel if the org has one
Looking for existing groups/mailing lists that seem relevant (this is how we got initial interest in the EA group Linch and I co-founded at Epic)
Searching LinkedIn for people who work at RTI and have (a) mentioned EA on their profiles, or (b) listed 80,000 Hours or Giving What We Can or GiveWell as an “interest” (I’d guess those are the three biggest EA-shaped things on LinkedIn, but I haven’t checked)
I only spent a couple of minutes thinking about this, so given your inside knowledge, you could probably think of other methods!
I love this idea! I somewhat recently realized it would be helpful to try to just build an EA community within RTI, more generally. But you’re right that it’s very unlikely I’m alone right now, and I could really use the extra hands and brains to make progress on these initiatives more quickly.
Some off-the-cuff thoughts (I don’t have any expertise in this area so this might be totally off-base):
Founders Pledge might have relevant research regarding the impact of focusing on climate policy in the US: https://founderspledge.com/stories/the-implications-of-bidens-victory-for-impact-focused-climate-philanthropy, https://founderspledge.com/stories/climate-change-executive-summary
Somehow the field of machine learning has developed a norm of posting articles on arxiv.org (an open-access preprint archive) before submitting them to paywalled journals. Apparently this is because researchers want to publish their ideas/results as soon as possible, lest other researchers hit upon the same idea first. Would it be possible to have a norm at RTI of freely publishing the preprint version of each article? Publishing preprints is allowed by Elsevier, at least.
Would it be possible to collaborate with Vox Future Perfect and see if they would be interested in publishing an article about new research from RTI?
Thank you for these suggestions!
It looks like Founders Pledge could be useful for thinking specifically about climate change. At the moment, I’m really unsure whether it would make more practical sense to implement a general framework for evaluating the criticality of our entire research portfolio, or to rank the criticality of potential interventions within one small sub-section of it (e.g., climate change) to give the organization an example of what ranking criticality looks like. The answer is probably that I’ll need to do both. I will definitely keep Founders Pledge in mind as a resource.
Re: open access publications—Thank you for raising this point! This touches on a larger, tangential problem I’ve been thinking about: namely, lack of meaningful, public access to academic research, and how that relates to the gap between research and policy. I think there is certainly room for ideas like these as I get further along in the implementation process. I will add that idea to my idea tracker which includes all the things I need to create momentum around until they’re implemented. (It’s amazing how long each small idea will probably take to implement.)
As an aside: I recently learned that the word “scooped” is used to refer to when someone else publishes similar results to yours first. Like, “Oh no! We got scooped!” I think it’s a funny word to use, so I thought I’d share it in case it brings joy to others as well.
I will look into it! Off the top of my head, RTI may not want research published there as I think Vox is perceived as somewhat “left” leaning, and RTI fancies itself a deeply non-partisan organization.
On the ‘publishing’ and peer-review front, I’d like to propose a move to a different model. We can do very strong peer review, feedback, rating, filtering, curating, and ‘publishing’ of research without needing to go through the traditional ‘frozen 0/1 pdf-prison for-profit publication houses’.
We can use our new platform to subtly move the agenda to consider EA ideas and metrics.
I discuss this here [link fixed]. I’d love to get a critical mass together.
But the ‘gitbook’ link may actually be better going forward as a project planning and info-aggregation space.
Thanks for your post!
Would an open access repository plus an open peer review system like PREreview or the Open Peer Review Module meet your needs?
Also, is there a need to create an open access multidisciplinary repository (green open access) for effective altruism researchers? Or is the existing network of repositories enough?
Not sure if this was meant for me or Lauren. Anyways, I’ve been in touch with the people at PREreview and I think it ticks the right boxes.
I propose this in my “let’s do this already action plan HERE”
I think the crucial steps are
Set up an “experimental space” on PREreview allowing us to include additional, more quantitative metrics (they have offered this as a possibility)
Most important: Get funding and support (from Open Phil etc) and commitments (from GPI, RP, etc)
for people to do reviewing, rating, and feedback activities in our PREreview space
for ‘editorial’ people to oversee which research projects are relevant and assign relevant reviewers
Link arms with Cooper Smout and the “Free our Knowledge” pledges and initiatives like this one as much as possible
I don’t think setting up an OA journal with an impact factor is necessary. I think “credible quantitative peer review” is enough, and in fact the best mode. (But I am also supportive of open access journals with good feedback/rating models like SciPost, and it might be nice to have an EA-relevant place like this).
This doesn’t answer your (good) question, but people who have good answers here may also have useful answers to my question on having large impact in non-EA orgs for people without rare skillsets like ladder-climbing ability or entrepreneurship, and to my comment on Buck’s thread on non-standard EA career pathways.
Thank you for linking this post to those other posts! Definitely interesting, and I can see some overlap.