We’re Rethink Priorities. Ask us anything!
Hi all,
We’re the staff at Rethink Priorities and we would like you to Ask Us Anything! We’ll be answering all questions starting Friday, November 19.
About the Org
Rethink Priorities is an EA research organization focused on helping improve decisions among funders and key decision-makers within EA and EA-aligned organizations. You might know of our work on quantifying the number of farmed vertebrates and invertebrates, interspecies comparisons of moral weight, ballot initiatives as a tool for EAs, the risk of nuclear winter, or running the EA Survey, among other projects. You can see all of our work to date here.
Over the next few years, we’re expanding our farmed animal welfare and moral weight research programs, launching an AI governance and strategy research program, and continuing to grow our new global health and development wing (including evaluating climate change interventions).
Team
You can find bios of our team members here. Links on names below go to RP publications by the author (if any are publicly available at this point).
Leadership
Marcus Davis — Co-CEO — Focus on animal welfare and operations
Peter Wildeford — Co-CEO — Focus on longtermism, global health and development, surveys, and EA movement research
Animal Welfare
Dr. Kim Cuddington — Senior Ecologist — Wild animal welfare
Dr. William McAuliffe — Senior Research Manager — Wild animal welfare, farmed animal welfare
Jacob Peacock — Senior Research Manager — Farmed animal welfare
Dr. Jason Schukraft — Senior Research Manager — Moral weight, global health and development
Daniela Waldhorn — Senior Research Manager — Invertebrate welfare, farmed animal welfare
Dr. Neil Dullaghan — Senior Researcher — Farmed animal welfare
Dr. Samara Mendez — Senior Researcher — Farmed animal welfare
Saulius Šimčikas — Senior Researcher — Farmed animal welfare
Meghan Barrett — Entomology Specialist — Invertebrate welfare
Dr. Holly Elmore — Researcher — Wild animal welfare
Michael St. Jules — Associate Researcher — Farmed animal welfare
Longtermism
Michael Aird — Researcher — Nuclear war, AI governance and strategy
Linch Zhang — Researcher — Forecasting, AI governance and strategy
Surveys and EA movement research
David Moss — Principal Research Director — Surveys and EA movement research
Dr. David Reinstein — Senior Economist — EA Survey, effective giving research
Dr. Jamie Elsey — Senior Behavioral Scientist — Surveys
Dr. Willem Sleegers — Senior Behavioral Scientist — Surveys
Global Health and Development
Dr. Greer Gosnell — Senior Environmental Economist — Climate change, global health interventions
Ruby Dickson — Researcher — Global health interventions
Jenny Kudymowa — Researcher — Global health interventions
Bruce Tsai — Researcher — Climate change, global health interventions
Operations
Abraham Rowe — COO — Operations, finance, HR, development, communications
Janique Behman — Director of Development — Development, communications
Dr. Dominika Krupocin — Senior People and Culture Coordinator — HR
Carolina Salazar — Project and Hiring Manager — HR, project management
Romina Giel — Operations Associate — Operations, finance
Ask Us Anything
Please ask us anything — about the org and how we operate, about the staff, about our research… anything!
You can read more about us in our 2021 Impact and 2022 Strategy update or visit our website: rethinkpriorities.org.
If you’re interested in hearing more, please subscribe to our newsletter.
Also, we’re currently raising funds to continue growing in 2022. We consider ourselves funding constrained — we continue to get far more qualified applicants to our roles than we are able to hire, and have scalable infrastructure to support far more research. We accept and track restricted funds by cause area if that is of interest.
If you’d like to support our work, visit https://www.rethinkpriorities.org/donate, give on Giving Tuesday via Facebook to potentially secure matching funds, or email Janique Behman at janique@rethinkpriorities.org.
We’ll be answering all questions starting Friday, November 19.
In your yearly report you mention:
This surprised me, because I fairly often hear the advice of “donate to EA Funds” as the optimal thing to do, but it seems that if everybody did that, RP would not get funded. Do you have any thoughts on this?
I think donating to EA Funds is a very good thing to do, but I don’t think every donor should do it. For donors who have the time and personal fit, it would be good to make some direct donations of your own and support organizations directly: this helps those organizations hedge against idiosyncratic risk from particular funders and gives them more individual support (which matters for showing proof of traction to other funders, and also for some IRS requirements).
I don’t think any one funder likes to fund the entirety of an organization’s budget, especially when that budget is large. But between the different institutional funders (EA Funds, Survival and Flourishing Fund, OpenPhil, etc.), I still think there is a strong (but not guaranteed) chance we will be funded (at least enough to meet somewhere between our “Low” and “High” budget amounts). Though if everyone assumed we were not funding constrained, then we definitely would be.
My other pitch is that I’d like RP, as an organization, to have some direct financial incentive and accountability to the EA community as a whole, above and beyond our specific institutional funders, who have particular desires and fund us for particular reasons that don’t always match what the community as a whole wants or needs.
Lastly, if you trust us, we also value unrestricted funds highly (probably 1.5x-2x per dollar) because this allows us to start new research areas and programs that have less pre-existing proof/traction and get them to a point where they are ready to show bigger funders.
A couple of years ago it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you’ve hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?
So there are a bunch of questions in this, but I can answer some of the ops-related ones:
We haven’t had ops talent bottlenecks. We’ve had incredibly competitive operations hiring rounds (e.g. in our most recent round, ~200 applications, of which ~150 were qualified at least on paper), and I’d guess that 80%+ of our finalists are at least familiar with EA (which I don’t think is a necessary requirement, but it does suggest the explanation isn’t that we’re recruiting from a different pool).
Maybe there was a bigger bottleneck in ~2018, and EA has since grown a lot or reached more people with ops skills?
We spend a lot of time and resources on recruiting, and advertise our jobs really widely, so maybe we are reaching a lot more potential candidates than some other organizations were?
Management bottlenecks are probably our biggest current people-related constraint on growth (funding is a bigger constraint).
We’ve worked a lot on addressing this over the summer, partly by running a huge internship program, which gave a lot of current staff management experience (while also working with awesome interns on cool projects!), and by sending anyone who wants it through basic management training.
My impression is that we’ve gotten many more qualified applications in recent manager hiring rounds.
Bypassing bottlenecks
In general, I think we haven’t experienced these as much as other groups have (at least so far).
We tend to hire ops staff ahead of growth, as opposed to hiring them when we need them to take on work immediately (e.g. we hire ops staff while things are fine but we plan to grow in a few months, so the infrastructure is in place for expansion, rather than waiting until the current ops staff have too much on their plates).
We do a ton of prep to ensure that we are careful while scaling, thinking about how processes would scale, etc.
The above mentioned intern program really stress-tested a lot of processes (we doubled in size for 3 months), and has been really helpful for addressing issues that come with scaling.
Downsides to hiring quickly
I’d say that we’ve seen a mild amount of the downsides of growing in general, though it hasn’t necessarily been related to the speed of hiring (e.g. mildly more siloing of people, people not being sure what others are working on, etc.). We’ve been taking a lot of steps to try to mitigate this, especially as we get larger.
Here are some parts of my personal take (which overlaps with what Abraham said):
I think we ourselves feel a bit unsure “why we’re special”, i.e. why it seems there aren’t very many other EA-aligned orgs scaling this rapidly & gracefully.
But my guess is that some of the main factors are:
We want to scale rapidly & gracefully
Some orgs have a more niche purpose that doesn’t really require scaling, or may be led by people who are more skilled and excited about their object-level work than about org strategy, scaling, management, etc.
RP thinks strategically about how to scale rapidly & gracefully, including thinking ahead about what RP will need later and what might break by default
Three of the examples I often give are ones Abraham mentioned:
Realising RP will be management capacity constrained, and that it would therefore be valuable to give our researchers management experience (so they can see how much they like it & get better at it), and that this pushes in favour of running a large internship with 1-1 management of the interns
(This definitely wasn’t the only motivation for running the internship, but I think it was one of the main ones, though that’s partly guessing/vague memory.)
Realising also that maybe RP should offer researchers management training
Expanding ops capacity before it’s desperately urgently obviously needed
RP also just actually does the obvious things, including learning and implementing standard best practices for management, running an org, etc.
And that all seems to me pretty replicable!
OTOH, I do think the people at RP are also great, and it’s often the case that people who are good at something underestimate how hard it is, so maybe this is less replicable than I think. But I’d guess that smart, sensible, altruistic, ambitious people with access to good advisors could have a decent chance at making their org more like that or starting a new org like that, and that this could be quite valuable in expectation.
(If anyone feels like maybe they’re such a person and maybe they should do that, please feel free to reach out for advice, feedback on plans, pointers to relevant resources & people! I and various other people at RP would be excited to help it be the case that there are more EA-aligned orgs scaling rapidly & gracefully.
Some evidence of that is that I have in fact spent probably ~10 hours of my free time over the last few months helping someone work towards possibly setting up an RP-like org, and expect to continue helping them for at least several months. Though that was an unusual case, and I’d usually just quickly offer my highest-value input.)
I have private information (e.g. from senior people at Rethink Priorities and former colleagues) that suggests operations ability at RP is unusually high. They say that Abraham Rowe, COO, is unusually good.
The reason why this comment is useful is that:
This high operations ability might be hard to observe from the inside, if you are that person (Rowe) who is really good. Also, high ability operations people may be attracted to a place where things run well and operations is respected. There may be other founder effects from Rowe. This might add nuance to Rowe’s comment.
It seems possible operations talent was (is) limited or undervalued in EA. Maybe RP’s success is related to operations ability (allows management to focus, increases org-wide happiness and confidence).
I appreciate it, but I want to emphasize that I think a lot of this boils down to careful planning and prep in advance, a really solid ops team all around, and a structure that lets operations operate a bit separately from research, so Peter and Marcus can really focus on scaling the research side of the organization / think about research impact a lot. I do agree that overall RP has been largely operationally successful, and that’s probably helped us maintain a high quality of output as we grow.
I also think a huge part of RP’s success has been Peter, Marcus, and other folks on the team being highly skilled at identifying low-hanging fruit in the EA research space, and just going out and doing that research.
To the extent that you think good operations can emerge out of replicable processes rather than singularly talented ops managers, do you think it would be useful to write a longer article about how RP does operations? (Or perhaps you’ve already written this and I missed it)
This potentially sounds useful, and I can definitely write about it at some point (though no promises on when just due to time constraints right now).
I definitely think that we are very lucky to have Abraham working with us. I think another factor is that there are at least three people here (Abraham, Marcus, and me, and probably others too if given the chance) each capable of founding and running an organization, all focused instead on making just one organization really great and big.
I definitely think having Abraham be able to fully handle operations allows Marcus and me to focus nearly entirely on driving our research quality, which is a good thing. Marcus and I also have clear subfocuses (Marcus does animals and global health / development, whereas I focus on longtermism, surveys, and EA movement building) which allow us to further focus our time specifically on making things great.
This comment sounds like it’s partly implying “RP seems to have recently overcome these bottlenecks. How? Does that imply the bottlenecks are in general smaller now than they were then?” I think the situation is more like “The bottlenecks were there back then and still are now. RP was doing unusually well at overcoming the bottlenecks then and still is now.”
The rest of this comment says a bit more on that front, but doesn’t really directly answer your question. I do have some thoughts that are more like direct answers, but other people at RP are better placed to comment so I’ll wait till they do so and then maybe add a couple things.
(Note that I focus mostly on longtermism and EA meta; maybe I’d say different things if I focused more on other cause areas.)
In late 2020, I was given three quite exciting job offers, and ultimately chose to go with a combo of the offer from RP and the offer from FHI, with Plan A being to then leave FHI after ~1 year to be a full-time RP employee. (I was upfront with everyone about this plan. I can explain the reasoning more if people are interested.)
The single biggest reason I prioritised RP was that I believe the following three things:
“EA indeed seems most constrained by things like ‘management capacity’ and ‘org capacity’ (see e.g. the various things linked to from scalably using labor).
I seem well-suited to eventually helping address that via things like doing research management.
RP seems unusually good at bypassing these bottlenecks and scaling fairly rapidly while maintaining high quality standards, and I could help it continue to do so.”
I continue to think that those things were true then and still are now (and so still have the same Plan A & turn down other exciting opportunities).
That said, the picture regarding the bottlenecks is a bit complicated. In brief, I think that:
The EA community overall has made more progress than I expected at increasing things like management capacity, org capacity, available mentorship, ability to scalably use labor, etc. E.g., various research training programs have sprung up, RP has grown substantially, and some other orgs/teams have been created or grown.
But the community also gained a lot more “seriously interested” people and a lot more funding.
So overall the bottlenecks are still strong in that it still seems quite high-leverage to find better ways of scalably using labor (especially “junior” labor) and money. But it also feels worth recognising that substantial progress has been made and so a bunch more good stuff is being done; there being a given bottleneck is not in itself exactly a bad thing (since it’ll basically always be true that something is the main bottleneck), but more a clue about what kind of activities will tend to be most impactful on the current margin.
To what extent do you think a greater number of organisations conducting similar research to RP would be useful to promote healthy dialogue? Compared to having one specialist organisation in a field who is the go-to for certain questions.
I’ll let Peter/Marcus/others give the organizational answer, but speaking for myself I’m pretty bullish about having more RP-like organizations. I think there are a number of good reasons for having more orgs like RP (or somewhat different from us), and these reasons are stronger at first glance than the reasons for consolidation (e.g. reduced communication overhead, PR).
The EA movement has a strong appetite for research consultancy work, and RP is far from sufficient for meeting all the needs of the movement.
RP clones situated slightly differently can be helpful in allowing the EA movement to unlock more talent than RP will be able to.
For example, we are a remote-first/remote-only organization, which in theory means we can hire talent from anywhere. But in practice, many people may prefer working in an in-person org, so an RP clone with a physical location may unlock talent that RP is unable to productively use.
We have a particular hiring bar. It’s plausible to me that having a noticeably higher or lower hiring bar can result in a more cost-effective organization than us.
For example, having a higher hiring bar may allow you to create a small tight-knit group of supergeniuses pursuing ambitious research agendas.
Having a lower hiring bar may allow you to take larger chances on untapped EA talent and is maybe better for scalability. I also have a strong suspicion that a lot of needed research work in EA “just isn’t that hard”, and if it’s done by less competent people, this frees up other EA researchers to do more important work.
More generally, RP has explicitly or implicitly made a number of organizational decisions for how a research org can be set up, and it’s plausible/likely to me that greater experimentation at the movement level will allow different orgs to learn from each other.
Having RP competitors can help keep us on our toes, and improve quality via the normal good things that come from healthy competition.
Having an RP competitor can help spot-check us and point out our blindspots.
I’m pretty excited about an EA red-teaming institute, and maybe a good home for it is at RP. But even if it is situated at RP, who watches the watchmen? I think it’d be really good for there to be external checks/red-teaming/evaluation of RP research outputs.
Right now, the only org I trust to do this well is Open Phil. But Open Phil people are very busy, so I’d be really excited to see a different org spring up to red-team and evaluate us.
AFAICT, when doing very rough BOTECs on the expected impact of RP’s research, RP’s work looks massively cost-effective in expectation (flag: bias). If true, I think there’s a very simple economics argument that marginal cost (including opportunity cost) should equal marginal revenue (expected impact), so in theory we should be excited to see many competitors to RP until marginal cost-effectiveness becomes much lower.
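To make that concrete, here’s a toy version of the entry argument (a sketch with invented numbers, not actual RP estimates):

```latex
% Toy entry condition (illustrative numbers only, not RP estimates).
% B     = the funding bar: expected impact per dollar of the marginal EA grant
% MI(n) = marginal expected impact per dollar with n RP-like orgs operating
\[
\text{keep adding orgs while } MI(n) > B, \qquad \text{stop once } MI(n^{*}) \approx B
\]
% Example: if a BOTEC says MI(1) = 10B, and returns diminish as MI(n) = 10B/n,
% then roughly n^{*} = 10 RP-like orgs fit before marginal cost-effectiveness
% falls to the bar.
```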
I agree with that suspicion, especially if we include things like “Just collect a bunch of stuff in one place” or “Just summarise some stuff” as “research”. I think a substantial portion of my impact to date has probably come from that sort of thing (examples in this sentence from a post I made earlier today: “I’m addicted to creating collections”). It basically always feels like (a) a lot of other people could’ve done what I’m doing and (b) it’s kinda crazy no one had yet. I also sometimes don’t have time to execute on some of my seemingly-very-executable and actually-not-that-time-consuming ideas, and the time I do spend on such things does slow down my progress on other work that does seem to require more specialised skills. I also think this would apply to at least some things that are more classically “research” outputs than collections or summaries are.
But I want to push back on “this frees up other EA researchers to do more important work”. I think you probably mean “this frees up other EA researchers to do work that they’re more uniquely suited for”? I think (and your comment seems to imply you agree?) that there’s not a very strong correlation between importance and difficulty/uniqueness-of-skillset-required—i.e., many low-hanging fruit remain unplucked despite being rather juicy.
Strongly agree with this. While I was working on LEAN and the EA Hub I felt that there were a lot of very necessary and valuable things to do, that nobody wanted to do (or fund) because they seemed too easy. But a lot of value is lost, and important things are undermined if everyone turns their noses up at simple tasks. I’m really glad that since then CEA has significantly built up their local group support. But it’s a perennial pitfall to watch out for.
I think this is probably true. One thing to flag here is that people’s counterfactuals are not necessarily in research. One belief that I recently updated towards but haven’t fully incorporated into my decision-making is that for a non-trivial subset of EAs in prominent org positions (particularly STEM-trained risk-neutral Americans with elite networks), counterfactuals might be more like expected E2G earnings in the mid-7 figures or so* rather than the low- to mid-6 figures I was previously assuming.
*To be clear, almost all of this EV is in the high-upside outcomes; very few people make 7 figures working jobby jobs.
I agree on all points (except the nit-pick in my other comment).
A couple things I’d add:
I think this thread could be misread as “Should RP grow a bunch but no similar orgs be set up, or should RP grow less but other similar orgs are set up?”
If that was the question, I wouldn’t actually be sure what the best answer would be—I think it’d be necessary to look at the specifics, e.g. what are the other org’s specific plans, who are their founders, etc.?
Another tricky question would be something like “Should [specific person] join RP with an eye to helping it scale further, join some org that’s not on as much of a growth trajectory and try to get it onto one, or start a new org aiming to be somewhat RP-like?” Any of those three options could be best depending on the person and on other specifics.
But what I’m more confident of is that, in addition to RP growing a bunch, there should also be various new things that are very/somewhat/mildly RP-like.
Somewhat relatedly, I’d guess that “reduced communication” and “PR” aren’t the main arguments in favour of prioritising growing existing good orgs over creating new ones or growing small potentially good ones. (I’m guessing you (Linch) would agree; I’m just aiming to counter a possible inference.)
Other stronger arguments (in my view) include that past performance is a pretty good indicator of future performance (despite the protestation of a legion of disclaimers) and that there are substantial fixed costs to creating each new org.
See also this interesting comment thread.
But again, ultimately I do think there should be more new RP-like orgs being started (if started by fitting people with access to good advisors etc.)
One other thing I’d add to Linch’s comments, adapting something I wrote in another comment in this AMA:
If anyone feels like maybe they’re the right sort of person to (co-)found a new RP-like org, please feel free to reach out for advice, feedback on plans, pointers to relevant resources & people! I and various other people at RP would be excited to help it be the case that there are more EA-aligned orgs scaling rapidly & gracefully.
Some evidence that I really am keen on this is that I’ve spent probably ~10 hours of my free time over the last few months helping a particular person work towards possibly setting up an RP-like org, and expect to continue helping them for at least several months. (Though that was an unusual case and I’d usually just quickly offer my highest-value input.)
Quick clarifying question: is
referring to RP, or more field-specific organizations like e.g. CSET or an (AFAIK, hypothetical) organization focused on answering questions on medical approaches to existential biosecurity.
Put another way, is your question asking about larger RP vs RP + several RP clones, or RP + several RP clones vs. RP + several specialist organizations?
Thanks for the clarifying question. I meant larger RP vs RP+ several RP clones (basically new EA research orgs that do cause/intervention/strategy prioritisation).
The case of larger RP vs RP + several specialist organisations is also interesting though; it’s slightly analogous to the scenario of 80K and Animal Advocacy Careers. I wonder: in a hypothetical world where 80K was more focused on animal welfare, would/should they refer all animal-interested people to AAC, since AAC has greater domain expertise, or should they advise some animal people themselves, as they bring a slightly different lens to the issue? The relevant comparison might be RP and Wild Animal Initiative, for example.
Do you also feel funding constrained in the longtermist portion of your work? (Conventional wisdom is that neartermist causes are more funding constrained than longtermist ones.)
Mostly yes. It definitely is the case that, if we were given more cash, we could meaningfully accelerate our longtermism team in ways that we cannot with the cash we currently have. Thus funding is still an important constraint on scaling our work, in addition to some other important constraints.
However, I am moderately confident that between the existing institutional funders (OpenPhil, Survival and Flourishing Fund, Long-Term Future Fund, Longview, and others) we could meet a lot of our funding request; we just haven’t asked yet. But (1) it’s not guaranteed that this would go well, so we’d still appreciate money from other sources, (2) it would be good to add some diversity beyond these sources, (3) money from other sources could help us spend less time fundraising and more time accelerating our longtermism plans, (4) more funding sooner could help us expand sooner and with more certainty, and (5) it’s likely we could still spend more money than these sources would give.
This comment matches my view (perhaps unsurprisingly!).
One thing I’d add: I think Peter is basically talking about our “Longtermism Department”. We also have a “Surveys and EA Movement Research Department”. And I feel confident they could do a bunch of additional high-value longtermist work if given more funding. And donors could provide funding restricted to just longtermist survey projects or even just specific longtermist survey projects (either commissioning a specific project or funding a specific idea we already have).
(I feel like I should add a conflict of interest statement that I work at RP, but I guess that should be obvious enough from context! And conversely I should mention that I don’t work in the survey department, haven’t met them in-person, and decided of my own volition to write this comment because I really do think this seems like probably a good donation target.)
Here are some claims that feed into my conclusion:
Funding constraints: My impression is that that department is more funding constrained than the longtermism department.
(To be clear, I’m not saying the longtermism department isn’t at all funding constrained, nor that this single factor guarantees that it’s better to fund RP’s survey and EA movement research department than RP’s longtermism department.)
Skills and comparative advantage:
They seem very good at designing, running, and analysing surveys
And I think that that work gains more from specialisation/experience/training than one might expect
And there aren’t many people specialising for being damn good at designing, running, and/or analysing longtermism-relevant surveys
I think the only things I’m aware of are RP, GovAI, and maybe a few individuals (e.g., Lucius Caviola, Stefan Schubert, Vael Gates)
And I’d guess GovAI wouldn’t scale that line of work as rapidly as RP could with funding (though I haven’t asked them), and individual people are notably harder to scale...
There’s good work to be done:
We have a bunch of ideas for longtermism-relevant surveys and I think some would be very valuable
(I say “some” because some are like rough ideas and I haven’t thought in depth about all of them yet)
I/we could probably expand on this for potential donors if they were interested
I think I could come up with a bunch more exciting longtermism-relevant surveys if I spent more time doing so
I expect a bunch of other orgs/stakeholders could as well, at least if we gave them examples, ideas, helped them brainstorm, etc.
Assume you had uncapped funding to hire staff at RP from now on. In such a scenario, how many more staff would you expect RP to have in 5 years from now? How much more funding would you expect to attract? Would you sustain your level of impact per dollar?
For instance, is it the case that you think that RP could be 2x as large in five years and do 3x as much funded work at a 1.5x current impact per dollar? Or a very different trajectory?
I ask as an attempt to gauge your perception of the potential growth of RP and this sector of EA more generally.
It’s been hard for me to make five year plans, given that we’re currently only a little less than four years old and the growth between 2018 when we started and now has already been very hard to anticipate in advance!
I do think that RP could be 2x as large in five years. I’m actually optimistic that we could double in 2-3 years!
I’m less sure about how much funded work we’d do. Actually, I’m not sure what you mean by funded work: do you mean work directly commissioned by stakeholders, as opposed to us doing work we proactively identify?
I’m also less sure about impact per dollar. We’ve found this to be very difficult to track and quantify precisely. Perhaps as 80,000 Hours talks about “impact-adjusted career changes”, we might want to talk about “impact-adjusted decision changes”—and I’d be keen to generate more of those, even after adjusting for our growth in staff and funding. I think we’ve learned a lot more about how to unlock impact from our work and I think also there will have been more time for our past work to bear fruit.
One additional point I’ll note is that most (though not all) of our impact comes from having a multiplier effect on the EA movement. Unlike, say, a charity distributing bednets, or an academic trying to answer ML questions in AI safety, our impact is inherently tied to the impact of EA overall. So an important way we’ll have a greater impact per dollar (without making many changes ourselves) is via the movement growing a lot in quantity, quality, or both.
Put another way, RP is trying to have a multiplier effect on the EA movement, but multiplication is less valuable than addition if the base is low.
A third way in which we rely on the EA movement (the second one is money) is that almost all of our hires come from EA, so if EA outreach to research talent dries up (or decreases in quality), we’d have a harder time finding competent hires.
Thanks, that’s exciting to hear!
For funded work, I wanted to know how much funding you expect to receive to do work for stakeholders.
This is a little hard to tell, because often we receive a grant to do research, and the outcomes of that research might be relevant to the funder, but also broadly relevant to the EA community when published, etc.
But in terms of just pure contracted work, in 2021 so far we’ve received around $1.06M of contracted work (compared to $4.667M in donations and grants, including multi-year grants), though much of the spending of that $1.06M will be in 2022.
In terms of expectations, I think that contracted work will likely grow as a percentage of our total revenue, but ideally we’d see growth in donations and grants too.
How valuable do you think your research to date has been? Which few pieces of your research to date have been highest-impact? What has surprised you or been noteworthy about the impact of your research?
I think we cover this in our 2021 Impact and 2022 Strategy update!
By its reputation, output, and the quality and character of management and staff, Rethink Priorities seems like an extraordinarily good EA org.
Do you have any insights that explain your success and quality, especially that might inform other organizations or founders?
Alternatively, is your success due to intrinsically high founder quality, which is harder to explain?
Thanks Charles for your unprompted, sincere, honest, and level-headed assessment.
Your check will be in the mail in 3-7 business days.
Yes, thank you, kind sir.
Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:
We try to identify really high quality hires, bring them on, train them up and trust them to execute their jobs.
We seek feedback from our staff, and proactively seek to improve any processes that aren’t working.
We try to follow research and management best practices, and gather ideas on these fronts from organizations and leaders that have previously been successful.
We try to make RP a genuinely pleasant place to work for everyone on our staff.
As to your ideas about the possibility of RP’s success being due to high founder quality, I think Peter and I try very hard to do the best we can, but in part due to survivorship bias it’s difficult for me to say that we have any extraordinary skills others don’t possess. I’ve met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it’s some combination of these traits, luck, and good timing that has led us to be where we are today.
(This other comment of mine is also relevant here, i.e. if answering these questions quickly I’d say roughly what I said there. Also keen to see what other RP people say—I think these are good questions.)
What are the top 2-3 issues Rethink Priorities is facing that prevent you from achieving your goals? What are you currently doing to work on these issues?
I think that to better achieve Rethink Priorities’ goals, we need Rethink Priorities to be bigger and more efficient.
I think the relevant constraints for “why aren’t we bigger?” are:
(1): sufficient number of talented researchers that we can hire
(2): sufficient number of useful research questions we can tackle
(3): ability to ensure each employee has a positive and productive experience (basically, people management constraints and project management constraints)
(4): ops capacity—ensuring our ops team is large enough to support the team
(5): Ops and culture throughput—giving the ops team enough time to onboard people (regardless of ops team size) and giving people enough time to adapt to the org’s growth. That is, even if we were otherwise unconstrained, I still think we can’t just 10x in one year because that would just feel too ludicrous.
(6): proof/traction (to both ourselves and to our external stakeholders/funders) that we are on the right path and “deserve” to scale (this also just takes time)
(7): money to pay for all of the above
~
It doesn’t look like (1) or (2) will constrain us anytime soon.
My guess is that (3) is our current most important constraint, but one we are working on by experimenting with directly hiring managers and by promoting people into management internally. We rolled out management training this summer and also used our internship program, in part, to train management capacity. From a project management perspective, we recently hired a manager and have rolled out Asana across the team, and we will continue to focus on the Asana processes we’ve built and make sure they are working before scaling more.
For (4), this will become a constraint from time to time, but we solve it by proactively identifying ops bottlenecks and hiring for them well in advance. So far this has gone well.
For (5), I think this will be our next biggest constraint once we solve (3). I think this is best solved just with time to let the current level of growth become normal as well as listening to staff and their concerns. We just launched our biannual staff survey and we are awaiting important staff feedback before hiring more.
For (6), I think this also comes with time and can probably be seen in combination with (5).
For (7), I do think we are funding constrained right now—we have room for more funding and definitely need to get money from somewhere in order to continue our work. I’m optimistic that we can get money from our current institutional sources because we haven’t tried too recently to ask them for money and I think they still like us and want us to continue to succeed. But I think, as I’ve mentioned elsewhere, we’d still like other people to support our work to enable us to diversify our funding sources, give us more flexible unrestricted funding that is 1.5x-2x as valuable per dollar to us, and to build us more sustainability / flexibility in the face of idiosyncratic risk.
Sorry that was seven things instead of 2-3, but I think it helps to communicate the full picture.
This is very well-communicated! Thank you for taking the time to type all that out and label the responses :-)
Regarding (3) - making each employee happy and productive
Are there any examples of organisations that you aspire to model RP’s practices after? I.e., exemplars of how to “be bigger and more efficient” while making each employee happy and productive?
I ask because I’d love to learn about real-life management cultures/tools to grow my skillset :-)
I’ve seen Peter, our Co-CEO, highlight Netflix culture as something that inspired him: https://jobs.netflix.com/culture
I’d clarify that I was inspired by that particular document—especially the large degree of employee ownership—but I’m much less inspired by the culture at Netflix as it is actually practiced, from what I hear from some employees.
What lessons would you pass onto other EA orgs from running an internship program?
Thanks so much for this question!
We have learned a lot during our Fellowship/Internship Program. Several main considerations come to mind when thinking about running a fellowship/internship program.
Managers’ capacity and preparedness – hosting a fellow/intern may be a rewarding experience. However, working with fellows/interns is also time-consuming. It seems to be important to keep in mind that managers may need to have a dedicated portion of time to:
Prepare for their fellows/interns’ arrival, which may include drafting a work plan, thinking about goals for their supervisees, and establishing a plan B, in case something unexpected comes up (for example, data is delayed, and the analysis cannot take place)
Explain tasks/projects, help set goals, and brainstorm ideas on how to achieve these goals
Regularly meet with their fellows/interns to check in, monitor progress, as well as provide feedback and overall support/guidance throughout the program
Help fellows/interns socialize and interact with others to make them feel included, welcomed, and a part of the team/organization.
Operations team capacity and preparedness – there are many different tasks associated with each stage of the fellowship/internship program. It’s crucial to ensure that the Operations Team has enough capacity and time to hire, onboard, support, and offboard fellows/interns, especially when the program is open to candidates worldwide. For example, we work with an international employment organization that acts as a proxy employer in each of the countries our staff and fellows/interns are based in. It’s important to take into account the amount of coordination needed between the international employment organization, internal staff, and fellows/interns (the amount will vary significantly between adding 2-3 vs. 10 fellows/interns to the team).
Internal processes – capacity is one thing, but having strong internal processes developed beforehand appears to be equally vital. This includes hiring and candidate selection procedures, establishing reasonable timelines, setting up check-in structures with both fellows/interns and managers, and organizing relevant professional development and social opportunities.
Hiring internationally and remotely – it may be worth considering where most of the team members are located. If most of the staff are in US time zones, then it may make sense to think about how that could affect candidates from completely different time zones (e.g., Australia and Oceania). Will they be able to communicate with their managers easily? Will they have enough opportunities to interact with other fellows/interns and colleagues?
In summary, a fellowship or internship program may be truly beneficial to the organization running it. Most importantly, however, the questions are how to make the program beneficial to the fellows/interns, and how it will impact their future education paths and careers.
Two things I’d add to the above answer (which I agree with):
RP surveyed both interns and their managers at the end of the program, which provided a bunch of useful takeaways for future internships. (Many of which are detailed or idiosyncratic and so will be useful to us but aren’t in the above reply.) I’d say other internship programs should do the same.
I’d personally also suggest surveying the interns and maybe managers at the start of the internship to get a “baseline” measure of things like interns’ clarity on their career plans and managers’ perceived management skills, then asking similar questions at the end, so that you can later see how much the internship program benefitted those things. Of course this should be tailored to the goals of a particular program.
What lessons we should pass on to other orgs / research training programs will vary based on the type of org, type of program, cause area focus, and various other details. If someone is actually running or seriously considering running a relevant program and would be interested in lessons from RP’s experience, I’d suggest they reach out! I’d be happy to chat, and I imagine other RP people might too.
Good question! Please enjoy me not answering it and instead lightly adapting an email I sent to someone who was interested in running an EA-aligned research training program, since you or people interested in your question might find this a bit useful. (Hopefully someone else from RP will more directly answer the question.)
“Cool that you’re interested in doing this kind of project :)
I’d encourage you to join the EA Research Training Program Slack workspace and share your plans and key uncertainties there to get input from other people who are organizing or hoping to organize research training programs. [This is open only to people organizing or seriously considering organizing such programs; readers should message me if they’d like a link.]
You could also perhaps look for people who’ve introduced themselves there and who it might be especially useful to talk to.
Resources from one of the pinned posts in that Slack:
You might also find these things useful:
Michael’s quick notes on RP’s internship, RP’s processes, how RP picks research projects, etc. [for SERI etc.]
Collection of collections of resources relevant to (research) management, mentorship, training, etc.
Improving the EA-aligned research pipeline
I’d also encourage you to seriously consider applying for funding, doing so sooner than you might by default, and maybe even applying for a small amount of funding to pay for your time further planning this stuff (if that’d be helpful). Basically, I think people underestimate the extent to which EA Funds are ok with unpolished applications, with discussing and advising on ideas with applicants after the application is submitted, and with providing “planning grants”. (I haven’t read anything about your plans and so am not saying I’m confident you’ll get funding, but applying is very often worthwhile in expectation.) More info here:
https://forum.effectivealtruism.org/posts/DqwxrdyQxcMQ8P2rD/list-of-ea-funding-opportunities
https://forum.effectivealtruism.org/posts/4tsWDEXkhincu7HLb/things-i-often-tell-people-about-applying-to-ea-funds
[...] …the caveat to all of that is that I know very little about your specific plans—this is basically all just the stuff I think it’s generically worth me mentioning to people in EA interested in running research training programs.
Best of luck with the planning, and feel free to send through specific questions where I could perhaps be useful :)
Best,
Michael”
Why do you have the distribution of focus on health/development vs animals vs longtermism vs meta-stuff that you do? How do you feel about it? What might make you change this distribution, or add or remove priority areas?
Thanks for the question! I think describing the current state will hint at a lot on what might make us change the distribution, so I’m primarily going to focus on that.
I think the current distribution of what we work on is dependent on a number of factors, including but not limited to:
What we think about research opportunities in each space
What we think about the opportunity to exert meaningful influence in the space
Funding opportunities
Our ability to hire people
In a sense, I think we’re cause neutral in that we’d be happy to work on any cause provided good opportunities arise to do so. We do have opinions on high-level cause prioritization (though I know there’s some disagreement inside RP about this topic), but given the changing marginal value of additional work in any given area, the above considerations, and others, we meld our work (and staff) to where we think we can have the highest impact.
In general, though this is fairly generic and high level, were we to come to think our work in a given area wasn’t useful, or that the opportunity cost of continuing it was too high, we would decide to pursue other things. Similarly, if the reverse were true for some particular possible projects we weren’t working on, we would take them on.
Thanks for your reply. I think (1) and (2) are doing a ton of work — they largely determine whether expected marginal research is astronomically important or not. So I’ll ask a more pointed follow-up:
Why does RP think it has reason to spend significant resources on both shorttermist and longtermist issues (or is this misleading; e.g., do all of your unrestricted funds go to just one)? What are your “opinions on high level cause prioritization” and the “disagreement inside RP about this topic”? What would make RP focus more exclusively on either short-term or long-term issues?
[This is not at all an organizational view; just some thoughts from me]
tl;dr: I think mostly RP is able to grow in multiple areas at once without there being strong tradeoffs between them (for reasons including that RP is good at scaling & that the pools of funding and talent for each cause area are somewhat different). And I’m glad it’s done so, since I’d guess that may have contributed to RP starting and scaling up the longtermism department (even though naively I’d now prefer RP be more longtermist).
I think RP is unusually good at scaling, at being a modular collection of somewhat disconnected departments focusing on quite different things and each growing and doing great stuff, and at meeting the specific needs of actors making big decisions (especially EA funders; note that RP also does well at other kinds of work, but this type of work is where RP seems most unusual in EA).
Given that, it could well make sense for RP to be somewhat agnostic between the major EA causes, since it can meet major needs in each, and adding each department doesn’t very strongly trade off against expanding other departments.
(I’d guess there’s at least some tradeoff, but it’s possible there’s none or that it’s on-net complementary; e.g. there are some cases where people liking our work in one area helped us get funding or hires for another area, and having lots of staff with many areas of expertise in the same org can be useful for getting feedback etc. One thing to bear in mind here is that, as noted elsewhere in this AMA, there’s a lot of funding and “junior talent” theoretically available in EA and RP seems unusually good at combining these things to produce solid outputs.)
I would personally like RP to focus much more exclusively on longtermism. And sometimes I feel a vague pull to advocate for that. But RP’s more cause-neutral, partly demand-driven approach has worked out very well from my perspective so far, in that it may have contributed to RP moving into longtermism and then scaling up that team substantially.[1] (I mean that from my perspective this is very good for the world, not just that it let me get a cool job.) So I think I should endorse that overall decision procedure.
This feels kind-of related to moral trade and maybe kind-of to the veil of ignorance.
That’s not to say that I think we shouldn’t think at all about what areas are really most important in general, what’s most important on the current margin within EA, where our comparative advantage is, etc. I know we think at least somewhat about those things (though I’m mostly involved in decisions about the longtermism department rather than broader org strategy so I don’t bother trying to learn the details). But I think maybe the tradeoffs between growing each area are smaller than one might guess from the outside, such that that sort of high-level internal cause area priority-setting is somewhat less important than one might’ve guessed.
This doesn’t really directly answer your question, since I think Peter and Marcus are better placed to do so and since I’ve already written a lot on this semi-tangent...
[1] My understanding (I only joined in late 2020) is that for a brief period at its very beginning, RP had no longtermist work (I think it was just global health & dev and animals?). Later, it had longtermism as just a small fraction of its work (1 researcher). RP only made multiple hires in this area in late 2020, after already having had substantial successes in other areas. At that point, it would’ve been unsurprising if people at the org thought they should just go all-in on their existing areas rather than branching out into longtermism. But they instead kept adding additional areas, including longtermism. And now the longtermism team is likely to expand quite substantially, which again could’ve been not done if the org was focusing more exclusively on its initial main focus areas.
What is your process for identifying and prioritizing new research questions? And what percentage of your work is going toward internal top priorities vs. commissioned projects?
[This is like commentary on your second question, not a direct answer; I’ll let someone else at RP provide that.]
Small point: I personally find it useful to make the following three-part distinction, rather than your two-part distinction:
Academia-like: Projects that we think would be valuable although we don’t have a very explicit theory of change tied to specific (types of) decisions by specific (types of) actors; more like “This question/topic seems probably important somehow, and more clarity on it would probably somehow inform various important decisions.”
E.g., the sort of work Nick Bostrom does
Think-tank-like: Projects that we think would be valuable based on pretty explicit theories of change, ideally informed by actually talking to a bunch of relevant decision-makers to get a sense of what their needs and confusions are.
Consultancy-like: Projects that one specific stakeholder (or I guess maybe one group of coordinated stakeholders) has explicitly requested we do (usually, but not necessarily, also paying the researchers to do it).
I think RP, the EA community, and the world at large should very obviously have substantial amounts of each of those three types of projects / theory of change.
I think RP specialises mostly for the latter two models, whereas (for example) FHI specialises more for the first model and sometimes the second. (But again, I’ll let someone else at RP say more about specific percentages and rationales.)
(See also my slides on Theory of Change in Research, esp. slide 17.)
Is there any particular reason why biosecurity isn’t a major focus? As far as I can see from the list, no staff work on it, which surprises me a little.
The short answer is that a) none of our past hires in longtermism (including management) had substantive biosecurity experience or biosecurity interest and b) no major stakeholder has asked us to look into biosecurity issues.
The extended answer is pretty complicated. I will first go into why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity, explain why I think those reasons aren’t as applicable to RP, and then why we haven’t gone into biosecurity anyway.
Why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity
My personal impression is that EA/existential biosecurity experts currently believe that it’s very easy for newcomers in the field to do more harm than good, especially if they do not have senior supervision from someone in the field. This is because existential biosecurity in particular is rife with information hazards, and individual unilateral actions can invoke the unilateralist’s curse.
Further, all the senior biosecurity people are very busy, and are not really willing to take the chance with someone new unless they a) have experience (usually academic) in adjacent fields or b) are credibly committed to do biosecurity work for a long period of time if they’re a good fit.
Since most promising candidates are understandably not excited to commit to doing biosecurity work for a long period of time without doing some work on it first, this creates a chicken-and-egg problem.
(Note again this is my own impression. Feel free to correct me, any biosecurity experts reading this!)
Why RP in particular may be a good place to start a biosecurity career anyway
I think RP is institutionally trusted enough by the major groups to be careful if we were to wade into biosecurity. In particular, we would be careful not to publish things that we think are potentially dangerous without running them by a few more experienced people first, and we are credibly very willing to take things down quickly if we get a “cease and desist” from more experienced parties (and then carefully reassess offline whether this was the correct move).
On an individual level, I have a number of contacts with some of the key biosecurity people in EA, both through covid forecasting before joining RP, and socially. In addition, I believe I can credibly pull off “non-expert making useful and not-dangerous contributions to biosecurity” as my covid forecasting and cultured meat analysis experiences have at least somewhat demonstrated an ability to provide value via reading, disseminating, and evaluating fairly technical work in adjacent domains (as a non-expert).
So I’d maybe be excited to do biosecurity projects within my range of capabilities if stakeholders reached out to us with sufficiently important/interesting projects, or (more plausibly) advise colleagues/interns/contractors who can provide enough technical expertise while I provide the less technical guidance.
Why we haven’t gone into biosecurity anyway
As you may have already inferred from past sentences, the biggest reason* is that none of our hires have had biosecurity experience or even strong interest. This is another chicken-and-egg problem. We haven’t done biosecurity work because we don’t have strong biosecurity hires, but we don’t have strong (enough) biosecurity candidates applying because they don’t see us doing biosecurity work.
One of my planned ways around this was trying to get a biosecurity intern last summer, in the hope that public outputs in biosecurity by an intern would be a smooth way for us both to scale up our institutional biosecurity knowledge and to demonstrate our interest in this arena. The idea is that interns with the relevant backgrounds (e.g. math bio, or epidemiology) can provide the technical expertise while RP complements their skillsets with the relevant EA contacts, discretion, and analytical ability.
I did try nontrivially hard to make this happen smoothly. I asked some promising biosecurity people to apply. I got verbal agreement from some FHI bio people to co-advise our biosecurity-interested interns if we had any. And some of the questions in our (blinded) intern assessment process should have been differentially easier for people with bio backgrounds.
But ultimately our strongest intern candidates last round neither had the relevant academic backgrounds nor were particularly interested in biosecurity.
Next steps
RP’s longtermism team is currently going through a hiring round. It seems plausible we might just get a strong biosecurity hire this round, in which case they’d lead our future biosecurity efforts in 2022 and this discussion would be moot.
It also seems plausible to me, if unlikely (~20% in the next 6 months?), that we’ll end up prioritizing biosecurity even without a strong biosecurity hire, whether due to internal cause prioritization or external stakeholder requests.
At any rate, if you or others reading this want to support future RP biosecurity efforts, the best way to do so is to encourage strong biosecurity people you know to apply in future rounds! Funder interest is also helpful, but substantially less so.
*We also have internal disagreements about whether it makes sense for us to be more proactive about doing biosecurity work, given that a) we’re already spread pretty thin across many projects, b) focus is often good, and c) we internally disagree about how important marginal biosecurity work by people without technical expertise is anyway. I’m just presenting my own view.
That all sounds basically right to me, except that my impression is that the cruxes in our internal (mild) disagreements are just “a) we’re already spread pretty thin across many projects” and “b) focus is often good”, and not “c) we internally disagree about how important marginal biosecurity work by people without technical expertise is anyway”.
Or at least, I personally see (a) and (b) as some of the strongest arguments against us doing biosecurity stuff, while I’m roughly agnostic on (c). But I’d guess that there are some high-value things RP could do even if we lack technical backgrounds, and if some more senior biosecurity person said they really wanted us to do some project, I’d probably guess that they’re right that we could be very useful on it.
(And to be clear, my bottom line would still be pretty similar to Linch’s, in that if we get a person who seems a strong fit for biosecurity work, they seem especially interested in that, and some senior people in that area seem excited about us doing something in that area, I’d be very open to us doing that.)
What is your comparative advantage?
As much as I like to imagine it’s my own work (in longtermism), I think the clearest institutional comparative advantage of RP relative to the rest of the EA movement is the quality of our animal-welfare-focused research. To the best of my knowledge, if you want to focus on doing research that directly improves the welfare of many animals, and you don’t have a long-chain theory/plan of impact (e.g. by shifting norms in academia or having an influential governmental position), RP’s the best place to do this. This is just my impression, but my guess is that this is broadly shared among animal-focused EAs.
The main exception I could think of is Open Phil, but they’re not hiring.
I also get the impression that our survey team is very good, probably the best in EA, but I have less of an inside view here than for the animal welfare research.
Our longtermism and global health work are comparatively more junior and less proven, in addition to having fairly stiff competition.
Research, especially EA-aligned research done based on an explicit theory of change.
I’d also note things about scaling (as mentioned elsewhere in the AMA).
Asked differently, why are you so cool, both at the RP level and personally?
That’s very kind of you to say Nuno.
Surprising, I know
What have you been intentional about prioritising in the workplace culture at Rethink Priorities? If you focus on making it a great place for people to work, how do you do that?
This is a great question! Thank you so much!
At Rethink Priorities we take an employee-focused approach. We do our best to ensure that our staff have relevant tools and resources to do their best work, while also having enough flexibility to maintain their work-life balance. Staff happiness is a high priority for us and one of our strategic goals.
Some aspects of our employee-centered approach include:
Competitive benefits and perks – we offer unlimited time off, a flexible work schedule, professional development opportunities, stipends, etc., which are available to full- and part-time staff, as well as our fellows/interns.
Opportunities to socialize, make decisions, and take on new projects – for example, we have monthly social meetings, we run random polls to solicit opinions/ideas from staff, and we create opportunities for employees to participate in various initiatives, like leading a workshop.
Biannual all-staff surveys – we collect feedback from our staff twice a year. The survey asks a series of questions about leadership, management, organizational culture, benefits and compensation, and psychological safety, among other topics. The results are thoroughly analyzed and guide our decisions about how to improve our culture going forward.
Positive environment – we foster an inclusive and welcoming environment in which we encourage individuals to pose their questions, provide feedback, share thoughts, and raise concerns; additionally, we practice transparency at RP with regards to all aspects of our operations (e.g., decision-making, salary).
Internal processes – we continuously revise and/or develop internal processes and practices to ensure equity across the entire organization (e.g. we have recently audited our hiring procedures to increase equity and reduce bias when selecting candidates).
Reflection – we reflect on how we do our work, how we interact with one another, what culture we aspire to develop, and implement necessary changes.
I really appreciate your structured response :-) Would you happen to have any documents about the actionables behind each of these? Like this handbook at Valve? :D
*I ask because I’d be curious to learn about the actionable tips that others can replicate from your experience :-)
We’re currently working on a values-and-culture-setting exercise, where we are intentionally figuring out what we like about our culture and what we specifically want to keep. I appreciate Dominika’s comment, but I want to add a bit more of what is coming out of this (though it isn’t finished yet).
Four things I think are important about our culture that I like and try to intentionally cultivate:
Work-life balance and sustainability in our work. Lots of our problems are important and very pressing, and it is easy to burn yourself out working hard on them. We have deliberately tried to design our culture for sustainability. Sure, you might get some more hours of work this year if you work harder, but it isn’t worth burning out just a few years later. We want our researchers here for the long haul. We’re invested in their long-term productivity.
Rigor and calibration. It’s very easy to do research poorly, and unfortunately easy to do bad research that misleads people, because it is hard to see how the research is bad. Thus our researchers must do a lot of work to ensure that our output is accurate and useful.
Ownership. In a lot of organizations, managers want their employees to do exactly what they are told and follow processes to the letter. At Rethink Priorities, we think the ideal employee instead seeks to understand the motivation behind the assignment and how it fits into our goals and notices if there is a better way to achieve the same goals or even if the project shouldn’t be done.
Working on the right things. There are a lot of problems that we need to solve, so we must prioritize them. Selecting the right research question can often be more impactful than answering it.
We’ll have something more finished at a later date!
Your work-life balance and ownership points remind me of the culture at Valve!
Here are some notes I took on their culture if you’d be interested in ideas to implement. The points highlighted in orange are the actionables to implement :-)
What kinds of research questions do you think are better answered in an organisation like RP vs. in academia, and vice versa?
One major factor that makes some research questions more suited to academia is requiring technical or logistical resources that would be hard to access or deploy in a generalist EA org like RP (some specialist expertise also sometimes falls into this category). Much WAW research is like this, in that I don’t think it makes sense for RP to be trying to run large-scale ecological field studies.
Another major factor is if you want to promote wider field-building, or you want the research to be persuasive as advocacy to certain audiences in the way that sometimes only academic research can be. This also applies to much WAW research.
Personally, I think in most other cases academia is typically not the best venue for EA research, although the considerations about field-building and the prestige/persuasiveness of academic research come up often enough that the question of whether a given project is worth publishing academically arises fairly commonly even within RP.
Thanks a lot for the response—can I just ask what WAW stands for? Google is only showing me writing about writing, which doesn’t seem likely to be it...
And how often does RP decide to go ahead with publishing in academia?
“WAW” = Wild Animal Welfare (previously often referred to as “WAS” for Wild Animal Suffering).
I’d say a small minority of our projects (<10%).
Are there any ways that the EA community can help RP that we might not be aware of? Or any that we do already that you would like more of?
Commenting on our public output, particularly if you have specialized technical expertise, can often be anywhere from mildly to really helpful. RP has a lot of knowledge, but so does the rest of the EA community and extended EA network, so if you can route our reports to the relevant connections, this can be really valuable in improving the quality of our reasoning and epistemics.
One thing the EA community can help us with is by encouraging suitable candidates to apply to our jobs. (New ones will be posted here and announced in our newsletter.) Some of our most recent hires have transitioned from fields which, at first sight, would seem unlikely to produce typical applicants. But we’re open to anyone proving to us they can do the job during the application process (we do blinded skills assessments). I think we’re really not credentialist (i.e. we don’t care much about formal degrees if people have gained the skills that we’re looking for). So whenever you read a job ad and think “Oh, this friend could actually do that job!”, do tell them to apply if they’re interested.
More importantly, I think EA community builders in all geographies and fields can greatly help us by training people to become good at the type of reasoning that’s important in EA jobs. I particularly think of reasoning transparency, expressing degrees of (un)certainty, and clarifying the epistemic status of what you write, as well as probabilistic thinking and Bayesian updating (see the sketch below for a toy example of the latter). Also useful: learning to build models and getting familiar with tools like Guesstimate and Causal. Forecasting also seems to be a valuable skill to train (e.g. on Metaculus). I think EAs anywhere in the world can set up groups where people train such skills together.
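To make the Bayesian-updating point concrete, here’s a minimal illustrative sketch in Python. The scenario and all the numbers are invented for illustration; this isn’t an RP tool or a specific curriculum recommendation, just the kind of small exercise a skill-building group could practice with:

```python
# Toy Bayesian update for a binary hypothesis. All numbers are invented.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Suppose I give a policy a 30% chance of passing, and a supportive news
# report is twice as likely to appear if it will pass (80%) than if it
# won't (40%). Seeing the report should move me to roughly 46%.
print(round(bayes_update(0.30, 0.80, 0.40), 2))  # 0.46
```

Writing out small explicit updates like this, and comparing them to one’s gut updates, is a cheap way to practice the habit.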
I like this answer.
Some additional possible ideas:
Letting us know about or connecting us to stakeholders who could use our work to make better decisions
E.g., philanthropists, policy makers, policy advisers, or think tanks who could make better funding, policy, or research decisions if guided by our published work, by conversations with our researchers, or by future work we might do (partly in light of learning that it could have this additional path to impact)
Letting us know if you have areas of expertise that are relevant to our work and you’d be willing to review draft reports and/or have conversations with us
Letting us know about or connecting us to actors who could likewise provide us with feedback, advice, etc.
Letting us know if there are projects you think it might be very valuable for us to do
We (at least the longtermism department) are already drowning in good project ideas and lacking capacity to do them all, but I think it costs little to hear an additional idea, and it’s plausible some would be better than our existing ideas or could be nicely merged with one of our existing ideas.
Testing & building fit for research management
See also Collection of collections of resources relevant to (research) management, mentorship, training, etc.
Testing & building fit for ops roles
Donating
(In all cases, I mean either doing this thing yourself or encouraging other people to do so.)
To any staff brave enough to answer :D
You’re fired tomorrow and replaced by someone more effective than you. What do they do that you’re not doing?
I recently spent ~2 hours reflecting on RP’s longtermism department’s wins, mistakes, and lessons learned from our first year[1] and possible visions for 2022. I’ll lightly adapt the “lessons learned for Michael specifically” part of that into a comment here, since it seems relevant to what you’re trying to get at; I guess a more effective person in my role would match my current strengths but also already be nailing all the following things. (I guess hopefully within a year I’ll ~match that description myself.)
(Bear in mind that this wasn’t originally written for public consumption, skips over my “wins”, etc.)
“Focus more
Concrete implications:
Probably leave FHI (or effectively scale down to 0-0.1 FTE) and turn down EA Infrastructure Fund guest manager extension (if offered it)
Say no to side things more often
Start fewer posts, or abandon more posts faster so I can get other ones done
Do 80/20 versions of stuff more often
Work on getting more efficient at e.g. reviewing docs
Reasons:
To more consistently finish things and to higher standards (rather than having a higher number of unfinished or lower quality things)
And to mitigate possible stress on my part, [personal thing], and to leave more room for things like exercise
And to be more robust against personal life stuff or whatever
(I mean something like: My current slow-ish progress on my main tasks is even with working parts of each weekend, so if e.g. I had to suddenly fly back to Australia because a family member was seriously ill, I’d end up dropping various balls I’ve somewhat committed to not dropping.)
Maybe trust my initial excitement less regarding what projects/posts to pour time into and what ideas to promote, and relatedly put more effort into learning from the views and thinking of more senior people with good judgement and domain expertise
E.g., focus decently hard on making the AI gov stuff go well, since that involves doing stuff Luke thinks is useful and learning from Luke
E.g., it was good that I didn’t bother to finish and post my research question database proposal
Maybe pay more attention to scale and relatedly to whether an important decision-maker is likely to actually act on this
Some people really do have a good chance of acting in very big ways on some stuff I could do
But by default I might not factor that into my decisions enough, instead just being helpful to whoever is in front of me or pursuing whatever ideas seem good to me and maybe would get karma
Implement standard productivity advice more, or at least try it out
I’ll break this down more in the habits part of my template for meetings with Peter
[I’m also now trying productivity coaching]
Spend less time planning projects in detail, and be more aware things will change in unexpected ways
Be more realistic when making plans, predictions, and timelines
(No, really)
Including assuming management will take more time than expected, at least given how I currently do it
Spend more time on, and get better at, forming and expressing hot takes
Spend less time/words comprehensively listing ideas/considerations/whatever
More often organise posts/docs conceptually or at least by importance rather than alphabetically or not at all
Be more strict with myself regarding exercise and bedtime
Indeed optimise a fair bit for research management careers rather than pure research careers
This was already my guess when I joined, but I’ve become more confident about it”
[1] I mean the first year of the current version of RP’s longtermism department; Luisa Rodriguez previously did (very cool!) longtermism work at RP, but then there was a gap between her leaving (as a staff member; she’s now on the board) and the current staff joining.
Thank you for being vulnerable enough to share this!
It sounds like you’re focusing a lot on working on the right things (and by extension, fewer things)? And then becoming more efficient at the underlying skills (ex: explaining, writing, etc.) involved?
Yeah, though I’m also aiming to work on fewer things as “a goal in itself”, not just as a byproduct of slicing off the things that are less important or less my comparative advantage. This is because more focus seems useful in order to become really excellent at a set of things, ensure I more regularly actually finish things, and reduce the inefficiencies caused by frequent task/context-switching.
Some ways someone can be more effective than me:
I’m not as aggressive at problem/question/cause prioritization as I could be. I can see improvements of 50-500% for someone who’s (humanly) better at this than me.
I’m not great at day-to-day time management either. I can see ~100% improvement in that regard if somebody is very good at this.
I find it psychologically very hard to do real work for >30h/week, so somebody with my exact skillset but who could productively work for >40h/week without diminishing returns would be >33% more valuable.
I pride myself on the speed and quantity of my writing, but I’m slower than e.g. MichaelA, and I think it’s very plausible that a lot of my outputs are still bottlenecked by writing speed. A 10-50% effectiveness improvement seems about right.
I don’t have perfect mental health and I’m sometimes emotional. (I do think I’m above average at both). I can see improvements of 5-25% for people who don’t have these issues.
I’m good at math* but not stellar at it. I can imagine someone who’s e.g. a Putnam Fellow being 3-25% more effective than me if they chose to work on the same problems I work on (though plausibly they’d be more effective because they’d gravitate towards much mathier problems; on the other hand, of course, not all or even most mathy problems are very important).
Relatedly, obviously I’m not the smartest person in the world. I don’t have a good sense of how much e.g. being half a standard deviation smarter than me would make someone a better researcher; anything from “not a lot” to “very high” seems plausible to me. ??? for quantitatively how much effectiveness this adds.
*Concretely, I did a math major in a non-elite liberal arts college, which wasn’t too hard for me. I perceived both my interns last summer as probably noticeably better at math than me (one was a math major at Columbia and the other at MIT). Certainly they know way more math.
Thank you for the specific estimates and the wide variety of factors you considered :-) It may be that @MichaelA is also working primarily on improving cause prioritisation. I guess maybe you’ve both discussed that :D
The person who replaces me has all my same skills but in addition has many connections to policymakers, more management experience, and stronger quantitative abilities than I do.
I’ve adjusted imperfectly to working from home, so anyone who has that strength in addition to my strengths would be better. I wish I knew more forecasting and modeling, too.
(less helpful answer, will think of a better one later)
Hmm, Rethink follows pretty reasonable management practices, and is maybe on the conservative side for things like firing unproductive employees.
So I can’t really imagine being fired for ineffectiveness without warning on a Saturday. The only way this really happens is if I’m credibly accused of committing a pretty large crime, sexually harassing an RP colleague, or maybe faking data or something like that.
To the best of my knowledge I have not done these things.
Hmm, since I haven’t done these things, I must be set up to be falsely accused of a crime in a credible way. So the most likely way someone can replace me and be more effective on this dimension is by not making any enemies who are motivated enough to want to set them up for murder or something.
Quick clarifying question:
Is the most important part of your question the “fired” part or the “more effective” part? Like, would you rather I a) answer by generating stories of how I might be fired and how somebody could avoid that, or b) answer what people can do to be more effective than me?
Part b) is more important. Part a) is just to make the question more real to the person answering.
Are there any skills and/or content expertise that you expect to particularly want from future hires? Put differently, is there anything that you think aspiring hires might want to start working on to be better suited to join/support RP over the next few years?
I’ll let my colleagues answer the object-level question/might answer it myself later if I get better ideas later, but broadly I would somewhat caution against having a multi-year plan to be employed at Rethink Priorities specifically (or at any specific organization). RP hiring is pretty competitive now and has gotten more competitive over time[1], and also our hiring processes are far from perfect so even very good researchers (by our lights) may well be missed by our hiring process.
That said, some of the answers to James Ozden’s question might be relevant here as well.
[1] We’re also scaling pretty quickly to hire more people, but EA community building/recruitment at top universities has also really scaled up since 2020, and it’s unclear how these things will shake out in terms of how competitive our applications will be in a few years.
I agree, but would want to clarify that many people should still apply and very many people should at least consider applying. It’s just that people shouldn’t optimise very strongly for getting hired by one specific institution that’s smaller than, say, “the US government” (which, for now, we are 😭).
Thanks for the clarification! Definitely encourage people to apply.
We’ve also moved paid work trials to earlier and earlier on in the process, so hopefully applying is not a financial hardship for people.
What percentage of your work/funding comes from non-EA aligned sources?
I once told people in a programmer group chat what I was doing when I got my new job at RP. One of them looked into the website and gave like a $10 donation.
To the best of my limited knowledge, this might well be our largest non-EA aligned donation in longtermism.
It’s a little hard to say because we don’t necessarily know the background / interests of all donors, but my current guess is around 2%-5% in 2021 so far. It’s varied by year (we’ve received big grants from non-EA sources in the past). So far, it is almost always to support animal welfare research (or unrestricted, but from a group motivated to support us due to our animal welfare research).
One tricky part of separating this out—there are a lot of people in the animal welfare community who are interested in impact (in an EA sense), but maybe not interested in non-animal EA things.
Minor nit:
should be
As discussed in this comment thread (by you :P), an increasingly high percentage of our work is targeted towards specific decision-makers, and whether we choose to publish depends on a combination of researcher interest, decision-maker priorities, and the object-level nature of the research.
I’m particularly glad you note this since the survey team’s research in particular is almost exclusively non-public research (basically the EA Survey and EA Groups Survey are the only projects we publish on the Forum), so people understandably get a very skewed impression of what we do.
If you can share, what are some other projects or research that the survey team works on? If you can’t give specifics, it would be useful to know broadly what they were related to. I’m intrigued by the mystery!
Thanks for asking. We’ve run around 30 survey projects since we were founded. When I calculated this in June, we’d run a distinct survey project (each containing 1-7 surveys), on average, every 6 weeks.
Most of the projects aren’t exactly top secret, but I err on the side of not mentioning the details or who we’ve worked with unless I’m certain the orgs in question are OK with it. Some of the projects, though, have been mentioned publicly, but not published: for example, CEA mentioned in their Q1 update that we ran some surveys for them to estimate how many US college students have heard of EA.
An illustrative example of the kind of project a lot of these are would be an org approaching us saying they are considering doing some outreach (this could be for any cause area) and wanting us to run a study (or studies) to assess what kind of message would be most appropriate. Another common type of project is polling support for different policies of interest and testing the robustness of these results with different approaches. These two kinds of projects are the most common, but generally take up proportionately less time.
There are definitely a lot of other things that we can do and have done. For example the ‘survey’ team has also used focus groups before and would be interested in doing so again (which we think would be useful for a lot of EA purposes), and much of David Reinstein’s work is better described as behavioural experiments (usually field experiments), rather than surveys.
Another aspect of our work that has increased a lot recently, to a degree that was slightly surprising, is what Peter refers to here as “ad hoc analysis requests” and consulting (e.g. on analysis and survey design), without us actually running a full project ourselves. I’d say we’ve provided services like this to 8-9 different orgs/researchers (sometimes taking no more than a couple of hours, sometimes taking multiple days) in the last few weeks alone. As Peter mentions in that post, these can be challenging from a fundraising perspective, although I strongly encourage people not to let that stop them from reaching out to us.
The projects we did used to lean more towards FAW (farmed animal welfare), but over time the composition has changed a bit and, perhaps unsurprisingly, now contains more longtermist projects. Because the things we work on are pretty responsive to requests coming from other orgs, the cause composition can change unexpectedly in a short space of time. Right now the projects we’re working on are roughly evenly split between animals, movement building, and meta, but it wouldn’t be that surprising if it became majority longtermism over the next 6 months.
Thanks! We’ll make sure to get this changed going forward.
In your past experiences, what are the biggest barriers to getting your research in front of governmental organisations? (ex: official development aid grantmakers or policy-makers)
Biggest barriers in getting them to act on it?
I would break this down into a) the methods for getting research in front of government orgs and b) the types of research that gets put in front of them.
In general I think we (me for sure) haven’t been optimising for this enough to even know the barriers (unknown unknowns). I think historically we’ve been mostly focused on foundations and direct work groups, and less on government and academia. This is changing so I expect us to learn a lot more going forward.
As for known unknowns in the methods, I still don’t know who to actually send my research to in various government agencies, what contact method they respond best to (email, personal contact, public consultations, cold calling, constituency office hours?), or what format they respond best to (a 1-page PDF with graphs, a video, bullet points, an in-person meeting? Though this public guide Emily Grundy made on UK submissions while at RP has helped me). Anecdotally it seems remarkably easy to get in front of some: I know of one small animal advocacy organization that managed to get a meeting with the Prime Minister of their country, and I myself have had 1-1 meetings with more than two dozen members of the UK and Irish parliaments and United Nations & European Union bureaucrats (non-RP work) with relative ease (e.g. an email with a prestigious-sounding letterhead).
My assumption is that government orgs are swamped with requests and petitions from NGOs, industry, peers, and constituents. So we need some way to stand out from the crowd, like representing a core constituency of theirs, being recommended to them by someone they deem credible such as an already established NGO, being affiliated with an already credible institution like a prestigious university, or proving to them we can provide policy expertise and legislative intelligence better than most others can.
On b), I think I have a better sense of what content would be more likely to get in front of them. Niel Bowerman had some good insights on this in 2014, and the “legislative subsidy” approach Matthew Yglesias favours in the US context seems useful. There was an interesting study from Nakajima (2021) (twitter thread) which looked at what kinds of research evidence policymakers prefer (bigger samples; external validity that extends to the populations in their jurisdictions; no preference between observational and experimental), so I think we can explore whether the topics on our research agenda fit within those designs.
Update: wanted to add in this post from Zach Groff:
If anyone reading this works at a governmental organization, we’d love to chat!
@Neil_Dullaghan we should chat.
Thank you for the well-researched response :-) Excited to maybe ask again in a year and see any changes in your practical lessons!
In your yearly review you mention that Rethink may significantly expand its Longtermism research group in the future, including potentially into new focus areas and topics. Do you have any ideas of what these might be (beyond the mentioned AI governance), and how you might choose (i.e. looking for a niche where Rethink can play a major role, following demand of stakeholders, etc.)?
If in 5 and/or 10 years’ time you look back on RP and feel it’s been a major success, what would that look like? What kind(s) of impact would you consider important, and by what bar would you measure your attainment/progress towards that?
The first part I answered here.
I think a major success for us would look like having achieved a large and sustainably productive research organization tackling research in a variety of disciplines and cause areas. I think we will have made a major contribution to unlocking funding in effective altruism by figuring out what to fund with more confidence, as well as increasing our influence across a larger variety of stakeholders, including important stakeholders outside of the effective altruism movement.
How have you or would you like to experiment with your organisational structure or internal decision making to improve your outputs?
One recent experiment has been trying to get better at project management, especially at a larger scale. We’ve rolled out Asana for the entire organization and have hired a project manager.
Another recent experiment has been whether we can directly hire for “Senior Research Managers” (SRMs), instead of having to develop all our senior research talent in-house. We’ve hired two external SRMs and it has been going well so far, but it is too early to tell. We may try to hire another external SRM in our current hiring process.
If both these two experiments go well, it will unlock a lot of future scalability for our organization and for other organizations that can follow suit.
Our next experiment will likely involve hiring research and/or executive assistants to see if they can help our existing researchers achieve more productivity in a more sustainable way.
Any advice for researchers who want to conduct research similar to Rethink Priorities? or useful resources that you point your researchers towards when they join?
It has been said before elsewhere by Peter, but worth stating again: read and practice Reasoning Transparency. Michael Aird compiled some great resources recently here.
I’d also refer people to Michael and Saulius’ replies to arushigupta’s similar subquestion in last year’s RP AMA.
One thing I’d add is that I think several people at RP and elsewhere would be very excited if someone could:
Find existing resources that work as good training for improving one’s reasoning transparency, and/or
Create such a resource
As far as I’m aware, the current state of the art is “Suggest people read the post Reasoning Transparency, maybe point them to a couple of somewhat related other things (e.g., the compilation I made that Neil links to, or this other compilation I made), hope they absorb it, give them a bunch of feedback when they don’t really (since it’s hard!), hope they absorb that, repeat.” I.e., the state of the art is kinda crappy. (I think Luke’s post is excellent, but just reading it is not generally sufficient for going from not doing the skill well to doing the skill well.)
I don’t know exactly what sort of resources would be best, but I imagine we could do better than what we have now.
Oh, and some other resources I’d often point people towards after they join are:
Giving and receiving feedback (including the top comments)
Countering imposter syndrome and anxiety about work
My collections on how to do high-impact research and get useful input from busy people
For longtermist work, I often point people to Holden Karnofsky’s impressions on career choice, particularly the section on building aptitudes for conceptual and empirical research on core longtermist topics.
I’ve also personally gained a lot from arguing with People Wrong on the Internet, though poor application of this principle may be generally bad for epistemic rigor. In particular, I think it probably helps to have a research blog and to be able to spot potential holes in arguments (on EA social media, the EA Forum, research blogs, papers, etc.). That said, I think most EA researchers (including my colleagues) are much less Online than I am, so you definitely don’t need to develop an internet-argument habit to be a good researcher.
Making lots of falsifiable forecasts about short-term implications of your beliefs may be helpful (see the sketch after this list for one simple way to score such forecasts). Calibration training is probably less helpful, but lower cost.
Trying to identify important and tractable (sub)questions is often even more important than the ability to answer them well. In particular, very early on in a research project, try to track “what if I answered this question perfectly? Does it even matter? Will this meaningfully impact anyone’s decisions, including my own? Will this research build towards something else that will meaningfully impact decisions later?”
“Politely disagreeable” seems like a pretty important disposition. You benefit epistemically from being nice and open enough to other people’s ideas that you a) deliberately seek out contrarian opinions and b) don’t reject them outright, but also you need to be disagreeable enough that you in general shouldn’t update on beliefs just because other (smart, respected, experienced, etc) people confidently believe it.
Being very aggressively truth-seeking is a really important disposition. My belief is that most people are by default bad at this, including people who may otherwise make great EA researchers.
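As a concrete footnote to the forecasting point above, here’s a minimal sketch (my own illustrative example, with made-up forecasts and outcomes) of scoring falsifiable forecasts with the Brier score, one common calibration metric:

```python
# Brier score for probabilistic forecasts of binary outcomes.
# Lower is better; always guessing 50% scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecasts (probabilities) and outcomes (0/1)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Say I forecast three events at 90%, 60%, and 20%,
# and the first two happened while the third didn't.
print(round(brier_score([0.9, 0.6, 0.2], [1, 1, 0]), 3))  # 0.07
```

Tracking a score like this over many resolved forecasts is one simple way to check whether your stated credences actually track reality.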
I also endorse Neil’s comment.
Let’s say your research directly determined the allocation of $X of funding in 2021.
Let’s say you have to grow that amount by 10 times in 2022, but keep the same number of staff, funding, and other resources.
What would you change first in your current campaigns, internal operations, etc.?
I don’t think it is actually possible to 10x our impact with the same staff, funding, and other resources—hence our desire to hire and fundraise more. If it was possible, we’d certainly try to do that!
The best answer I can think of is Goodharting—we certainly could influence more total dollars if we cared less about the quality of our influence and the quality of those dollars. We also could exaggerate our claims about what “influence” means, taking credit for decisions that likely would’ve been made the same anyway.
What are the bottlenecks to using forecasting better in your research?
Lazy semi-tangential reply: I recently gave a presentation that was partly about how I’ve used forecasting in my nuclear risk research and how I think forecasting could be better used in research. Here are the slides and here’s the video. Slides 12-15 / minutes 20-30 are most relevant.
I also plan to, in ~1 or 2 months, write and publish a post with meta-level takeaways from the sprawling series of projects I ended up doing in collaboration with Metaculus, which will have further thoughts relevant to your question.
(Also keen to see answers from other people at RP.)
We at Rethink Priorities have definitely made an increasingly large effort to include forecasting in our work. In particular, we have recently been running a large Nuclear Risks Tournament on Metaculus. My guess is that the reasons we don’t do even more forecasting are that not all of our researchers are experienced forecasters, and that it hasn’t been a sufficient priority to generate useful, decision-relevant forecasting questions for every research piece.
Will you have some kind of internship/fellowship opportunities next summer?
We have not yet decided whether we will have internships/fellowships this summer (assuming you are referring to the Northern Hemisphere here). If we launch these internships, I imagine they will open in March 2022. We are continuing to consider launching internships/fellowships for summer in each hemisphere (as we launched an AI Governance and Strategy Fellowship for Jan-March 2022, summer in the Southern Hemisphere).
Another thing we are considering, in addition to or in place of internships this year, is Research/Executive Assistant positions that focus more on supporting and learning the work of a particular researcher on the RP team. These roles would likely be permanent/indefinite in length rather than a few months like our internships have been.
I am also interested in future internship plans. Specifically, how flexible are the dates and time commitments?
As someone based in Australia, seasonal descriptors (presumably from the Northern hemisphere) aren’t ideal though I can convert them—specific months would be preferable :) Also our university holiday periods are different, so I will need to work around that too.
What are some key research directions/topics that are not currently being looked into enough by the EA movement (either at all or in sufficient depth)?
Longtermism in its nascent form relies on a lot of guesstimates and abstractions that I think could be made more empirical and solid. Personally, I am very interested in asking whether people at a given time in the past had the information they needed to avoid disasters that occurred later. What kinds of catastrophes have humans been able to foresee, and when we were able to but didn’t, what obstacles were in the way? History is the only evidence available in a lot of longtermist domains, and I don’t see EA exploiting it enough.
As is probably the case with many researchers, I have a bunch of thoughts on this, most of which aren’t written up in nice, clear, detailed ways. But I do have a draft post with nuclear risk research project ideas, and a doc of rough notes on AI governance survey ideas, so if someone is interested in executing projects like that please message me and I can probably send you links.
(I’m not saying those are the two areas I think are most impactful to do research on on the current margin; I just happen to have docs on those things. I also have other ideas less easily shareable right now.)
People might also find my central directory for open research questions useful, but that’s not filtered for my own beliefs about how important-on-the-margin these questions are.
Interesting that you’ve got climate change in your global health and development work rather than with longtermism. What are the research plans for the climate change work at RP?
A note on why climate change is currently in our global health and development work rather than longtermism: the main reason is that, while we could consider longtermist work on climate change, we do not think marginal longtermist climate change work makes sense for us relative to the importance and tractability of other longtermist work we could do. However, global health and development funders and actors are also interested in climate change in a way that does not funge much against longtermist money or talent, and the burden of climate change falls heavily on lower- and middle-income countries. Therefore, we think climate change work makes sense to explore relative to other global health and development opportunities.
Hi James, thanks for your question. The climate change work currently on our research calendar includes:
A look at how climate damages are accounted for in various integrated assessment models
A cost effectiveness analysis of anti-deforestation interventions
A review of the landscape of climate change philanthropy
An analysis of how scalable different carbon offsetting programs are
I’m interested in your current and future work on longtermism.
One of your plans for 2022 is to:
Build a larger longtermist research team to explore longtermist work and interventions more broadly
Have you decided on the possible additional research directions you are hoping to explore? When you’re figuring this out, are you more interested in spotting gaps, or do you feel the field is young enough that investigating areas others are working on/have touched is still likely to be beneficial? Perhaps both!
One thing we know for certain is that we are definitely doing AI Governance and Strategy work. We have not decided on the other avenues yet; I think we will decide them in large part based on who we hire for our roles, consulting with the people we hire once they join and coming to agreements as a team. I definitely think that there is a lot to contribute in every field, but we will weigh neglectedness and our comparative advantage in figuring out what to work on.
I expect we’ll also talk a lot to various people outside of RP who have important decisions to make and could potentially be influenced by us and/or who just have strong expertise and judgement in one or more relevant domains (e.g., major EA funders, EA-aligned policy advisors, strong senior researchers) to get their thoughts on what it’d be most useful to do and the pros and cons of various avenues we might pursue.
(We sort-of passively do this in an ongoing way, and I’ve been doing a bit more recently regarding nuclear risk and AI governance & strategy, but I think we’d probably ramp it up when choosing directions for next year. I’m saying “I think” because the longtermism department haven’t yet done our major end-of-year reflection and next-year planning.)
What should one do now if one wants to be hired by Rethink Priorities in the next couple years? Especially in entry-level or more junior roles.
I realize this is a general question; you can answer in general terms, or specify per role.
James Ozden’s question above might be sufficiently similar to yours that the answers there address your question?
From a talk at EAG in 2019, I remembered that your approach could be summarized as empirical research in neglected areas (please correct me if I’m wrong here). Is this still the case? Do you still have a focus on empirical research (over, say, philosophy)?
Yes, it is still our approach, broadly speaking, to focus on empirical research, though certainly not to the exclusion of philosophy research. And we’ve now done a lot of research that combines both, such as our published work on invertebrate sentience and our forthcoming work on the relative moral weight of different animals.
Answered here and here and here.
About funding overhang:
Peter wrote a comment on a recent post:
You also wrote in your plans for 2022:
In which cause areas do you expect to identify the most funding opportunities? Will the funding gaps be big enough to resolve a significant part of the funding overhang?
We’d expect to find new funding opportunities in each cause area we work in. Our work is aspirational and inherently about exploring the unknown, though, so it’s very difficult to know in advance how large the funding gaps we uncover will be. But hopefully our work will contribute to a body of work that overall shifts EA from having a funding overhang to having substantial room for more funding in all cause areas. This will be a multi-year journey.
Sorry if the answer for this is readily available elsewhere, but are there recommended times of the year to donate if you are based in the UK, e.g. to make use of matching opportunities? My understanding is that the Giving Tuesday Facebook matching is only for US donors.
Thanks!
Thanks for considering supporting us!
Basically anyone can donate to the Giving Tuesday fundraiser and participate, but it’s only tax-deductible for US donors.
From the EA Giving Tuesday FAQ:
>Donors from a large number of countries are eligible to donate through Facebook and get matched. However, in both 2019 and 2020 most non-U.S. donors faced significantly lower donation limits. We expect the same to be true in 2021. [This year, the donation limit for US donors is USD 20,000.] Additionally, please be aware that donors outside the United States will likely lose out on any tax benefits they’d receive from donating to a nonprofit registered in their own country.
International donors can give to RP through the EA Funds. As a UK donor, your gift is eligible for Gift Aid and would typically be tax-deductible. We explain all of this on our donation page: https://rethinkpriorities.org/donate
Regarding other matching opportunities: check out https://www.every.org/rethink-priorities. They still seem to have some funds available from their FallGivingChallenge for a 100% match!
We don’t regularly run matching campaigns ourselves, but we may well set one up in the course of the next year.
The best way to stay informed about upcoming opportunities is our newsletter.
Your gift is welcome at any time of the year!