Hi Peter. I’ll be joining GPP in January. Niel and Rob have both said exactly what I’d say on the point of GPP. I’d perhaps add that GPP has been experimenting with a number of avenues towards impact using the outcomes of its research. We’ll be deciding exactly what approach seems most promising early in the new year, and that will be really important for shaping the organisation. My current hypothesis is that our biggest comparative advantage as an EA org is in tools for policy rather than for EAs, though obviously many things useful for one can be made useful for the other. From your comment it sounds like you had some specific ideas for things you thought GPP could be bringing to EAs. PM me and I’d love to chat about it.
This is a neat approach, Rob, and some form of it seems likely to be one of the best ways of thinking about this. I think the emphasis on putting yourself in the shoes of those you’re trying to help rather than acting for yourself is particularly valuable. I think there is one extra difficulty that you haven’t mentioned, though, which is to do with people having other preferences than yours.
Even if I’m able to work out that, given a random chance of being one of the participants I would prefer 2 to 3, it doesn’t necessarily follow that 2 is preferable to 3 in an objective sense. It is interesting to imagine what the participants themselves would choose behind your veil (if they were fully informed about the tradeoffs etc.).
In many cases, one finds that people tend to think that their own condition is less bad than people who don’t have the condition do. (That is, if you ask sighted people how bad it would be to be blind, they say it would be much worse than blind people do when asked.) This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important, while those at risk of worms but not malaria would prioritise treating malaria. It seems hard to know whom to prioritise then.
There’s also the eternal problem with imagining what one would choose—people often choose poorly. I assume you’re making some sort of assumption that one is choosing under the best possible conditions. It may be, though, that your values depend on your decision-making conditions.
Of course, you still have to choose, and as you say it’s clear that 2 and 3 are both preferable to 1. I think this tool will get you answers most of the time, and can focus your mind on important questions, but there’s an intrinsic uncertainty (or maybe indeterminateness) about the ordering.
“Ideally try to avoid telling people that they are obliged to do any particular action. Especially try not to tell people that what they are currently doing is bad.”
A few of my non-EA friends have had similar experiences talking with EAs which backs this up. The most common is some variant of
“Have you considered doing something more effective than what you’re doing now?”
There may well be good ways and times to ask this question—but it’s probably one for a close friend with a great deal of trust, not someone you just met.
This is a very interesting idea!
I wonder about how the mechanism leads to behaviour-change though. In particular, each prize is relatively small compared to the cost of launching an activity—the expected value of the reward is significantly below start-up costs unless one was planning to basically do that anyhow. This means you might end up just funding the activities that were most effective in the last cycle but which would have happened anyway (which may be fine, they might be most likely to do well in the future with that money).
An opposite end of the spectrum would be to offer larger chunks that were big enough to motivate people to invest their own resources chasing it. I.e., an EA X-Prize. I think there will be some really cool lessons from varying the size of the award.
Yes, although engaging with existing policymakers too soon is a good way to lose credibility. There is definitely more room to talk to friendly policy experts though!
I’m not sure that doing lobbying ‘just for practice’ is a good idea. It would be fairly easy to accidentally lobby for something bad, and equally the reputational consequences of lobbying can be complicated if you don’t know an area.
What do you mean by science/tech lobbying? Lobbying for what?
I think it’s not so much that it’s crowded as that it’s often unclear what the actual thing you’d lobby for is: is it more research funding? Better research funding? Maybe. What exactly would better patent law be? Better education? These are all things where it is easy to come to views, and even to be quite confident about them, but where the realities are often much more complicated than they seem.
I don’t mean that in a nihilistic way—I’m currently working on building a much more informed view of safe biological research funding in order to lobby for a specific policy—it’s just that there’s quite a lot of work to be done to be sure something is good before you advocate for it.
Thanks, fixed.
This is a really important topic that we aren’t discussing enough in the EA community. At the moment, Owen is working on a paper on modelling the marginal value of different research topics. It seems very likely that we will build on that paper by estimating the marginal value of a range of promising technology areas to compare against each other (a DCP for technology, as it were). This work wouldn’t address sequencing issues, and those are really important and something we should address as a society. Owen has some preliminary ideas in this direction and GPP may investigate this further. This work is, however, part of a very full pipeline of other work.
This highlights another important point—we aren’t the first to face these issues. People have been dealing with, and making predictions about, radical future-changing technologies for centuries. GPP has already applied for funding to hire a researcher to investigate the historical track record of such predictions, and predictions of mitigation strategies, to make us smarter about estimating which sorts of x-risks and future challenges we are best placed to act to mitigate. We’ve also had interest from some donors to part-fund such activities. If anyone is interested in matching that contribution, we may be able to speed up that hire.
There are a lot of different audiences. Political decision-makers, the public at large, and academics are three.
Decision-makers in government are often (at least in the UK) very well intentioned and keen to use the right models and assumptions. But they are also very busy and have little time to do research and learn. We believe the best way to influence them is to engage with their work, understand what they are struggling with, and then produce really concise and usable frameworks for them. It’s really important to physically get paper copies into their offices. This is the approach we used while engaging with the National Risk Assessment, and one we will continue to use. For example, a contact in central government has suggested that, despite extensive academic work on the topic, decision-makers still do not really understand discount rates and could use a very clear ‘how-to’ note that can be passed around.
Influencing the public at large is going to take interaction with journalists and branding experts. It is a regrettable accident that the EA movement so far has been light on these skills—we hope that will change and are reaching out to journalists, marketing experts and PR workers (I spoke yesterday with a worker at a PR agency for academic public impact).
EAs may want to influence academics. Potential routes for this include doing impressive direct work (publishing, attending conferences etc.) to encourage others to build on the work. But an alternative strategy is to ‘pull sideways’ (by offering prizes, hosting conferences, persuading top researchers etc.).
We currently want to hire a researcher or policy specialist with skills complementary to those of Owen and me. We had some very strong expressions of interest when we invited them in December, and expect to be able to hire excellent staff. We are looking to build an impressive team—and are open to flexing our work-plan in order to get good people.
Experience drafting policy or working on policy evaluation framework would be very helpful, as would a background in welfare economics. Our next employee will need to be able to do independent research, but also to communicate that research to non-technical audiences. We are also interested in hiring a researcher with expertise relevant to the history of technology and technology forecasting as a separate project (as mentioned in other answers).
We are also looking for volunteers. We have a couple of projects with chunks of fairly repeatable work which can easily be split over multiple workers. Most of these require strong research skills, because they involve critiquing other work, but limited time commitment. We would also be interested in volunteers with journalism experience.
How many people work full-time and part-time on GPP? What are sustainable growth predictions?
Owen and I effectively work full-time on GPP (Owen has some teaching commitments as well). Toby Ord, Rob Wiblin, and Niel Bowerman all contribute irregularly to GPP projects, averaging a couple of hours a week each. We aim to hire 1-2 new staff this year depending on fundraising.
Do you model yourself as a think-tank?
Somewhat, although think tanks have a wide variety of models and the type is not that well-defined (some have barely any staff while others have hundreds; some mostly lobby while others mostly do research). We are similar to many think tanks in that our goal is to influence policy and academic work without being a formal part of either system. Some of the future models of GPP look less like a think-tank.
What think-tanks have you looked at, spoken to, or modelled yourself upon?
We’ve spoken to people at a few think tanks, about specific issues like fundraising rather than their general approach, but have not modelled ourselves on any particular one. I think this is a good point though, and we may have underinvested in this area. Would be great to have a conversation with you about this some time.
Have you reached out to e.g. RUSI, BASIC, etc? Do you plan to?
We have not and do not currently have plans to, although it might make sense in the future. Our current focus has been less on topics related to defense (our current work in existential risk, for example, is focused on civilian biosafety risks).
What are your plans for the next a) 6 months b) year c) 5 years?
For the next 6 months we plan to test out models for impact. At around that point we aim to use what we’ve learned to focus our work onto the model which appears most effective, while continuing to evaluate and explore options. We plan to review that decision periodically with the possibility of future ‘pivots’ (drawing on the best-practice start-up literature). Some of our work has natural timescales which are shorter than other parts, so we will be able to reach conclusions earlier.
Models we are considering have strong commonalities and build on our skills and current work, but might look different operationally. They include, for example, a focused policy think-tank, a policy evaluation think-tank, a policy evaluation consultancy, an academic organisation trying to seed ‘prioritisation’ as an academic discipline, or a cause comparison meta-charity organisation.
In what ways are you experimenting and iterating?
In our work-plan we divide activities around impact strategies. For example, one work-stream is to produce a really focused policy proposal worked through at a very detailed level and to get lobby groups in that field to push it forward. Another is to engage with an existing policy evaluation framework and suggest specific improvements. Once we do one, for example by producing a ‘topic primer’ on Unprecedented Technological Risks, we deprioritise similar activities to try to get more information about other routes to impact. By doing this, and evaluating the impact of each approach, we plan to focus down to a small number of effective and synergistic mechanisms for impact.
We are very aware that some of our approaches will have a high intrinsic variance, and are trying to correct for that in how we assess progress. Clearly, however, this will not be easy since we can never get a satisfactory sample size.
We are also ramping up the work we do to measure impact, both by getting better at tracking our inputs and by asking for more feedback on our outputs. Our recent push to increase engagement with our work is also partly in order to increase the quality of the feedback we get from producing it.
I couldn’t agree more with Seth’s emphasis on the importance of stakeholder engagement. I would add, and I’m sure he would agree, that one of the most important parts of it is to learn from stakeholders. Everyone’s background offers insights that are really hard to imagine from other perspectives. One doesn’t just want to understand which of one’s ideas they can be convinced to implement—they should be part of the process of developing the ideas. They should also be part of picking the questions.
Stakeholder engagement is something that GPP has set itself a particularly tough challenge on. Because we are trying to be a ‘broad’ cause comparison organisation, we do not slot naturally into an existing community of decision-makers. At the moment, this means that we have the capacity to build a small number of strong relationships in many different communities. This makes us good at the learning part of stakeholder engagement. It might end up making us too weak to push new policy on our own. That is why, for example, our current strategy for pushing specific policies is to sell focused policy proposals to organisations already working in that space and let them carry the idea forward. It remains to be seen how well this will work. It may be that the difficulty of stakeholder engagement with such a broad range of activities will force us to narrow our work, but this is also a factor which we think may make the area neglected.
How many people have read your most popular content?
One of the many reasons we moved to our new website is that our analytics setup, when we were using part of the FHI page, was not everything we could have wanted. This makes it hard to give a confident answer to your question. Our top post got around 1000 page views over the last year, but some of our high-quality material, such as the report on Unprecedented Technological Risks, was released as a PDF and we do not have tracking numbers for it.
However, it is worth noting that monthly traffic to our website is up 5x, comparing the month to date with the previous month, which makes the historic numbers less relevant. This is mostly because we now have a dedicated website, a mailing list, a Facebook page, and a Twitter account. As we continue to build up the base of subscribers, we expect this to grow.
What are your next few marginal hires?
If a reader wants to help GPP, what should they do?
At the moment GPP is funding constrained. We have an enormous pipeline of work—at one end we have literally hundreds of ideas we would love to pursue, but we also have several person-years of work on the table which is simply adapting our existing research to a particular audience to have impact. Anyone who is either able to donate or knows someone who might be able to would be enormously helpful. Based on the experience of other EA organisations, it is possible that we will become talent-constrained within the next year or two.
Beyond that, we continue to value introductions to individuals in governments or foundations. We already have more of these introductions than we can currently pursue, but this is something where the variety and quality of the lead is important. Knowing we could access a particular type of individual is useful, even when we do not pursue the lead immediately. We have a good system for tracking these opportunities to pursue later. We would also love to be able to help academics focus their research directions with an eye to impact. Introductions to academics who may be receptive and are in a position to choose their research direction would therefore be great.
Lastly, we really value challenge to our ideas. This AMA has already thrown up some questions that will change how we plan and think about our work. Anyone is welcome to send me critiques either as a PM or emailing seb[at]prioritisation-dot-org. I have had some extremely productive follow-on conversations with EAs who sent me feedback like that.
What would you do with a) £2,000 b) £10,000 c) £20,000?
At the moment, additional funding goes towards making sure we have a sustainable foundation for the organisation. Best practice is to have 12 months of reserves, which at this point means raising an additional £20-25k (this is a rough number and does not include some pledged donations not yet received). Once we have raised that level, we would like to hire an additional member of staff. We expect that, counting overhead costs like office space, HR, and finance, an additional staff member would cost us £35-40k per year. In order to offer credible job security to a new hire, we would like to have at least a full year of reserves set aside to fund that hire.
All this means that, in order to comfortably hire a new staff member in the next CEA recruitment cycle (topping up existing reserves, plus a year of costs and a year of reserves for the new hire), we are raising towards a target of £100,000.
A picture of the historical unit costs of some of our outputs (to be distinguished from outcomes) is available in our strategy document, although these are very rough estimates. You can also find more details of our funding needs there.
What do you think your room-for-more-funding is?
I think we could comfortably absorb £150,000 (which would build 12 months of reserves and allow us to hire two researchers, and possibly an intern). Funds beyond that could be put to creative use (for example, hiring researchers through the University is more expensive, but might let us get better talent) but might be better directed at other organisations.
You’re based in the UK—there’s about to be an election, then five years of a new government. How does that affect your plans?
At the moment, individuals in government are largely distracted by the upcoming elections, so we have deprioritised outreach to UK policy-makers. We plan to spend the time until the election (May 7th) preparing policy briefs and fundraising so that we can focus on policy outreach in the months following the election. Conventional wisdom is that this is the best time to pursue policy objectives.
We have probably not devoted enough resources to developing contacts in the Opposition. The election is too close to call, so this may not end up being a problem, but we are open to pursuing strong leads in this period despite the attention of politicians being elsewhere.
Who are the key decision-makers/stakeholders in your area? Have you mapped them out—how they relate, what their responsibilities are? What Government Departments are you mainly interested in? Which are you monitoring? Are there any consultations open at the moment that you are submitting to? Same question for Parliamentary Committees.
Because we are trying to appeal to such a broad range of communities and enable comparison between them, there are a very large number of stakeholders. Within the UK government, we have the most to say to similarly broad organisations (Cabinet Office and Treasury) as well as departments like DFID or DoH (similarly PHE) where we have specific interests that overlap. Similarly, within foundations, we see many existing metacharity organisations as stakeholders to engage with (including GiveWell, Copenhagen Consensus, DCP, WHO and others).
Consultations and parliamentary committees are an excellent point—this is something that I’ve been monitoring since I joined the team. In that period (just under two months) we have not seen any for which we felt we had sufficiently valuable things to contribute (which were also a priority for us). It is too early to say, though, whether that avenue will prove effective in the long run.
Thanks, Ryan, and thanks to everyone who asked a question. Owen and I will be coming back here every now and then this week to answer any more questions that come up.
If you have any further thoughts or questions you can also PM me or email me at seb[at]prioritisation-dot-org
Goodnight!
Generally announced the week in advance, with some extra coverage in the FB group. But feel free to drop me a PM if you have any other questions!
That’s right. It would seem extremely unlikely that one should have a multi-billion dollar industry with no-one thinking about what happens if it succeeds at its aim.
It’s very important for EAs to recognise that there probably isn’t a single best cause (and that even if there is, the uncertainties are too big to allow us to identify it). Even if there was an identifiable best cause, it is likely to change, so it’s bad for EAs to identify too strongly with any one cause.
There’s a broader risk in focusing on marginal cost-effectiveness—that it leads to local rather than global optimisation. It’s a good heuristic, but bad to rely on too much.
This is a mega-important point.
Especially re 2, whenever I use QALY as an example I immediately follow it up by talking about the difficulty of comparing QALYs to other things that are really good to increase, like improved education or better access to political institutions for marginalised people. This helps undermine both the ‘you only care about QALYs’ attack as well as the ‘you don’t care about systemic change’ attack. It makes it clear we do care about those things, even if we don’t have great ways to assess effectiveness there yet.
(For the benefit of others interested, I can share a little bit but not very much in person/on phone.)
I remember one of my favourites for the name of CEA as the Federation for Effective Altruism Research. Or the Society for the Progress of Empathetic Consequentialism Through Reasoned Evaluation. I think the first may have been yours, Will. ;)