Effective Altruism Foundation: Plans for 2020
Summary
Our mission. We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks).
Our plans for 2020
Research. We aim to investigate the questions listed in our research agenda, “Cooperation, Conflict, and Transformative Artificial Intelligence,” as well as other related areas.
Research community. We plan to host research workshops, make grants to support work relevant to our priorities, present our work to other research groups, and advise people who are interested in reducing s-risks in their careers and research priorities.
Rebranding. We plan to rebrand from “Effective Altruism Foundation” to a name that better fits our new strategy.
2019 review
Research. In 2019, we mainly worked on s-risks as a result of conflicts involving advanced AI systems.
Research workshops. We ran research workshops on s-risks from AI in Berlin, the San Francisco Bay Area, and near London. The participants gave positive feedback.
Location. We moved to London (Primrose Hill) to better attract and retain staff and to collaborate with other researchers in London and Oxford.
Fundraising target. We aim to raise $185,000 (stretch goal: $700,000). If you prioritize reducing s-risks, there is a strong case for supporting us. Make a donation.
About us
We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks). (Read more about us and our values.)
We are a London-based nonprofit. Previously, we were located in Switzerland (Basel) and Germany (Berlin). Before shifting our focus to s-risks from artificial intelligence (AI), we implemented projects in global health and development, farm animal welfare, wild animal welfare, and effective altruism (EA) community building and fundraising.
Background on our strategy
For an overview of our strategic thinking, see the following pieces:
Gloor: Cause prioritization for downside-focused value systems
Althaus & Gloor: Reducing Risks of Astronomical Suffering: A Neglected Priority
Gloor: Altruists Should Prioritize Artificial Intelligence (somewhat dated)
The best work on reducing s-risks cuts across a broad range of academic disciplines and interventions. Our recent research agenda, for instance, draws on computer science, economics, political science, and philosophy. That means we must (a) work in many different disciplines and (b) find people who can bridge disciplinary boundaries. The longtermism community brings together people with diverse backgrounds who understand our prioritization and share it to some extent. For this reason, we focus on making s-risk reduction a well-established priority in that community.
Strategic goals
Inspired by GiveWell’s self-evaluations, we are tracking our progress with a set of deliberately vague performance questions:
Building long-term capacity. Have we made progress towards becoming a research group that will have an outsized impact on the research landscape and relevant actors shaping the future?
Research progress. Has our work resulted in research progress that helps reduce s-risks (both in-house and elsewhere)?
Research dissemination. Have we communicated our research to our target audience, and has the target audience engaged with our ideas?
Organizational health. Are we a healthy organization with an effective board, staff in appropriate roles, appropriate evaluation of our work, reliable policies and procedures, adequate financial reserves and reporting, and so forth?
Our team will answer these questions at the end of 2020.
Plans for 2020
Research
Note: We currently carry out some of our research as part of the Foundational Research Institute (FRI). We plan to consolidate our activities related to s-risks under one brand and website in early 2020.
We aim to investigate research questions listed in our research agenda titled “Cooperation, Conflict, and Transformative Artificial Intelligence.” We explain our focus on cooperation and conflict in the preface:
“S-risks might arise by malevolence, by accident, or in the course of conflict. (…) We believe that s-risks arising from conflict are among the most important, tractable, and neglected of these. In particular, strategic threats by powerful AI agents or AI-assisted humans against altruistic values may be among the largest sources of expected suffering. Strategic threats have historically been a source of significant danger to civilization (the Cold War being a prime example). And the potential downsides from such threats, including those involving large amounts of suffering, may increase significantly with the emergence of transformative AI systems.”
Topics covered by our research agenda include:
AI strategy and governance. What does the strategic landscape at the time of transformative AI (TAI) development look like? For example, will it be unipolar or multipolar, and how will offensive and defensive capabilities scale? What does this imply for cooperation failures? How can we shape the governance of AI to reduce the chances of catastrophic cooperation failures?
Credibility. What might the nature of credible commitment among TAI systems look like, and what are the implications for improving cooperation? Can we develop new theories (e.g., program equilibrium) to account for relevant features of AI?
Peaceful bargaining mechanisms. Can we further develop bargaining mechanisms that do not lead to destructive conflict (e.g., by implementing surrogate goals)?
Contemporary AI architectures. How can we make progress on reducing cooperation failures using contemporary AI tools (e.g., learning to solve social dilemmas among deep reinforcement learners)? (A minimal illustration of such a social dilemma follows this list.)
Humans in the loop. How do we expect human overseers or operators of AI systems to behave in interactions between humans and AI systems?
Foundations of rational agency, including bounded decision-making and acausal reasoning.
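To make the social-dilemma item above more concrete, here is a minimal sketch of the kind of cooperation failure it refers to: two independent reinforcement learners repeatedly play a Prisoner's Dilemma and typically converge to mutual defection. This is our own toy illustration, not code from the research agenda; the payoffs, the simple stateless Q-learners, and all hyperparameters are arbitrary choices, and actual work in this area uses richer sequential environments and deep RL agents.

```python
# Toy example: two independent, stateless Q-learners repeatedly play a
# Prisoner's Dilemma. Hypothetical payoffs and hyperparameters; illustration only.
import random

# PAYOFFS[(a1, a2)] = (reward to player 1, reward to player 2)
# Actions: 0 = cooperate, 1 = defect.
PAYOFFS = {
    (0, 0): (3, 3),  # mutual cooperation
    (0, 1): (0, 5),  # player 1 exploited
    (1, 0): (5, 0),  # player 2 exploited
    (1, 1): (1, 1),  # mutual defection
}


class QLearner:
    """Epsilon-greedy bandit learner with one value estimate per action."""

    def __init__(self, lr=0.1, epsilon=0.1):
        self.q = [0.0, 0.0]
        self.lr = lr
        self.epsilon = epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(2)
        return max(range(2), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Incremental update toward the observed reward; the game is stateless,
        # so there is no next-state term.
        self.q[action] += self.lr * (reward - self.q[action])


def run(rounds=20_000):
    p1, p2 = QLearner(), QLearner()
    defections = 0
    for t in range(rounds):
        a1, a2 = p1.act(), p2.act()
        r1, r2 = PAYOFFS[(a1, a2)]
        p1.update(a1, r1)
        p2.update(a2, r2)
        if t >= rounds - 1_000:  # measure behaviour near the end of training
            defections += (a1 == 1) + (a2 == 1)
    print(f"Defection rate over the final 1,000 rounds: {defections / 2_000:.0%}")


if __name__ == "__main__":
    random.seed(0)
    run()
```

Because defection strictly dominates cooperation in the stage game, each learner's estimated value for defecting ends up higher, so the pair settles into the inefficient mutual-defection outcome; this is precisely the kind of failure the agenda asks how to prevent.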
We did not list some topics in the research agenda because they did not fit its scope, but we consider them very important:
macrostrategy research on questions related to s-risk,
nontechnical work on strategic threats,
reducing the likelihood of s-risks from hatred, sadism, and other kinds of malevolence,
research on whether and how we should advocate rights for (sentient) digital minds,
reducing potential risks from genetic enhancement (especially in the context of TAI development),
AI strategy topics not captured by the research agenda (e.g., near misses),
AI governance topics not captured by the research agenda (e.g., the governance of digital minds),
foundational questions relevant to s-risk (e.g., metaethics, population ethics, and the feasibility and moral relevance of artificial consciousness), and
other potentially relevant areas (e.g., great power conflict, space governance, or promoting cooperation).
In practice, our publications and grants will be determined to a large extent by the ideas and motivation of the researchers. We understand the above list of topics as a menu for researchers to choose from, and we expect that our actual work will only cover a small portion of the relevant issues. We hope to collaborate with other AI safety research groups on some of these topics.
We are looking to grow our research team, so we would be excited to hear from you if you think you might be a good fit! We are also considering running a hiring round based on our research agenda as well as a summer research fellowship.
Research community
We aim to develop a global research community, promoting regular exchange and coordination between researchers whose work contributes to reducing s-risks.
Research workshops. Our previous workshops were attended by researchers from major AI labs and academic research groups. They resulted in several researchers becoming more involved with research relevant to s-risks. We plan to continue hosting research workshops near London and in the San Francisco Bay Area. In addition, we might host seminars at other research groups and explore the idea of hosting a retreat on moral reflection.
Research agenda dissemination. We plan to reach out proactively to researchers who may be interested in working on our agenda. We plan to present the agenda at several research organizations, on podcasts, and at EA Global San Francisco. We may also publish a complementary overview of research questions focused on macrostrategy and s-risks from causes other than conflict involving AI systems.
Grantmaking. We will continue to support work relevant to reducing s-risks through the EAF Fund. We plan to run at least one open grant application round. If we have sufficient capacity, we plan to explore more active forms of grantmaking, such as reaching out to academic researchers, laying the groundwork for setting up an academic research institute, or working closely with individuals who could launch valuable projects.
Community coordination. We see substantial benefits from bringing the existential-risk-oriented (x-risk-oriented) and s-risk-oriented parts of the longtermism community closer together. We believe that concern for s-risks should be a core component of longtermist EA, so we will continue to encourage x-risk-oriented groups and authors to consider s-risks in their key content and thinking. We will also continue to suggest that suffering-focused EAs consider, in their publications, the potential risks their work poses to people with other value systems (see below). At the end of 2020, we plan to reassess to what extent EAF should continue to play a coordinating role in the longtermist EA community.
Advising and in-person exchange. In the past, in-person exchange has been an important step for helping community members better understand our priorities and become more involved with our work. We will continue to advise people who are interested in reducing s-risks in their careers and research priorities. Next year, we might experiment with regular meetups and co-working at our offices.
Other activities
Raising for Effective Giving (REG). We will continue to fundraise from professional poker players for EA charities, including a significant percentage for longtermist organizations. Because fundraising for others does not directly contribute to our main priorities, and because REG is difficult to scale further, we plan to maintain it but not expand it.
Regranting. We currently enable German, Swiss, and Dutch donors to deduct their donations from their taxes when giving to EA charities around the world, leading to around $400,000 in additional counterfactual donations per year. Because this project does not further our main strategic goals, we are exploring ways of handing it over to a successor who can further improve our current service.
Organizational opportunities and challenges
Rebranding. We will likely rebrand the Foundational Research Institute (FRI) and stop using the Effective Altruism Foundation (EAF) brand (except as the name of our legal entities). We expect to announce our new brand in January. We are making this change for the following reasons:
we perceive the FRI brand as too grandiose and confusing given the scope and nature of our research, and have received unprompted negative feedback to this effect;
we do not want to use the EAF brand because it does not describe our activities well and is easily confused with the Centre for Effective Altruism (CEA), especially after our move to the UK.
Research office. We expect some of our remote researchers to join us at our offices in London sometime next year. We also hope to hire more researchers.
Lead researcher. Our research team currently lacks a lead researcher with academic experience and management skills. We hope that Jesse Clifton will take on this role in mid-2020.
Review of 2019
Research
S-risks from conflict. In 2019, we mainly worked on s-risks as a result of conflicts involving advanced AI systems:
Research agenda: Clifton: Cooperation, Conflict, and Transformative Artificial Intelligence: for a summary, see above.
Kokotajlo: The “Commitment Races” problem: In this post on the Alignment Forum, EAF Fund grantee Daniel Kokotajlo explores a dilemma in which agents have strong reasons to lock in commitments as early as possible, even though such premature commitments might lead to disaster.
We also circulated nine internal articles and working papers with the participants of our research workshops.
Foundational work on decision theory. This work might be relevant in the context of acausal interactions (see the last section of the research agenda):
MacAskill, Vallinder, Shulman, Oesterheld, Treutlein: The Evidentialist’s Wager: In this working paper, the authors present a wager for altruists in favor of following acausal decision theories, even if they assign significantly lower credence to those theories being correct. The basic idea is that, if acausal decision theories are correct, decision-makers whose choices are correlated with one’s own amplify the impact of one’s action many times over (a toy formalization is sketched below). Johannes Treutlein first explored the main idea in a blog post in 2018.
Oesterheld: Approval-directed agency and the decision theory of Newcomb-like problems: This paper on the implicit decision heuristics of trained AI agents has now been published in a special issue of Synthese.
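As a rough illustration of the wager mentioned above (our own simplification, not taken from the paper): suppose an altruist assigns credence $p$ to an acausal decision theory and $1-p$ to a causal one, and let $v$ be the value of the relevant action considered causally. If acausal theories are right, the action is correlated with the choices of roughly $N$ similar decision-makers, so its evidential value is on the order of $Nv$; if causal theories are right, following the acausal recommendation costs at most something on the order of $v$. Then

$$ \underbrace{p \cdot N v}_{\text{if acausal theories are correct}} \;-\; \underbrace{(1-p)\, v}_{\text{if causal theories are correct}} \;>\; 0 \quad \text{whenever } N > \frac{1-p}{p}, $$

so for large $N$ the acausally recommended action wins the expected-value comparison even under a low credence $p$.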
Miscellaneous publications:
Sotala: Multiagent Models of Mind (sequence)
Baumann: Risk factors for s-risks (independent researcher)
Kokotajlo: Soft takeoff can still lead to decisive strategic advantage (EAF Fund grantee)
Torges: Ingredients for creating disruptive research teams
Torges: Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators
Research community
Research workshops. We ran three research workshops on s-risks from AI. They improved our prioritization, helped us develop our research agenda, and informed the future work of some participants:
“S-risk research workshop,” Berlin, 2 days, March 2019, with junior researchers.
“Preventing disvalue from AI,” San Francisco Bay Area, 2.5 days, May 2019, with 21 AI safety and AI strategy researchers from leading institutes and AI labs (including DeepMind, OpenAI, MIRI, FHI). Participants rated the content at 4.3 out of 5 and the logistics at 4.5 out of 5 (weighted average). They said attending the event was about 4x as valuable as what they would have been doing otherwise (weighted geometric mean).
“S-risk research workshop,” near London, 3 days, November 2019, with a mixture of junior and more experienced researchers.
We have developed the capacity to host research workshops with consistently good quality.
Grantmaking through the EAF Fund. We ran our first application round and made six grants worth $221,306 in total. Another $600,000 remains in the fund that we have not yet been able to disburse (in part because we had planned to hire a Research Analyst for our grantmaking but were unable to fill the position).
Community coordination. We worked to bring the x-risk-oriented and s-risk-oriented parts of the longtermism community closer together. We believe this will result in synergies in AI safety and AI governance research and policy and perhaps also in macrostrategy research and broad longtermist interventions.
Background. Until 2018, there had been little collaboration between the x-risk-oriented and s-risk-oriented parts of the longtermism community, despite the overlap in philosophical views and cause areas (especially AI risk). For this reason, our work on s-risks received less engagement than it could have. Over the past four years, we worked hard to bridge this divide. For instance, we repeatedly sought feedback from other community members. In response to that feedback, we decided to focus less on public moral advocacy and more on research on reducing s-risks (which we consider more pressing anyway) and encouraged other s-risk-oriented community members to do so as well. We also visited other research groups to increase their engagement with our work.
Communication guidelines. This year, we further expanded these efforts. We worked with Nick Beckstead, then Program Officer for effective altruism at the Open Philanthropy Project, to develop a set of communication guidelines for discussing astronomical stakes:
Nick’s guidelines recommend highlighting beliefs and priorities that are important to the s-risk-oriented community. We are excited about these guidelines because we expect them to result in more contributions by outside experts to our research (at our workshops and on an ongoing basis) and a better representation of s-risks in the most popular EA content (see, e.g., the 80,000 Hours job board and previous edits to “The Long-Term Future”).
EAF’s guidelines recommend communicating about pessimistic views of the long-term future in a more nuanced manner: highlighting moral cooperation and uncertainty where appropriate, focusing more on practical questions where possible, and anticipating potential misunderstandings and misrepresentations. We see it as our responsibility to ensure that those who come to prioritize s-risks based on our writings also share our cooperative approach and commitment against violence. We expect the guidelines to reduce the risk that they do not, and to result in increased interest in s-risks from major funders (including the Open Philanthropy Project’s grant; see below). We expect both guidelines to contribute to a more balanced discussion about the long-term future.
Nick put substantial effort into ensuring his guidelines were read and endorsed by large parts of the community. Similarly, we reached out to the most active authors and sent our guidelines to them. Some community members suggested that these guidelines should be transparent to the community; we agree and are therefore sharing them publicly.
Longer-term plans. We believe that these activities are only the beginning of longer and deeper collaborations. We plan to reassess the costs and benefits at the end of 2020.
Research community.
We advised 13 potential researchers and professionals interested in s-risks in their careers.
We sent out our first research newsletter to about 70 researchers.
We started providing scholarships and more systematic operations support for researchers.
We improved our online communication platform for researchers (Slack workspace with several channels) and have received positive feedback on the discussion quality.
Research management. We published a report on disruptive research groups. The main lessons for us were: (1) we should seriously consider how to address our lack of research leadership, and (2) we should improve the physical proximity of our research staff.
Organizational updates
We moved to London. We relocated our headquarters from Berlin to London because this allows us to better attract and retain staff and to collaborate with other researchers and EA organizations in London and Oxford. Our team of six will work from our offices in Primrose Hill, London.
Hiring. We have hired Jesse Clifton to join our research team part-time. Jesse is pursuing a PhD in statistics at North Carolina State University and is the primary author of our technical research agenda.
Open Philanthropy Project grant. The Open Philanthropy Project awarded us a $1 million grant over two years to support our research, general operations, and grantmaking.
Strategic clarity. At the end of 2018, we were still substantially uncertain about the strategic goals of our organization. We have since refined our mission and strategy and have overhauled our website accordingly.
Other activities
We doubled Zurich’s development cooperation budget and made it more effective. Thanks to a ballot initiative launched by EAF, the city of Zurich’s development cooperation budget is increasing from $3 million to $8 million per year and will be allocated “based on the available scientific research on effectiveness and cost-effectiveness.” This appears to be the first time that Swiss legislation on development cooperation mentions effectiveness requirements. See the EA Forum article: EAF’s ballot initiative doubled Zurich’s development aid.
Fundraising from professional poker players (Raising for Effective Giving). In 2018, we raised $5,160,435 for high-impact charities to which the poker players would otherwise not have donated (mainly thanks to our fundraising efforts in previous years). After subtracting expenses and opportunity costs, the net impact was $4,941,930. About 34% of the total went to longtermist charities. We expect almost as good results in 2019. We dropped previous plans to reach out to wealthy individuals and provide them with philanthropic advice.
Tax deductibility for German, Swiss, and Dutch donors. We regranted $2,494,210 in tax-deductible donations to other high-impact charities, leading to an estimated $400,000 in contributions that the donors would not have made otherwise. Accounting for expenses and opportunity costs, the net impact was small ($57,851), though this ignores benefits from getting donors involved with EA. We expect similar results in 2019. We are exploring ways of handing this project over to a successor.
In January, Wild-Animal Suffering Research merged with Utility Farm to form the Wild Animal Initiative. As part of this process, this project became fully independent from us. We wish them all the best with their efforts!
Swiss ballot initiative for a ban on factory farming. Sentience Politics, a spin-off of ours, successfully collected the 100,000 signatures required to launch a binding ballot initiative in Switzerland. The initiative demands a ban on the production and import of animal products that do not meet current organic meat production standards. We expect the initiative to come to the ballot in 2023. Surveys suggest that the initiative has a nonnegligible chance (perhaps 1–10%) of passing. Much of the groundwork for the initiative was laid at a time when Sentience Politics was still part of EAF.
Mistakes and lessons learned
Research output. While we were satisfied with our internal drafts, we fell short of our goals for producing written research output (for publication, or at least for sharing with peers).
Handing over community building in Germany. As planned, we handed off our community-building work in the German-speaking area to CEA and EA local groups. In August, we realized that we could have done more to ensure a smooth transition for the national-level coordination of the community in Germany. As a result, we dedicated some additional resources to this in the second half of this year and improved our general heuristics for handing over projects to successors.
Feedback and transparency for our communication guidelines. We did not seek feedback on the guidelines as systematically as we now think we should have. As a result, some people in our network were dissatisfied with the outcome. Moreover, while we were planning to give a general update on our efforts in our end-of-year update, we now believe it would have been worth the time to publish the full guidelines sooner.
Hiring. We planned to hire a Research Analyst for grantmaking and an Operations Analyst and made two job offers. One was not accepted; the other did not work out during the first few months of employment. In hindsight, it might have been better to hire even more slowly and to ensure we better understood the roles we were hiring for. Doing so would have allowed us to make a more convincing case for the positions and to hire from a larger pool of candidates.
Anticipating implications of strategic changes. When we decided to shift our strategic focus towards research on s-risks, we were insufficiently aware of how this would change everyone’s daily work and responsibilities. We now think we could have anticipated these changes more proactively and taken measures to make the transition easier for our staff.
Strategic planning procedure. Due to repeated organizational changes over the past years, we had not developed a reliable annual strategic planning routine. This year, we did not recognize early enough how important building such a process is. We plan to prioritize this in 2020.
Communicating our move to London. We did not communicate our decision to relocate from Berlin to London very carefully in some instances. As a result, we received some negative feedback from people who did not support our decision and were under the impression we had not thought carefully about it. We invested some time to provide more background on our reasoning.
Financials
Budget 2020: $994,000 (7.4 expected full-time equivalent employees). Our per-staff expenses have increased compared with 2019 because we do not have access to free office space anymore, and the cost of living in London is significantly higher than in Berlin.
EAF reserves as of early November: $1,305,000 (corresponds to 15 months of expenses; excluding EAF Fund balance).
EAF Fund balance as of mid-December: $600,000.
Room for more funding: $185,000 (to attain 18 months of reserves); stretch goal: $700,000 (to attain 24 months of reserves). A rough calculation of these figures follows below.
We invest funds that we are unlikely to deploy soon in the global stock market as per our investment policy.
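For readers who want to see how these targets relate to the figures above, here is a back-of-the-envelope check (our own reconstruction, assuming the targets are derived from the 2020 budget and the early-November reserves, with some rounding):

$$ 1.5 \times \$994{,}000 \approx \$1{,}491{,}000, \qquad \$1{,}491{,}000 - \$1{,}305{,}000 \approx \$185{,}000 \quad (18\ \text{months of reserves}) $$

$$ 2 \times \$994{,}000 = \$1{,}988{,}000, \qquad \$1{,}988{,}000 - \$1{,}305{,}000 \approx \$700{,}000 \quad (24\ \text{months of reserves}) $$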
How to contribute
Stay up to date. Subscribe to our supporter updates and follow our Facebook page.
Work with us. We are always hiring researchers and might also hire for new positions in research operations and management. If you are interested, we would be very excited to hear from you!
Get career advice. If you are interested in our priorities, we are happy to discuss your career plans with you. Schedule a call now.
Engage with our research. If you are interested in discussing our research with our team and giving feedback on internal drafts, please reach out to Stefan Torges.
Make a donation. We aim to raise $185,000 (stretch goal: $700,000) for EAF. (We can set up a donor-advised fund (DAF) for value-aligned donors who give at least $100,000 over two years.)
Recommendation for donors
We think it makes sense for donors to support us if:
you believe we should prioritize interventions that affect the long-term future positively,
(a) you assign significant credence to some form of suffering-focused ethics, (b) you think s-risks are not unlikely compared to very positive future scenarios, and/or (c) you think work on s-risks is particularly neglected and reasonably tractable, and
you assign significant credence to our prioritization and strategy being sound, i.e., you consider our work on AI and/or non-AI priorities sufficiently pressing (e.g., you assign a nontrivial probability (at least 5–10%) to the development of transformative AI within the next 20 years).
For donors who do not agree with these points, we recommend giving to the donor lottery (or the EA Funds). We recommend that donors who are interested in the EAF Fund support EAF instead because the EAF Fund has a limited capacity to absorb further funding.
Would you like to support us? Make a donation.
We are interested in your feedback
If you have any questions or comments, we look forward to hearing from you; you can also send us feedback anonymously. We greatly appreciate any thoughts that could help us improve our work. Thank you!
Acknowledgments
I would like to thank Tobias Baumann, Max Daniel, Ruairi Donnelly, Lukas Gloor, Chi Nguyen, and Stefan Torges for giving feedback on this article.