A long-run perspective on strategic cause selection and philanthropy

Co-written by Nick Beckstead and Carl Shulman

Introduction

A philanthropist who will remain anonymous recently asked us what we would do if we didn’t face financial constraints. We gave a detailed answer that we thought we might as well share with others who may also find our perspective interesting. We gave the answer largely in the hope of creating some interest in our way of thinking about philanthropy and in some of the causes we consider worth further investigation, and because we thought the answer would be fruitful for conversation.

Our honest answer to your question

Our honest answer to your question is that we would systematically examine a wide variety of causes and opportunities with the intention of identifying the ones which could use additional money and talent to produce the best long-run outcomes. This would look a lot like setting up a major foundation—which is unsurprising, given that many people in this situation do set up foundations—so we will concentrate on the distinguishing or less typical features of our approach:
  1. Unlike many foundations, we would place a great deal of emphasis on selecting the highest impact program areas, rather than selecting program areas for other reasons and working hardest to find the best opportunities within those areas. Like GiveWell, we believe that the choice of program areas may be one of the most important decisions a major philanthropist makes and is consistently underemphasized.

  2. We would invest heavily in learning, in funding systematic examination of the spectrum of opportunities, and in transparent publication of our process and findings.

  3. In addition to sharing information about giving opportunities, we would share detailed information about talent gaps, encouraging people with the right abilities to seek out opportunities in promising areas that are constrained by people rather than money.

  4. We would measure impact primarily in terms of very long-run positive consequences for humanity, as outlined in Nick’s PhD thesis.

  5. We would be skeptical of our intuitions, and check them through such means as external review, the collection of track records for our predictions, structured evaluations, and the use of simple and sophisticated methods of aggregating and improving on expert opinion (e.g. the forecasting training and aggregation methods developed by Philip Tetlock, calibration training, prediction markets, and anonymous surveys of appropriate experts).
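As an illustration of item 5, here is a minimal sketch of what scoring a collected track record of probabilistic predictions might look like. It is our own hypothetical example rather than any existing tool: the predictions and outcomes are placeholders, and it simply computes a Brier score and a rough calibration table.

```python
from collections import defaultdict

# Hypothetical track record: (stated probability that an event occurs, whether it occurred).
track_record = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.3, False), (0.2, True), (0.1, False), (0.95, True),
]

# Brier score: mean squared error of the probabilities; lower is better, 0 is perfect.
brier = sum((p - (1.0 if occurred else 0.0)) ** 2
            for p, occurred in track_record) / len(track_record)
print(f"Brier score: {brier:.3f}")

# Rough calibration check: within each probability bucket, compare the stated
# probability to the observed frequency of the event.
buckets = defaultdict(list)
for p, occurred in track_record:
    buckets[round(p, 1)].append(occurred)

for bucket in sorted(buckets):
    outcomes = buckets[bucket]
    freq = sum(outcomes) / len(outcomes)
    print(f"Forecasts near {bucket:.1f}: event occurred {freq:.0%} of the time (n={len(outcomes)})")
```

In practice such a record would cover many resolved questions, and the same scoring could be applied to external experts or prediction markets to compare different methods of forming judgments.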

We understand that you probably aren’t contacting us about setting up a foundation, but you might be interested in hearing more about the approach and assumptions above. So we’ll say a few things about how we would go about strategically selecting causes, and about our leading hypotheses concerning which causes are most promising to investigate further.

Briefly,

  1. We believe that maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity. We think this has significant practical implications when making trade-offs between short-term welfare and factors such as the broad functioning of society, our ability to face major global challenges and opportunities, and society’s resilience to global catastrophes.

  2. Five causes we are interested in investigating first are immigration reform, methods for improved forecasting, an area we call “philanthropic infrastructure,” global catastrophic risks, and meta-research. These would be areas for investigation and experimentation, and we would pursue them in the short run primarily for the sake of gaining information about how attractive they are in comparison with other areas. There are many other causes we would like to investigate early on, and we would begin investigating those causes less deeply and in parallel with our investigations of the causes we are most enthusiastic about. We’d be happy to discuss the other causes with you as well.

We elaborate on these ideas below.

Is the long run actionable in the short run?

As just mentioned, we believe that maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity, and that this has strategic implications for people aiming to maximize good accomplished with their resources. We think these implications are significant when choosing between causes or program areas, and less significant when comparing opportunities within program areas.

There is a lot of detail behind this perspective and it is hard to summarize briefly. But here is an attempt to quickly explain our reasoning:

  1. We think humanity has a reasonable probability of lasting a very long time, becoming very large, and/​or eventually enjoying a very high quality of life. This could happen through radical (or even moderate) technological change, if industrial civilization persists as long as agriculture has persisted (though upper limits for life on Earth are around a billion years), or if future generations colonize other regions of space. Though we wouldn’t bet on very specific details, we think some of these possibilities have a reasonable probability of occurring.

  2. Because of this, we think that, from an impartial perspective, almost all of the potential good we can accomplish comes through influencing very long-run outcomes for humanity.

  3. We believe long-run outcomes may be highly sensitive to how well humanity handles key challenges and opportunities, especially challenges from new technology, in the next hundred years or so.

  4. We believe that (especially with substantial resources) we could have small but significant positive impacts on how effectively we face these challenges and opportunities, and thereby affect expected long-run outcomes for humanity.

  5. We could face these challenges and opportunities more effectively by preparing for specific challenges and opportunities (such as nuclear security and climate change in the past and present, and advances in synthetic biology and artificial intelligence in the future), or by enhancing humanity’s general capacities to deal with these challenges and opportunities when we face them (through higher rates of economic growth, improved political coordination, improved use of information and decision-making for individuals and groups, and increases in education and human capital).

We believe that this perspective diverges from the recommendations of a more short-run focus in a few ways.

First, when we consider attempts to prepare for global challenges and opportunities in general, we weigh such factors as economic output, log incomes, education, quality-adjusted life-years (QALYs), scientific progress, and governance quality differently than we would if we put less emphasis on long-run outcomes for humanity. In particular, a more short-term focus would lead to a much stronger emphasis on QALYs and log incomes, which we suspect could be purchased more cheaply through interventions targeting people in developing countries, e.g. through public health or more open migration. Attending to long-run impacts creates a closer contest between such interventions and those that increase economic output or institutional quality (and thus the quality of our response to future challenges and opportunities). Our perspective would place an especially high premium on intermediate goals such as the quality of forecasting and the transmission of scientific knowledge to policy makers, which are disproportionately helpful for navigating global challenges and opportunities.
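To make the arithmetic behind the point about log incomes concrete, here is a small worked sketch. The income figures are made up for illustration; the point is only that, under a logarithmic utility of income, the same absolute gain is worth far more to a poorer recipient, which is why a short-term welfare focus tilts strongly toward interventions in developing countries.

```python
import math

# Illustrative only: hypothetical annual incomes in USD, not data.
incomes = {"low-income worker": 1_000, "high-income worker": 50_000}
gain = 1_000  # the same absolute income gain for each

for label, income in incomes.items():
    # Under log utility, the welfare gain is ln(income + gain) - ln(income).
    delta_u = math.log(income + gain) - math.log(income)
    print(f"{label}: ln({income + gain}) - ln({income}) = {delta_u:.3f}")

# Output: roughly 0.693 (ln 2) for the poorer worker versus roughly 0.020
# (ln 1.02) for the richer one, about a 35-fold difference from the same $1,000.
```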

Second, when specific major challenges or opportunities for affecting long-run outcomes for humanity can be identified, our perspective favors treating them with the utmost seriousness. We believe that reducing the risk of catastrophes with the potential to destroy humanity—which we call “global catastrophic risks” or sometimes “existential risks”—has an unusually clear and positive connection with long-run outcomes, and this is a reason we are unusually interested in problems in this area.

Third, the long-run perspective values resilience against permanent disruption or worsening of civilization over and above resilience to short-term catastrophe. From a long-run perspective, there is an enormous difference between a collapse of civilization followed by eventual recovery and a permanent collapse of civilization. This point has been made by philosophers like Derek Parfit (very memorably at the end of his book Reasons and Persons) and Peter Singer (in a short piece he wrote with Nick Beckstead and Matt Wage).

Five causes we would like to investigate more deeply

Immigration reform

What it is: By “immigration reform,” we mean loosening immigration restrictions in rich countries with stronger political institutions, especially for people who are migrating from poor countries with weaker political institutions. We include both efforts to allow more high-skill immigration and efforts to allow more immigration in general. Some people to talk to in this area include Michael Clemens, Lant Pritchett, and others at the Center for Global Development. Fwd.us and the Krieble Foundation are two examples of organizations working in this area.

Why we think it is promising: Many individual workers in poor countries could produce much more economic value and better realize their potential in other ways if they lived in rich countries, meaning that much of the world’s human capital is being severely underutilized. This claim is unusually well supported by basic economic theory and the views of a large majority of economists. Many concerns have been raised, but we think the most plausible ones involve political feasibility and political and cultural consequences of migration.

Philanthropic infrastructure

What it is: By “philanthropic infrastructure,” we mean activities that expand the flexible capabilities of those trying to do good in a cause-neutral, outcome-oriented way. Some organizations in this area we are most familiar with include charity evaluator GiveWell, donation pledge organizations (Giving What We Can, The Life You Can Save, the Giving Pledge), and 80,000 Hours (an organization that provides information to help people make career choices that maximize their impact). There are many examples we are less familiar with, such as the Bridgespan Group and the Center for Effective Philanthropy. (Disclosure: Nick Beckstead is on the board of trustees for the Centre for Effective Altruism, which houses Giving What We Can, The Life You Can Save, and 80,000 Hours, though The Life You Can Save is substantially independent.)

Why we think it is promising: We are interested in this area because we want to build up resources which are flexible enough to ultimately support the causes and opportunities that are later found to be the most promising, and because we see a lot of growth in this area and think early investments may result in more money and talent available for very promising opportunities later on.

Methods for improved forecasting

What it is: Forecasting is challenging, and very high accuracy is difficult to obtain in many of the domains of greatest interest. However, a number of methods have been developed to improve forecasting accuracy through training, aggregation of opinion, incentives, and other means. Some examples include expert judgment aggregation algorithms, probability and calibration training, and prediction markets. We are excited about recent progress in this area in a prediction tournament sponsored by IARPA, which Philip Tetlock’s Good Judgment Project is currently winning.

Why we think it is promising: Improved forecasting could be useful in a wide variety of political and business contexts. Improved forecasting over a period of multiple years could improve overall preparedness for many global challenges and opportunities. Moreover, strong evidence of the superior performance of some methods of forecasting over others could help policymakers base decisions on the best available evidence. We currently have limited information about room for more funding for existing organizations in this area.
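As one concrete example of the aggregation methods mentioned above, here is a minimal sketch of pooling several forecasters’ probabilities by averaging in log-odds space and then extremizing the result. This is our own illustration of a simple variant of techniques studied in the IARPA tournament, not the Good Judgment Project’s actual algorithm, and the extremizing exponent is a hypothetical choice that would in practice be fit to a track record of resolved questions.

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def inv_logit(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def aggregate(probabilities: list[float], extremize: float = 1.5) -> float:
    """Average forecasts in log-odds space, then push the pooled estimate away from 0.5."""
    mean_log_odds = sum(logit(p) for p in probabilities) / len(probabilities)
    return inv_logit(extremize * mean_log_odds)

# Hypothetical panel of forecasts for a single yes/no question.
panel = [0.6, 0.7, 0.65, 0.8]
print(f"Simple average:      {sum(panel) / len(panel):.2f}")
print(f"Extremized log-odds: {aggregate(panel):.2f}")
```

The intuition behind extremizing is that independent forecasters each hold only part of the available evidence, so a pooled forecast that merely averages their probabilities tends to be underconfident.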

Global catastrophic risk

What it is: Opportunities in this area focus on identifying and mitigating specific threats of human extinction, such as large asteroid impacts and tail risks from climate change and nuclear winter. Examples of interventions in this category include tracking asteroids (which has largely been completed for asteroids that threaten civilization, though not for comets), improving resilience of the food supply through cellulose-to-food conversion, disease surveillance (for natural or man-made pandemics), advocacy for non-proliferation of nuclear weapons, and research on other possible risks and methods for mitigating them. An unusual view we take seriously is that some of the most significant risks in this area will come from new technologies that may emerge this century, such as advanced artificial intelligence and advanced biological weapons. (We also believe technologies of this type have massive upside potential, which must be thought about carefully as we think about the risks.) Notable defenders of views in this vicinity include Martin Rees, Richard Posner, and Nick Bostrom. (Disclosure: Nick Bostrom is the Director of the Future of Humanity Institute, where Nick Beckstead is a research fellow and Carl Shulman is a research associate.)

Why we think it is promising: Progress in this area has a clear relationship with long-run outcomes for humanity. There have been some very good buys in this area in the past, such as early asteroid tracking programs. Apart from climate change, this area receives only around 0.1% of total foundation spending, and little of that carefully distinguishes between large catastrophes and catastrophes with the potential to significantly change long-run outcomes for humanity.

Meta-research

What it is: We will make use of GiveWell’s explanation of the cause area here and here.

Why we think it is promising: We believe that many improvements in meta-research can accelerate scientific progress and make it easier for non-experts to discern what is known in a field. We believe this is likely to systematically improve our ability to navigate global challenges and opportunities. From a long-run perspective, the relative importance of meta-research’s different impacts diverges from a short-term analysis: for example, the degree to which policymakers can understand the state of scientific knowledge at any given level of progress looms larger relative to the simple acceleration of progress.