Updates from the Global Priorities Institute and how to get involved
The Global Priorities Institute (GPI) conducts research in philosophy and economics on how to do the most good. GPI aims to establish global priorities research as an academic field, so that policymakers and intergovernmental bodies around the world can make decisions based on what will improve the world most. You can learn more about GPI in Hilary Greaves’ recent episode of the 80,000 Hours podcast, Michelle Hutchinson’s earlier episode, or Michelle’s EAG talk.
GPI is looking for a Head of Research Operations to lead its next stage of growth. Finding the right person for this role is crucial to GPI achieving its mission. If it sounds interesting to you, please consider applying!
Recent Progress
GPI officially became an institute within the University of Oxford in January this year. We’ve had initial success in building a strong research team in philosophy, in producing papers, and in building relationships with academics based at other institutions. In the coming year we are prioritising building up the economics side of GPI, which has been more challenging because GPI was founded by philosophers.
Will MacAskill recently gave the keynote talk at the 2018 Conference for the International Society for Utilitarian Studies, which covered a major part of GPI’s research agenda. The full talk is available here.
We’ve set up two distinguished lecture series in Economics and Philosophy. The inaugural Atkinson Memorial Lecture was given by Professor Yew-Kwang Ng. The inaugural Parfit Memorial Lecture will take place in early 2019 and will be given by Associate Professor Lara Buchak, who is known for her work on risk aversion.
We’ve run our first summer programme for early-career researchers (the Summer Research Visitor Programme), which is intended to attract top early-career researchers and graduate students to our research interests. We’ve already filled all of the slots for philosophers on our 2019 visitor programme, but still have some slots remaining for economists.
We’ve set up the Parfit and Atkinson Scholarship programmes for graduate students in economics and philosophy to come to Oxford and work on global priorities research topics through the DPhil programme. We’re also offering prizes for students already studying for a DPhil at Oxford (in either economics or philosophy) to do the same.
Our Research
At the end of last year, we released our research agenda, which lays out a preliminary sketch of the topics which we think are most important, neglected, and tractable for GPI to work on. We will soon be releasing a second version of the research agenda, drafted with GPI’s new economics team.
In carrying out research, we’ve tested a novel model for academic institutes: collaborative working groups. Our researchers meet to discuss and brainstorm a particular topic from the research agenda. Based on these brainstorms, researchers draw up a prioritised list of possible research articles. This allows us to canvass a wide array of potential ideas and then prioritise those which are most promising and most impactful (in terms of engaging other academics and of providing value to the EA community). So far, we’ve found this model efficient at identifying the best avenues of research and at producing high-quality papers, and we plan to continue using it for the foreseeable future.
In 2018, our researchers focussed on the following topics in their working groups:
Longtermism: the view that the primary determinant of the moral value of our actions today is the effect of those actions on the very long-run future. What are the most compelling objections to longtermism? If we exclude actions which reduce existential risk, does longtermism still hold, or should we expect the consequences of our actions to wash out in the long run?
Extinction risk and risk/ambiguity aversion: Should agents who prefer prospects with less risk (or ambiguity) prioritise short-term, highly certain interventions (such as in global health) over longer-term, highly uncertain interventions such as those which mitigate existential risk? This working group has already generated a paper by Andreas Mogensen, which shows that this depends on whether the agent is concerned just with the impact of their actions or with the total value in the world.
Fanaticism: allowing decisions to be determined by small probabilities of extremely large payoffs (or extremely dire consequences). This may be seen as an objection to the use of expected value. But justifications of existential risk mitigation rely on the use of expected value, and sometimes even appear to endorse fanaticism. Is this a problem for those justifications? Should we endorse a principle of ‘timidity’? Teruji Thomas is currently writing up our findings on this. (A toy calculation illustrating the worry follows this list.)
Indirect effects: The cost-effectiveness of charitable interventions is typically evaluated by comparing the intervention’s direct benefits, and only its direct benefits, to its costs. How do our evaluations change when we incorporate indirect effects? Could indirect effects be the most important determinant of the moral value of most actions?
Deliberation ladders: Suppose you have undergone a series of significant changes in your moral views. Should you expect to undergo further changes and, if so, how should you act now? Should we be far less confident in our moral views than we are?
Donor coordination: Given multiple actors deciding how to distribute resources for altruistic purposes, how will they, and how should they, act? How can we use donor coordination strategies to leverage more donations to effective causes and to reduce spending in zero-sum games such as political campaign funding?
Long-run economic growth: How is standard growth theory altered when we consider the catastrophic risks of new technologies rather than just the increases in consumption they cause? Given these risks, what is the optimal rate of growth? And what can we say about the optimal rate of growth in any given country, when growth in one country imposes risks on other countries?
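To make the fanaticism worry concrete, here is a toy calculation with purely illustrative numbers (a hypothetical example of ours, not a result from the working group). Compare a ‘safe’ option that produces 10^3 units of value with certainty against a ‘long shot’ with a 10^-10 chance of producing 10^20 units of value:

\[
\mathbb{E}[\text{safe}] = 1 \times 10^{3} = 10^{3},
\qquad
\mathbb{E}[\text{long shot}] = 10^{-10} \times 10^{20} = 10^{10}.
\]

Expected value maximisation favours the long shot by seven orders of magnitude even though it almost certainly achieves nothing. Fanaticism accepts that verdict; a principle of ‘timidity’ would reject it.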
Join GPI
GPI is currently hiring for a new Head of Research Operations.
The Head of Research Operations is a central part of GPI and necessary for making GPI’s vision a reality. The person in this role will manage all operational aspects of GPI and will have a great deal of autonomy. The key responsibilities are:
Helping to develop GPI’s long-term strategy and plan the Institute’s activities over the coming years, e.g., seminars, visitor programmes, scholarships, and conferences.
Doing the necessary logistical work to make those activities happen.
Managing communications—representing GPI externally, promoting global priorities research to academics, and presenting GPI’s work to public audiences.
Fundraising, particularly from private donors.
Managing GPI’s finances.
Recruiting and managing a larger operational team to share these responsibilities as GPI continues to grow.
We’re looking for someone with an analytic and entrepreneurial mindset, a demonstrated track record of independently planning and managing complex projects, excellent oral communication skills, and experience of working well in a team.
If you’re interested in learning more about the role, you can find more detail on what it involves, what we’re looking for, and how to apply here.
In addition to the Head of Research Operations role, there are opportunities for academics to get involved with GPI through our scholarships, prizes, and visitor programme. You can see the full list of opportunities we have open at any time here.
Appendix—GPI’s current working papers
Here’s a snapshot of some of the papers the GPI team is currently working on.
Andreas Mogensen—Long-termism for risk averse altruists
Abstract:
According to Long-termism, altruistic agents should try to beneficially influence the long-run future, as opposed to aiming at short-term benefits. The likelihood that I can significantly impact the long-term future of humanity is arguably very small, whereas I can be reasonably confident of achieving significant short-term goods. However, the potential value of the far future is so enormous that even an act with only a tiny probability of preventing an existential catastrophe should apparently be assigned much higher expected value than an alternative that realizes some short-term benefit with near certainty. This paper explores whether agents who are risk averse should be more or less willing to endorse Long-termism, looking in particular at agents who can be modelled as risk avoidant within the framework of risk-weighted expected utility theory. I find that risk aversion may be more friendly to Long-termism than risk neutrality. However, I find that there is some reason to suppose that ambiguity aversion disfavours Long-termism.
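For context (a standard textbook formulation, not taken from the paper itself): in Buchak’s risk-weighted expected utility theory, a prospect whose outcomes are ordered from worst to best, $x_1 \le \dots \le x_n$, received with probabilities $p_1, \dots, p_n$, is valued as

\[
\mathrm{REU} = u(x_1) + \sum_{i=2}^{n} r\!\left(\sum_{j=i}^{n} p_j\right)\left[u(x_i) - u(x_{i-1})\right],
\]

where $r$ is the agent’s risk function. A risk-avoidant agent has $r(p) < p$ for intermediate $p$ (for example, $r(p) = p^2$), which discounts improvements that arrive only with low probability relative to ordinary expected utility.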
Christian Tarsney—Exceeding expectations: Stochastic dominance as a general decision theory (full paper)
Abstract:
The principle that rational agents should maximize expectations is intuitively plausible with respect to many ordinary cases of decision-making under uncertainty. But it becomes increasingly implausible as we consider cases of more extreme, low-probability risk (like Pascal’s Mugging), and intolerably paradoxical in cases like the St. Petersburg Lottery and the Pasadena Game. In this paper I show that, under certain assumptions, stochastic dominance reasoning can capture many of the plausible implications of expectational reasoning while avoiding its implausible implications. More specifically, when an agent starts from a condition of background uncertainty about the choiceworthiness of her options representable by a probability distribution over possible degrees of choiceworthiness with exponential or heavier tails and a sufficiently large scale parameter, many expectation-maximizing gambles that would not stochastically dominate their alternatives “in a vacuum” turn out to do so in virtue of this background uncertainty. Nonetheless, even under these conditions, stochastic dominance will generally not require agents to accept extreme gambles like Pascal’s Mugging or the St. Petersburg Lottery. I argue that the sort of background uncertainty on which these results depend is appropriate for any agent who assigns normative weight to aggregative consequentialist considerations, i.e., who measures the choiceworthiness of an option in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
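As background (a standard definition, not a quotation from the paper): an option $A$ first-order stochastically dominates an option $B$ just in case, for every level of choiceworthiness $t$,

\[
\Pr[A \ge t] \;\ge\; \Pr[B \ge t],
\]

with strict inequality for at least one $t$. Rejecting dominated options in this sense requires no particular way of trading probabilities off against payoffs, which is what makes stochastic dominance a candidate replacement for expectation-maximising reasoning.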
Rossa O’Keeffe-O’Donovan—Water, spillovers and free riding: Provision of local public goods in a spatial network (full paper)
Abstract:
Both state and non-governmental organizations provide public goods in developing countries, potentially generating inefficiencies where they lack coordination. In rural Tanzania, more than 500 organizations have installed hand-powered water pumps in a decentralized fashion. I estimate the costs of this fragmented provision by studying how communities’ pump maintenance decisions are shaped by strategic interactions between them. I model the maintenance of pumps as a network game between neighboring communities, and estimate this model using geo-coded data on the location, characteristics and functionality of water sources, and human capital outcomes. Estimation combines maximum simulated likelihood with a clustering algorithm that partitions the data into geographic clusters. Using exogenous variation in the similarity of water sources to identify spillover and free riding effects between communities, I find evidence of maintenance cost-reduction spillovers among pumps of the same technology and strong water source free-riding incentives. As a result, standardization of pump technologies would increase pump functionality rates by 6 percentage points. Moreover, water collection fees discourage free riding and would increase pump functionality rates by 11 percentage points if adopted universally. This increased availability of water would have a modest positive effect on child survival and school attendance rates.
Andreas Mogensen—Meaning, medicine, and merit (full paper)
Abstract:
Given the inevitability of scarcity, should public institutions ration healthcare resources so as to prioritize those who contribute more to society? Intuitively, we may feel that this would be somehow inegalitarian. I begin by showing that it is surprisingly hard to substantiate this belief. I then argue that the egalitarian objection to prioritizing treatment on the basis of patients’ usefulness to others is best thought of as semiotic: i.e., as having to do with what this practice would mean, convey, or express about each person’s standing. I explore the implications of this conclusion when taken in conjunction with the observation that semiotic objections are generally flimsy, failing to identify anything wrong with a practice as such and having limited capacity to generalize beyond particular contexts. In particular, I consider the implications for evaluating rationing decisions concerning life and health in the sphere of private philanthropy, where donors might wish to give preference to beneficiaries with greater instrumental value to others.
Philip Trammell—Fixed-point solutions to the regress problem in normative uncertainty (full paper)
Abstract:
When we are faced with a choice among acts, but are uncertain about the true state of the world, we may be uncertain about the acts’ “choiceworthiness”. Decision theories guide our choice by making normative claims about how we should respond to this uncertainty. If we are unsure which decision theory is correct, however, we may remain unsure of what we ought to do. Given this decision-theoretic uncertainty, meta-theories attempt to resolve the conflicts between our decision theories… but we may be unsure which meta-theory is correct as well. This reasoning can launch a regress of ever-higher-order uncertainty, which may leave one forever uncertain about what one ought to do. There is, fortunately, a class of circumstances under which this regress is not a problem. If one holds a cardinal understanding of subjective choiceworthiness, and accepts certain other criteria (which are too weak to specify any particular decision theory), one’s hierarchy of metanormative uncertainty ultimately converges to precise definitions of “subjective choiceworthiness” for any finite set of acts. If one allows the metanormative regress to extend to the transfinite ordinals, the convergence criteria can be weakened further. Finally, the structure of these results applies straightforwardly not just to decision-theoretic uncertainty, but also to other varieties of normative uncertainty, such as moral uncertainty.
Andreas Mogensen & Will MacAskill—The paralysis argument (full paper)
Abstract:
This paper explores the difficulties that arise when we apply the Doctrine of Doing and Allowing (DDA) to the indirect and unforeseeable long-run consequences of our actions. Given some plausible empirical assumptions about the long-run impact of our actions, the DDA appears to entail that we should aim to do as little as possible because we cannot know the distribution of benefits and harms that result from our actions over the long term. We consider a number of objections to the argument and suggest what we think is the most promising response. This involves accepting a highly demanding morality of beneficence with a long-termist focus. This may be taken to represent a striking point of convergence between consequentialist and deontological moral theories.
Andreas Mogensen—Doomsday redux (full paper)
Abstract:
This paper considers the argument that because we should regard it as a priori very unlikely that we are among the most important people who will ever exist, we should decrease our confidence in theories on which we are living during a period of high extinction risk that will be followed by a long period of high safety. This may involve substantially increasing our confidence that the human species will become extinct within the near future. The argument is a descendant of the Carter-Leslie Doomsday Argument. In showing why the latter argument fails, I argue that the former fails to inherit its defects, and should therefore be taken seriously even if we reject the Doomsday Argument.
Christian Tarsney—Metanormative regress: An escape plan (full paper)
Abstract:
How should an agent decide what to do when she is uncertain about basic normative principles? Several philosophers have suggested that such an agent should follow some second-order norm: e.g., she should comply with the first-order normative theory she regards as most probable, choose the option that’s most likely to be objectively right, or maximize expected objective value. But such proposals face a potentially-fatal difficulty: If an agent who is uncertain about first-order norms must invoke second-order norms to reach a rationally guided decision, then an agent who is uncertain about second-order norms must invoke third-order norms—and so on ad infinitum, such that an agent who is at least a little uncertain about any normative principle will never be able to reach a rationally guided decision at all. This paper tries to solve this “metanormative regress” problem. I first elaborate and defend Brian Weatherson’s argument that the regress problem forces us to accept the view he calls normative externalism, according to which some norms are incumbent on an agent regardless of her beliefs. But, contra Weatherson, I argue that we need not accept externalism about first-order (e.g. moral) norms, thus closing off any question of what an agent should do in light of her normative beliefs. Rather, it is more plausible to ascribe external force to a single, second-order rational norm: the enkratic principle, correctly formulated. In the second half of the paper, I argue that this modest form of externalism can solve the regress problem. More specifically, I distinguish two regress problems, afflicting ideal and non-ideal agents respectively, and offer solutions to both.
Christian Tarsney—Non-identity, times infinity
Abstract:
This paper describes a new difficulty for consequentialist ethics in infinite worlds. Although infinite worlds in and of themselves have been thought to challenge aggregative consequentialism, I begin by arguing that, for agents who can only make a finite difference to an infinite world, there is a simple principle (namely, to compare pairs of worlds by summing the differences in value realized at each possible value location) that yields all the conclusions an aggregative consequentialist would intuitively want. But, if the world is not merely infinite in spatial extent but contains infinitely many value-bearing entities in our causal future, then this principle breaks down. Specifically, because our choices are likely to be “identity-affecting” with respect to all or nearly all the value-bearing entities in our causal future, any two options in a given choice situation will result in worlds whose sum of value differences is non-convergent and hence undefined. There is an apparently-natural anonymity principle that seemingly must be true if we are to make any comparisons at all between “infinite non-identity” worlds. But in combination with other very modest assumptions, this principle generates axiological cycles. From this cyclicity problem, I draw out several simple impossibility results suggesting that, if the population of the causal future is infinite, then we will have to pay a very high theoretical price to hang onto the idea that our actions matter from an impartial perspective.
Christian Tarsney—Vive la différence? Structural diversity as a challenge for metanormative theories
Abstract:
How should agents decide what to do when they’re uncertain about basic normative principles? Most answers to this question involve some form of intertheoretic value aggregation, i.e., some way of combining the rankings of options given by rival normative theories into a single ranking that tells an agent what to do given her uncertainty. An important obstacle to any form of intertheoretic value aggregation, however, is the structural diversity of normative theories: The rankings given by first-order theories, which serve as inputs to intertheoretic aggregation, may have any number of structures, including ordinal, interval, ratio, multidimensional, and (I claim) many more. But it is often not obvious how to combine rankings with different structures. In this paper, I survey and evaluate three general approaches to this problem. Structural depletion solves the problem by stripping theories of all but some minimum, universal structure for purposes of aggregation. Structural enrichment, on the other hand, adds structure to theories, e.g. by mapping ordinal rankings onto a cardinal scale. Finally, multi-stage aggregation aggregates classes of identically-structured theories first, then takes the result as input to one or more further stages of aggregation that combine larger classes of more distantly related theories. I tentatively defend multi-stage aggregation as the least bad of these options, but all three approaches have serious drawbacks. This “problem of structural diversity” needs more attention, both since it represents a serious challenge to the possibility of intertheoretic aggregation and since whatever approach we adopt will substantively constrain other aspects of our metanormative theories.
For PhD students who are not eligible for the scholarships or prizes because they are not at the University of Oxford, note that the Forethought Foundation for Global Priorities Research has just opened applications for the Global Priorities Fellowship, which is open to both economics and philosophy students.