Broad vs. narrow interventions

Last edit: 8 May 2021 7:25 UTC by MichaelA

The philosopher Nick Beckstead has distinguished between two different ways of influencing the long-term future: broad interventions, which “focus on unforeseeable benefits from ripple effects”, and narrow (or targeted) interventions, which “aim for more specific effects on the far future, or aim at a relatively narrow class of possible ripple effects.” (Beckstead 2013a)

Clarifying the distinction

The chain of causation connecting an intervention with its intended effect can be analysed along two separate dimensions. One dimension concerns the number of causal steps in the chain. Another dimension concerns the number of causal paths in the chain. In one sense of the term, broad interventions involve both many steps and many paths, while narrow interventions involve both few steps and few paths. For example, the broad intervention of promoting peace can reduce existential risk in countless different ways, each of which involves a long sequence of events culminating in the risk reduction. By contrast, the narrow intervention of distributing bed nets saves lives in just one way (by protecting people from mosquito bites) and in just a few steps (distribution > installation > protection).

However, interventions with many causal steps may have few causal paths, and interventions with many causal paths may have few causal steps. It is therefore convenient to have separate terms for each of these dimensions of variation. Some effective altruists reserve the terms “narrow” and “broad” for interventions with few or many causal paths, and use the terms “direct” and “indirect” for interventions with few or many causal steps (Cotton-Barratt 2015).

Assessing broad and narrow interventions

A number of arguments in favor of either broad or narrow interventions have been offered (e.g. Beckstead 2013b). A common consideration in favor of broad interventions is their apparently superior historical track record, a point made independently by several authors at around the same time.[1] Beckstead himself writes (Beckstead 2013a: 145):

Suppose that in 1500 CE, someone wrote a forward-looking novel that featured a technology from the present day, such as a telephone. And suppose another person read this novel and then set for himself the goal that, in the future, people utilized rapid long-distance communication as effectively as possible. He would know that if making telephones was actually a good idea, future people would be in a much better position to find a way to create telephones and use them effectively. He would know very little about telephones or how they might be discovered, so it would not make sense for him to do something very targeted, such as drafting potential telephone designs. It would make more sense, I believe, for him to help in very broad ways (such as becoming a teacher or fighting political and religious threats to the advance of science), thereby empowering future generations to discover and effectively utilize rapid long-distance communication.

Similarly, Brian Tomasik writes (2013):

imagine an effective altruist in the year 1800 trying to optimize his positive impact. He would not know most of modern economics, political science, game theory, physics, cosmology, biology, cognitive science, psychology, business, philosophy, probability theory, computation theory, or manifold other subjects that would have been crucial for him to consider. If he tried to place his bets on the most significant object-level issue that would be relevant centuries later, he’d almost certainly get it wrong. I doubt we would fare substantially better today at trying to guess a specific, concrete area of focus more than a few decades out. [...] What this 1800s effective altruist might have guessed correctly would have been the importance of world peace, philosophical reflection, positive-sum social institutions, and wisdom. Promoting those in 1800 may have been close to the best thing this person could have done, and this suggests that these may remain among the best options for us today.

And Gwern Branwen writes (Branwen 2014):

Imagine someone in England in 1500 who reasons the same way about x-risk: humanity might be destroyed, so preventing that is the most important task possible. He then spends the rest of his life researching the Devil and the Apocalypse. Such research is, unfortunately, of no value whatsoever unless it produces arguments for atheism demonstrating that that entire line of enquiry is useless and should not be pursued further. But as the Industrial and Scientific Revolutions were just beginning, with exponential increases in global wealth and science and technology and population, ultimately leading to vaccine technology, rockets and space programs, and enough wealth to fund all manner of investments in x-risk reduction, he could instead have made a perhaps small but real contribution by contributing to economic growth by work & investment or making scientific discoveries.

In response to these claims, Toby Ord argues that comparisons with previous centuries may be misleading, because the bulk of the existential risk to which humanity is currently exposed is anthropogenic in nature, and originates in technologies developed only since around the mid-20th century. Narrow interventions aimed specifically at mitigating the risks posed by such technologies should thus be expected to accomplish much more than similar efforts in previous centuries. Ord also points out that broad interventions receive tens of thousands of times more funding than do narrow interventions, so even people with reasonable differences about the relative merits of broad and targeted interventions should favor the latter, given their much higher neglectedness (Ord 2020: ch. 6).

Bibliography
Beckstead, Nick (2013a) On the Overwhelming Importance of Shaping the Far Future, Doctoral thesis, Rutgers University.

Beckstead, Nick (2013b) How to compare broad and targeted attempts to shape the far future, July 13.

Branwen, Gwern (2014) Optimal existential risk reduction investment, July 17.

Cotton-Barratt, Owen (2015) Comment on “What is a ‘broad intervention’ and what is a ‘narrow intervention’? Are we confusing ourselves?”, Effective Altruism Forum, December 19.

Koehler, Arden, Benjamin Todd, Robert Wiblin & Keiran Harris (2020) Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong, The 80,000 Hours Podcast, September.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Smart, J. J. C. (1973) An outline of a system of utilitarian ethics, in J. J. C. Smart & Bernard Williams (eds.) Utilitarianism: For and Against, Cambridge: Cambridge University Press, pp. 1–74.

Tomasik, Brian (2013) Charity cost-effectiveness in an uncertain world, Center on Long-Term Risk, October 28.

Wiblin, Robert (2015) What is a “broad intervention” and what is a “narrow intervention”? Are we confusing ourselves?, Effective Altruism Forum, December 19.

Related entries

civilizational collapse | existential risk factor | indirect long-term effects

[1] The philosopher J. J. C. Smart made this point decades earlier: “Could Jeremy Bentham or Karl Marx (to take two very different political theorists) have foreseen the atom bomb? Could they have foreseen automation? Can we foresee the technology of the next century?” (Smart 1973: 64)

Related posts

What is a ‘broad intervention’ and what is a ‘narrow intervention’? Are we confusing ourselves? (Robert Wiblin, Effective Altruism Forum, 19 December 2015)

Crucial questions for longtermists (MichaelA, Effective Altruism Forum, 29 July 2020)