
Broad vs. narrow interventions


The philosopher Nick Beckstead has distinguished between two different ways of influencing the long-term future: broad interventions, which “focus on unforeseeable benefits from ripple effects”, and narrow (or targeted) interventions, which “aim for more specific effects on the far future, or aim at a relatively narrow class of possible ripple effects.”[1]

Clarifying the distinction

The chain of causation connecting an intervention with its intended effect can be analysed along two separate dimensions. One dimension concerns the number of causal steps in the chain. The other concerns the number of distinct causal paths connecting the intervention to the effect. In one sense of the terms, broad interventions involve both many steps and many paths, while narrow interventions involve both few steps and few paths. For example, the broad intervention of promoting peace can reduce existential risk in countless different ways, each of which involves a long sequence of events culminating in the risk reduction. By contrast, the narrow intervention of distributing bed nets saves lives in just one way (by protecting people from mosquito bites) and in just a few steps (distribution > installation > protection).

However, interventions with many causal steps may have few causal paths, and interventions with many causal paths may have few causal steps. It is therefore convenient to have separate terms for each of these dimensions of variation. Some effective altruists reserve the terms “narrow” and “broad” for interventions with few or many causal paths, and use the terms “direct” and “indirect” for interventions with few or many causal steps.[2]

Assessing broad and narrow interventions

A number of arguments have been offered in favor of both broad and narrow interventions.[3] A common consideration in favor of broad interventions is their apparently superior historical track record, a point made independently by several authors at around the same time.[4] Beckstead himself writes:[5]

Suppose that in 1500 CE, someone wrote a forward-looking novel that featured a technology from the present day, such as a telephone. And suppose another person read this novel and then set for himself the goal that, in the future, people utilized rapid long-distance communication as effectively as possible. He would know that if making telephones was actually a good idea, future people would be in a much better position to find a way to create telephones and use them effectively. He would know very little about telephones or how they might be discovered, so it would not make sense for him to do something very targeted, such as drafting potential telephone designs. It would make more sense, I believe, for him to help in very broad ways (such as becoming a teacher or fighting political and religious threats to the advance of science), thereby empowering future generations to discover and effectively utilize rapid long-distance communication.

Similarly, Brian Tomasik writes:[6]

imagine an effective altruist in the year 1800 trying to optimize his positive impact. He would not know most of modern economics, political science, game theory, physics, cosmology, biology, cognitive science, psychology, business, philosophy, probability theory, computation theory, or manifold other subjects that would have been crucial for him to consider. If he tried to place his bets on the most significant object-level issue that would be relevant centuries later, he’d almost certainly get it wrong. I doubt we would fare substantially better today at trying to guess a specific, concrete area of focus more than a few decades out. [...] What this 1800s effective altruist might have guessed correctly would have been the importance of world peace, philosophical reflection, positive-sum social institutions, and wisdom. Promoting those in 1800 may have been close to the best thing this person could have done, and this suggests that these may remain among the best options for us today.

And Gwern Branwen writes:[7]

Imagine someone in England in 1500 who reasons the same way about x-risk: humanity might be destroyed, so preventing that is the most important task possible. He then spends the rest of his life researching the Devil and the Apocalypse. Such research is, unfortunately, of no value whatsoever unless it produces arguments for atheism demonstrating that that entire line of enquiry is useless and should not be pursued further. But as the Industrial and Scientific Revolutions were just beginning, with exponential increases in global wealth and science and technology and population, ultimately leading to vaccine technology, rockets and space programs, and enough wealth to fund all manner of investments in x-risk reduction, he could instead have made a perhaps small but real contribution by contributing to economic growth through work & investment or by making scientific discoveries.

In response to these claims, Toby Ord argues that comparisons with previous centuries may be misleading, because the bulk of the existential risk to which humanity is currently exposed is anthropogenic in nature, and originates in technologies developed only since around the mid-20th century. Narrow interventions aimed specifically at mitigating the risks posed by such technologies should thus be expected to accomplish much more than similar efforts in previous centuries. Ord also points out that broad interventions receive tens of thousands of times more funding than do narrow interventions, so even people with reasonable differences about the relative merits of broad and targeted interventions should favor the latter, given their much higher neglectedness.[8]

Further reading

Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, Doctoral thesis, Rutgers University.

Koehler, Arden, Benjamin Todd, Robert Wiblin & Keiran Harris (2020) Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong, The 80,000 Hours Podcast, September.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Wiblin, Robert (2015) What is a “broad intervention” and what is a “narrow intervention”? Are we confusing ourselves?, Effective Altruism Forum, December 19.

Related entries

civilizational collapse | existential risk factor | indirect long-term effects

  1. ^

    Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, Doctoral thesis, Rutgers University.

  2. ^

    Wiblin, Robert (2015) What is a “broad intervention” and what is a “narrow intervention”? Are we confusing ourselves?, Effective Altruism Forum, December 19.
  3. ^

    See, for example, Beckstead, Nick (2013) How to compare broad and targeted attempts to shape the far future, July 13.

  4. ^

    The philosopher J. J. C. Smart made this point decades earlier: “Could Jeremy Bentham or Karl Marx (to take two very different political theorists) have foreseen the atom bomb? Could they have foreseen automation? Can we foresee the technology of the next century?” (Smart, J. J. C. (1973) An outline of a system of utilitarian ethics, in J. J. C. Smart & Bernard Williams (eds.) Utilitarianism: For and Against, Cambridge: Cambridge University Press, pp. 1–74, p. 64)

  5. ^

    Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, Doctoral thesis, Rutgers University.
  6. ^

    Tomasik, Brian (2013) Charity cost-effectiveness in an uncertain world, Center on Long-Term Risk, October 28.

  7. ^
  8. ^

    Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, ch. 6.
