What is a ‘broad intervention’ and what is a ‘narrow intervention’? Are we confusing ourselves?
Across the community it is common to hear distinctions drawn between ‘broad’ and ‘narrow’ interventions, though less so lately than in 2013/2014. For some imperfect context, see this blog post by Holden on ‘flow-through effects’. Typical classifications people might make would be:
Broader
GiveDirectly or the Against Malaria Foundation, with the goal of general human empowerment
Studying math
Promoting effective altruism
Improving humanity’s forecasting ability through prediction markets
Narrower
Specialising in answering one technical question about e.g. artificial intelligence or nanotechnology
Studying ‘crucial considerations’
Trying to stop Chris Christie becoming President in 2020
Trying to stop a war between India and Pakistan
All I want to do here is draw attention to the fact that there are multiple distinctions that should be drawn out separately so that we can have more productive conversations about the relative merits and weaknesses of different approaches. One way to do this is to make causal diagrams. I’ve made two below for illustrative purposes.
Below is a list of some possible things we might mean by narrow and broad, or related terms.
A long path to impact vs a short path to impact. Illustrated here.
Many possible paths to impact vs few possible paths to impact. Illustrated here.
The path functions in a wide variety of possible future scenarios vs path can function in only one possible (and conjunctive, or unlikely) future scenario. Illustrated amusingly here (though mixed in with other forms of narrowness as well).
The path has many necessary steps that are notably weak (that is, there’s a high probability they might not happen) vs the path has few such weak steps.
The path has a weak step that might not happen early on in its causal chain, vs the path has a weak step that might not happen later in its causal chain.
The path initially has only a few routes, but after one step opens up into many paths, vs the path initially has many paths but later on narrows down to a few.
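A toy model may make several of these distinctions concrete. Assuming the steps are independent, a single ‘narrow’ chain succeeds only if every step does, while a ‘broad’ plan with several independent paths succeeds if any one of them does. All probabilities below are hypothetical, chosen purely for illustration:

```python
import math

def chain_success(step_probs):
    """Probability a single causal chain works: every step must succeed."""
    return math.prod(step_probs)

def any_path_success(path_probs):
    """Probability that at least one of several independent paths works."""
    return 1 - math.prod(1 - p for p in path_probs)

# A 'narrow' plan: one long chain of five fairly weak steps.
narrow = chain_success([0.6, 0.7, 0.8, 0.6, 0.9])  # ~0.18

# A 'broad' plan: three independent paths, each itself a short chain.
broad = any_path_success([
    chain_success([0.8, 0.9]),
    chain_success([0.7, 0.8]),
    chain_success([0.6, 0.9]),
])  # ~0.94
```

With these made-up numbers the broad plan is about five times as likely to deliver anything at all, which is the intuition behind favouring many paths or few weak steps, before any adjustment for the size of the impact along each path.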
Other quick observations about this:
A ‘narrow’ approach on any of these definitions clearly has downsides, but these could be compensated for by greater ‘force’, e.g. the magnitude of the impact along a single chain being very large if the scheme works out.
It’s hard to know what should ‘count’ as a step and what doesn’t. How do we cleave these causal diagrams at their true joints? One rule of thumb for drawing out such diagrams in practice could be that a ‘step’ in the chain is something that is less than 99% likely to happen—anything higher than that can be ignored, though this too is vulnerable to arbitrariness in what you lump together as a single step.
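One reason the 99% rule of thumb is workable is that near-certain links barely change a chain's overall success probability, so little is lost by leaving them out of the diagram. A quick check with hypothetical numbers:

```python
import math

# Full chain, including two near-certain links (all numbers hypothetical).
full_steps = [0.995, 0.6, 0.999, 0.7]

# Simplified chain: drop any step that is at least 99% likely to happen.
salient_steps = [p for p in full_steps if p < 0.99]

full = math.prod(full_steps)           # ~0.417
simplified = math.prod(salient_steps)  # ~0.420
```

Here the simplification shifts the estimate by well under one percentage point, while halving the number of steps to draw, though as noted, how you lump events into a single ‘step’ remains somewhat arbitrary.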
I believe in the overwhelming importance of shaping the long-term future. In my view most causal chains that could actually matter are likely to be very long by normal standards. But they might at least have many paths to impact, or be robust (i.e. have few weak steps).
People who say they are working on broad, robust or short chains usually ignore the major uncertainties about whether the farther-out regions of the chain they are part of are positive, neutral or negative in value. I think this is dangerous and makes these plans less reliable than they superficially appear to be.
If any single step in a chain produces an output of zero, or negative expected value (e.g. your plan has many paths to increasing our forecasting ability, but it turns out that doing so is harmful), then the whole rest of that chain isn’t desirable.
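The forecasting example can be put in expected-value terms. Extending the earlier toy model, write the expected value of a plan as the probability its paths deliver the intermediate outcome, times the value of that outcome. If the outcome's value turns out to be negative, having many robust paths only makes the harm more likely; all numbers here are hypothetical:

```python
import math

def any_path_success(path_probs):
    """Probability that at least one of several independent paths works."""
    return 1 - math.prod(1 - p for p in path_probs)

# Three robust, independent paths to improving forecasting ability.
p_improve_forecasting = any_path_success([0.7, 0.6, 0.8])  # ~0.98

# Hypothetical value of the end state, under two scenarios.
ev_if_good = p_improve_forecasting * 10   # forecasting turns out beneficial
ev_if_bad = p_improve_forecasting * -10   # forecasting turns out harmful
```

The sign of the end state dominates: exactly the robustness that looks attractive in the good scenario makes the plan reliably bad in the harmful one.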
Most plans in practice seem to go through a combination of both narrow and broad sections (e.g. lobbying for change to education policy has a bottleneck around the change in the law; after that there may be many paths to impact).
I’m most excited about looking for ways to reduce future risks that can work in a wide range of scenarios—so-called ‘capacity building’.