What consequences?

This is the first in a series of posts exploring consequentialist cluelessness and its implications for effective altruism:

  • This post describes cluelessness & its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.

  • The second post considers a potential reply to concerns about cluelessness.

  • The third post examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?

  • The fourth post discusses how we might do good while being clueless to an important extent.

My prior is that cluelessness presents a profound challenge to effective altruism in its current instantiation, and that we need to radically revise our beliefs about doing good such that we prioritize activities that are robust to moral & empirical uncertainty.

My goal in writing this piece is to elucidate this position, or to discover why it’s mistaken. I’m posting in serial form to allow more opportunity for forum readers to change my mind about cluelessness and its implications.


By “cluelessness”, I mean the possibility that we don’t have a clue about the overall net impact of our actions.[1] Another way of framing this concern: when we think about the consequences of our actions, how do we determine what consequences we should consider?

First, some definitions. The consequences of an action can be divided into three categories:

  • Proximate consequences – the effects of an action on its intended object(s), occurring soon after the action. Relatively easy to observe and measure.

  • Indirect consequences – the effects of an action on unintended object(s), occurring soon after the action. These could also be termed “cross-stream” effects. Relatively difficult to observe and measure.

  • Long-run consequences – the effects of an action that occur much later, on both intended and unintended objects. These could also be termed “downstream” effects. Impossible to observe and measure in advance; long-run consequences can only be estimated.[2]


Effective altruist approaches towards consequences

EA-style reasoning addresses consequentialist cluelessness in one of two ways:

1. The brute-good approach – collapsing the consequences of an action into a proximate “brute-good” unit, then comparing the aggregate “brute-good” consequences of multiple interventions to determine the intervention with the best (brute good) consequences.

    • For example, GiveWell uses “deaths averted” as a brute-good unit, then converts other impacts of the intervention being considered into “deaths-averted equivalents”, then compares interventions to each other using this common unit (a toy sketch of this arithmetic follows below, after the list).

    • This approach is common among the cause areas of animal welfare, global development, and EA coalition-building.

2. The x-risk reduction approach – simplifying “do the actions with the best consequences” into “do the actions that yield the most existential-risk reduction.” Proximate & indirect consequences are only considered insofar as they bear on x-risk; the main focus is on the long-run: whether or not humanity will survive into the far future.

    • Nick Bostrom makes this explicit in his essay, Astronomical Waste: “The utilitarian imperative ‘Maximize expected aggregate utility!’ can be simplified to the maxim ‘Minimize existential risk!’”

    • This approach is common within the x-risk reduction cause area.
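
To make the brute-good arithmetic concrete, here is a minimal sketch in Python. The intervention names and every figure below are invented for illustration, and other_dae stands in for whatever non-mortality impacts have already been converted into deaths-averted equivalents; GiveWell’s actual models are considerably more involved.

```python
# Minimal sketch of the brute-good approach. All names and numbers
# are hypothetical; this is not GiveWell's actual model.

interventions = {
    # cost in USD; deaths averted; other impacts already converted
    # into "deaths-averted equivalents" (other_dae)
    "bednet_distribution": {"cost": 1_000_000, "deaths_averted": 220, "other_dae": 30},
    "cash_transfers":      {"cost": 1_000_000, "deaths_averted": 15,  "other_dae": 120},
}

def cost_per_equivalent(iv):
    """Cost per deaths-averted equivalent, counting only proximate effects."""
    return iv["cost"] / (iv["deaths_averted"] + iv["other_dae"])

# Rank interventions by cost-effectiveness in the common proximate unit.
for name, iv in sorted(interventions.items(), key=lambda kv: cost_per_equivalent(kv[1])):
    print(f"{name}: ${cost_per_equivalent(iv):,.0f} per deaths-averted equivalent")
```

Note what never enters this calculation: indirect & long-run consequences. The rest of this post argues that the omission is not innocuous.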

EA focus can be imagined as a bimodal distribution – EA either considers only the proximate effects of an intervention, ignoring its indirect & long-run consequences; or considers only the very long-run effects of an intervention (i.e. to what extent the intervention reduces x-risk), considering all proximate & indirect effects only insofar as they bear on x-risk reduction.[3]

Consequences that fall between these two peaks of attention are not included in EA’s moral calculus, nor are they explicitly determined to be of negligible importance. Instead, they are mentioned in passing, or ignored entirely.

This is problematic. It’s likely that for most interventions, these consequences compose a substantial portion of the intervention’s overall impact.


Cluelessness and the brute-good approach

The cluelessness problem for the brute-good approach can be stated as follows:

Due to the difficulty of observing and measuring indirect & long-run consequences of interventions, we do not know the bulk of the consequences of any intervention, and so cannot confidently compare the consequences of one intervention to another. Comparing only the proximate effects of interventions assumes that proximate effects compose the majority of interventions’ impact, whereas in reality the bulk of an intervention’s impact is composed of indirect & long-run effects which are difficult to observe and difficult to estimate.[4]

The brute-good approach often implicitly assumes symmetry of non-proximate consequences (i.e. for every indirect & long-run consequence, there is an equal and opposite consequence such that indirect & long-run consequences cancel out and only proximate consequences matter). This assumption seems poorly supported.[5]
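
To state the assumption in notation (my own shorthand, not drawn from the sources cited here): if the expected value of an action $a$ decomposes as

$$E[U(a)] = E[\text{prox}(a)] + E[\text{indirect}(a)] + E[\text{long-run}(a)],$$

then the symmetry assumption is that the non-proximate terms approximately cancel,

$$E[\text{indirect}(a)] + E[\text{long-run}(a)] \approx 0,$$

so that ranking actions by overall expected value reduces to ranking them by $E[\text{prox}(a)]$ alone. Nothing in the brute-good methodology itself guarantees this cancellation.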

It might be thought that indirect & long-run consequences can be surfaced as part of the decision-making process, then included in the decision-maker’s calculus. This seems very difficult to do in a believable way (i.e. a way in which we feel confident that we’ve uncovered all crucial considerations). I will consider this issue further in the next post of this series.

Some examples follow, to make the cluelessness problem for the brute-good approach salient.

Example: baby Hitler

Consider the position of an Austrian physician in the 1890s who was called to tend to a sick infant, Adolf Hitler.

Considering only proximate effects, the physician should clearly have treated baby Hitler and made efforts to ensure his survival. But the picture is clouded when indirect & long-run consequences are added to the calculus. Perhaps letting baby Hitler die (or even committing infanticide) would have been better in the long run. Or perhaps the German zeitgeist of the 1920s and 30s was such that the terrors of Nazism would have been unleashed even absent Hitler’s leadership. Regardless, the decision to minister to Hitler as a sick infant is not straightforward when indirect & long-run consequences are considered.

A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend to would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.

Example: bednet distributions in unstable regions

The Against Malaria Foundation (AMF) funds bed net distributions in developing countries, with the goal of reducing malaria incidence. In 2017, AMF funded its largest distribution to date, over 12 million nets in Uganda.

Uganda has had a chronic problem with terror groups, notably the Lord’s Resistance Army operating in the north and Al-Shabab carrying out attacks in the capital. Though the country is believed to be relatively stable at present, there remain non-negligible risks of civil war or government overthrow.

Considering only the proximate consequences, distributing bednets in Uganda is probably a highly cost-effective method of reducing malaria incidence and saving lives. But this assessment is muddied when indirect and long-run effects are also considered.

Perhaps saving the lives of young children results in increasing the supply of child-soldier recruits for rebel groups, leading to increased regional instability.

Perhaps importing & distributing millions of foreign-made bed nets disrupts local supply chains and breeds Ugandan resentment toward foreign aid.

Perhaps stabilizing the child mortality rate during a period of fundamentalist-Christian revival increases the probability of a fundamentalist-Christian value system becoming locked in, which could prove problematic further down the road.

I’m not claiming that any of the above are likely outcomes of large-scale bed net distributions. The claim is that the above are all possible effects of a large-scale bed net distribution (each with a non-negligible, unknown probability), and that given the many possible effects like these, we are prospectively clueless about the overall impact of a large-scale bed net distribution.

Example: direct-action animal-welfare interventions

Some animal welfare activists advocate direct action – the tactic of directly confronting problematic food-industry practices.

In 2013, animal-welfare activists organized a “die-in” at a San Francisco Chipotle. At the die-in, activists confronted Chipotle consumers with claims about the harm inflicted on farm animals by Chipotle’s supply chain.

The die-in likely had the proximate effect of raising awareness of animal welfare among the Chipotle consumers and employees who were present during the demonstration. Increasing social awareness of animal welfare is probably positive according to consequentialist perspectives that give moral consideration to animals.

However, if considering indirect and long-run consequences as well, the overall impact of direct action demonstrations like the die-in is unclear. Highly confrontational demonstrations may result in the animal welfare movement being labeled “radical” or “dangerous” by the mainstream, thus limiting the movement’s influence.

Confrontational tactics may also be controversial within the animal welfare movement, causing divisiveness and potentially leading to a schism, which could harm the movement’s efficacy.

Again, I’m not claiming that the above are likely effects of direct-action animal-welfare interventions. The claim is that indirect & long-run effects like this each have a non-negligible, unknown probability, such that we are prospectively clueless regarding the overall impact of the intervention.


Cluelessness and the existential risk reduction approach

Unlike the brute-good approach, which tends to overweight the impact of proximate effects and underweight that of indirect & long-run effects, the x-risk reduction approach focuses almost exclusively on the long-run consequences of actions (i.e. how they affect the probability that humanity survives into the far future). Interventions can be compared according to a common criterion: the amount by which they are expected to reduce existential risk.

While I think cluelessness poses less difficulty for the x-risk reduction approach, it remains problematic. The cluelessness problem for the x-risk reduction approach can be stated as follows:

Interventions aimed at reducing existential risk have a clear criterion by which to make comparisons: “which intervention yields a larger reduction in existential risk?” However, because the indirect & long-run consequences of any specific x-risk intervention are difficult to observe, measure, and estimate, arriving at a believable estimate of the amount of x-risk reduction yielded by an intervention is difficult. Because such estimates are hard to come by, we are somewhat clueless when trying to compare the impact of one x-risk intervention to another.

An example follows to make this salient.

Example: stratospheric aerosol injection to blunt impacts of climate change

Injecting sulfate aerosols into the stratosphere has been put forward as an intervention that could reduce the impact of climate change (by reflecting sunlight away from the earth, thus cooling the planet).

However, it’s possible that stratospheric aerosol injection could have unintended consequences, such as cooling the planet so much that the surface is rendered uninhabitable (incidentally, this is the background story of the film Snowpiercer). Because aerosol injection is relatively cheap to do (on the order of tens of billions USD), there is concern that small nation-states, especially those disproportionately affected by climate change, might deploy aerosol injection programs without the consent or foreknowledge of other countries.

Given this strategic landscape, the effects of calling attention to stratospheric aerosol injection as a cause are unclear. It’s possible that further public-facing work on the intervention results in international agreements governing the use of the technology, which would most likely reduce existential risk along this vector.

However, it’s also possible that further public-facing work on aerosol injection makes the technology more discoverable, revealing the technology to decision-makers who were previously ignorant of its promise. Some of these decision-makers might be inclined to pursue research programs aimed at developing a stratospheric aerosol injection capability, which would most likely increase existential risk along this vector.

It is difficult to arrive at believable estimates of the probability that further work on aerosol injection yields an x-risk reduction, and of the probability that further work yields an x-risk increase (though more granular mapping of the game-theoretic and strategic landscape here would increase the believability of our estimates).

Taken together, then, it’s unclear whether public-facing work on aerosol injection yields an x-risk reduction on net. (Note too that keeping work on the intervention secret may not straightforwardly reduce x-risk either, as no secret research program can guarantee 100% leak prevention, and leaked knowledge may have a more negative effect than the same knowledge made freely available.)

We are, to some extent, clueless regarding the net impact of further work on the intervention.
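
To see how this plays out numerically, here is a minimal Monte Carlo sketch. Every probability range and magnitude below is invented for illustration; the point is the structure of the problem, not the particular numbers.

```python
import random

random.seed(0)

def net_xrisk_effect():
    """One draw of the net x-risk effect of public-facing aerosol-injection work.

    Positive = x-risk reduced, negative = x-risk increased.
    All ranges are invented for illustration.
    """
    # Chance that publicity leads to international agreements governing use.
    p_agreement = random.uniform(0.05, 0.5)
    # Chance that publicity reveals the technology to actors who then
    # pursue a unilateral deployment capability.
    p_discovery = random.uniform(0.05, 0.5)
    # Magnitudes of the resulting x-risk change (arbitrary units).
    reduction = random.uniform(0.0, 1.0)
    increase = random.uniform(0.0, 1.0)
    return p_agreement * reduction - p_discovery * increase

draws = [net_xrisk_effect() for _ in range(100_000)]
share_positive = sum(d > 0 for d in draws) / len(draws)
print(f"Share of draws in which the work reduces x-risk on net: {share_positive:.0%}")
```

With ranges this wide the share lands near 50%: the model cannot even settle the sign of the net effect. Until intentional effort narrows the inputs – the “more granular mapping” mentioned above – expected-value comparisons between x-risk interventions inherit this sign-uncertainty.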


Where to, from here?

It might be claimed that, although we start out being clueless about the consequences of our actions, we can grow more clueful by way of intentional effort & investigation. Unknown unknowns can be uncovered and incorporated into expected-value estimates. Plans can be adjusted in light of new information. Organizations can pivot as their approaches run into unexpected hurdles.

Cluelessness, in other words, might be very tractable.

This is the claim I will consider in the next post. My prior is that cluelessness is quite intractable, and that despite best efforts we will remain clueless to an important extent.

The topic definitely deserves careful examination.

Thanks to members of the Mather essay discussion group for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to my personal blog.


Footnotes

[1]: The term “cluelessness” is not my coinage; I am borrowing it from academic philosophy. See in particular Greaves 2016.

[2]: Indirect & long-run consequences are sometimes referred to as “flow-through effects,” a term which, as far as I can tell, does not make a clean distinction between temporally near effects (“indirect consequences”) and temporally distant effects (“long-run consequences”). This distinction seems interesting, so I will use “indirect” & “long-run” rather than “flow-through effects.”

[3]: Thanks to Daniel Berman for making this point.

[4]: More precisely, the brute-good approach assumes that indirect & long-run consequences will either:

  • Be negligible

  • Cancel each other out via symmetry (see footnote 5)

  • On net point in the same direction as the proximate consequences (see Cotton-Barratt 2014: “The upshot of this is that it is likely interventions in human welfare, as well as being immediately effective to relieve suffering and improve lives, also tend to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy.”)

[5]: See Greaves 2016 for discussion of the symmetry argument, and in particular p. 9 for discussion of why it’s insufficient for cases of “complex cluelessness.”