Longtermism and Computational Complexity

Introduction

This post argues that longtermist effective altruism may not be action-guiding. This is bad news for a theory of how we ought to live. The crux of my argument is as follows: determining which long-term risks we ought to try to prevent or mitigate requires us to solve problems that may be computationally intractable for finite agents such as ourselves. I take my argument to establish that: i) the longtermist effective altruist helps themselves to the unjustified assumption that we are capable of making inferences with high computational complexity, and ii) this raises difficult challenges for the longtermist effective altruist program that have been under-appreciated to date. Responding to these challenges may require significant rethinking of the conceptual foundations of the longtermist effective altruist movement. In what follows, I’ll lay out my argument in more detail, consider some potential counterarguments to my central claims, and respond to those counterarguments. I’ll then offer some recommendations in light of my conclusions.

My Argument in Ordinary Language

In this section, I use ordinary language to state my argument that longtermist effective altruism is not action-guiding. In a later section, I will also give a more mathematical version of the same argument. I take both arguments to be valid and sound, although the more mathematical version allows the premises to be stated more precisely.

The ordinary-language version of the argument begins with the following premise, which I take to be a core tenet of longtermist effective altruism as it is defended by, among others, Ord (2020) and MacAskill (2022).

Prevent Possible Harms: For any event e that might occur in the future (including the very far future), there is a sufficiently large reduction in utility such that, if e’s occurrence would lead to said reduction, then we ought to take costly steps now to make e less likely, or mitigate the negative impacts of e.

For example, if a potential climate catastrophe could cause many vertebrate species to become extinct, then we ought to take steps to reduce the probability of said catastrophe, even if doing so requires us to incur significant costs.

The ethos expressed in Prevent Possible Harms is sometimes known as “fanaticism,” since it recommends that we should potentially incur very high costs now in order to lower the already-microscopic probability of some far-future event. The charge of fanaticism is sometimes seen as an objection to longtermist effective altruism, but I follow MacAskill and Greaves (2021, Sec. 8) in holding that these charges do not stick. If Prevent Possible Harms could feasibly guide our actions now, and also recommends fanaticism, then I think that we ought to be fanatics. Thus, my aim here isn’t to object to the content of Prevent Possible Harms per se. Rather, I’ll object here to the assumption that Prevent Possible Harms is capable of informing how actual human agents ought to live their lives.

The second premise in my argument to this end is the following:

Don’t Prevent Impossible Harms: For any event e that will not occur in the future, there is no sufficiently large reduction in utility such that, if e’s occurrence would lead to said reduction, then we ought to take costly steps now to make e less likely, or mitigate the negative impacts of e.

While it is not discussed as frequently in the longtermist and effective altruist literature, Don’t Prevent Impossible Harms is a truism for most theories of how we ought to act: if an event won’t occur, then we shouldn’t take steps to make it less likely, or mitigate its negative impacts. Moreover, Don’t Prevent Impossible Harms follows from the idea that “ought implies can” (Kant, 1781); if e won’t occur, then it’s not possible for us to make it any less likely, or to mitigate negative outcomes that occur because e occurs, and so we cannot be compelled to attempt to do so. To illustrate, if Venus were to suddenly deviate from its orbit tomorrow and collide with Earth, this would presumably lead to a very large aggregate reduction in utility on Earth. But Venus won’t do that, and so no matter how large this hypothetical reduction in utility might be, we shouldn’t take any costly steps now to prevent this hypothetical event from occurring.

My third and final premise describes a key limitation on our ability to predict whether an arbitrary future event will occur.

No Efficient Algorithm: There might not be an efficient algorithm that can take as input: i) a theory of the relationships between all possible events, and ii) any event e such that e’s occurrence would lead to a significant reduction in utility, and return the correct answer as to whether or not, according to the theory, e could occur.

In what follows, I will define the terms “theory” and “efficient algorithm” more precisely. For now, I note that No Efficient Algorithm is derived from an ordinary-language statement of a famous result in computational complexity theory due to Cooper (1990).

Equipped with these three premises, we are able to state the following proposition:

Proposition 1: If Prevent Possible Harms, Don’t Prevent Impossible Harms, and No Efficient Algorithm are all true, then there might not be an efficient algorithm that can take as input: i) a theory of the relationships between all possible events, and ii) any event e such that e’s occurrence would lead to a significant reduction in utility, and return the correct answer as to whether or not, according to the theory, we ought to take costly steps to prevent e or mitigate the negative impacts of e.

This proposition follows immediately from the definitions of its three antecedent conditions. It is on the basis of this result that I conclude that any theory committed to Prevent Possible Harms, which I take to include most versions of longtermist effective altruism, may be limited in its ability to be action-guiding. If there isn’t an efficient algorithm that tells us whether or not we ought to take steps to prevent or mitigate the effects of any potentially catastrophic event, then it seems that Prevent Possible Harms does not give us any practical advice as to how we ought to live our lives. The key assumption here is that if a maxim is meant to be generally action-guiding, then we need to be able to efficiently determine, for any given case, whether or not it recommends taking a particular action.

In what follows, I’ll unpack this argument in more mathematical detail. But first, I’ll need to provide some necessary technical background on computational complexity theory, probabilistic inference, and the computational complexity of probabilistic inference.

Technical Background

Computational Complexity

My discussion of computational complexity makes use of Turing’s (1936) now-famous theoretical framework defining a machine that computes the outputs of functions from their inputs. I won’t go into the details of Turing machines here, as they ultimately aren’t necessary for any of the arguments that I will give below. I’ll also use the term ‘Turing machine’ as a shorthand for ‘deterministic Turing machine.’

Computational complexity theory is a field of mathematics that studies how difficult it is for a computer (e.g., a Turing machine) to solve problems. A problem, for our purposes here, is a pair $(I, f)$, where $f$ is a function $f: I \to O$ from a set of possible inputs $I$ to a set of possible outputs $O$. A Turing machine solves a problem $(I, f)$ if, given any input $i$ in $I$, it eventually outputs $f(i)$ and then halts. Note that some problems cannot be solved by a Turing machine (e.g., if the function $f$ is not computable). We suppose that it is possible to measure the size of any input in $I$; that is, there is a function that tells us how large any input $i$ is. For example, if $I$ is a set of bit strings, then the size of $i$ might be the number of bits in the string. We also suppose that, for any problem $(I, f)$ and any Turing machine that solves it, we can measure the maximum amount of time that the Turing machine takes to solve the problem on inputs of a given size.

Equipped with these definitions, we can now define some complexity classes, which are sets of problems such that a problem’s membership in the set depends on how difficult that problem is to solve efficiently. The first is the complexity class $P$:

The Complexity Class $P$: A problem $(I, f)$ is in $P$ if and only if there is a Turing machine such that, given any input $i$ in $I$, the Turing machine outputs $f(i)$ and then halts, in an amount of time that is a polynomial function of the size of $i$.

Intuitively, if a problem $(I, f)$ is in $P$ and we use a Turing machine to find the value $f(i)$ for an input $i$ with size n, then the Turing machine will take some amount of time t to find the correct answer and then halt. The specific amount of time t will be determined by a polynomial function of n (e.g., a quadratic, cubic, or quartic function). This means that if we feed the same Turing machine some larger input $i'$, that Turing machine will take longer to return the value $f(i')$, but the difference between the time that it takes to compute $f(i')$, as compared to $f(i)$, will not be astronomical, since the time it takes to solve a problem in $P$ grows sub-exponentially with the size of the input.

When a problem is in $P$, there is a precise sense in which there is an “efficient algorithm” for solving that problem. A Turing machine can solve the problem in such a way that as the size of the input increases, the time it takes to solve the problem does not undergo a quantitative explosion, but instead increases at a manageable rate. A good intuition pump is to think of the time it might take you to read a 400-page book versus an 800-page book written in a similar style. If the 400-page book takes ten hours of active reading to complete, then the 800-page book might take twenty hours, or thirty hours, but it won’t take 1,000 hours of active reading. This is because reading is a task that humans, in general, are able to do efficiently.
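To make the contrast concrete, here is a minimal Python sketch (my own illustration, not part of the formal argument) of how step counts grow for a polynomial-time procedure versus an exponential-time one as inputs get larger.

```python
# Compare how step counts grow for polynomial vs. exponential algorithms.
# The specific functions (n**2 and 2**n) are illustrative stand-ins.

def polynomial_steps(n: int) -> int:
    """Steps taken by a hypothetical quadratic-time algorithm."""
    return n ** 2

def exponential_steps(n: int) -> int:
    """Steps taken by a hypothetical exponential-time algorithm."""
    return 2 ** n

for n in [10, 20, 40, 80]:
    print(f"n = {n:3d}   polynomial: {polynomial_steps(n):>8,d}   "
          f"exponential: {exponential_steps(n):,d}")

# Doubling n multiplies the quadratic count by 4, but squares the
# exponential count -- the "quantitative explosion" described above.
```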

A second important class of problems is the set of problems in $NP$. These are problems whose proposed solutions can be verified in polynomial time by a Turing machine. We define this complexity class more precisely as follows:

The Complexity Class $NP$: A problem $(I, f)$, with $f: I \to \{1, 0\}$, is in $NP$ if and only if there is a problem $(I \times C, f')$, where $C$ is a set of possible “hints” and $f(i) = 1$ if and only if $f'(i, c) = 1$ for some hint $c \in C$ whose size is polynomial in the size of $i$, that is in $P$.

Intuitively, a problem is in $NP$ if and only if, for any input-output pair, one can be given a “hint” that allows you to verify whether the output given is the correct one for that input, in an amount of time that scales polynomially with the size of the input. For instance, there is no known algorithm that solves the general problem of determining whether a finite set of integers has a subset that sums to zero in time polynomial in the size of the set. But, if we are given a pair $(N, M)$ containing the set of integers $N$ and a subset $M \subseteq N$, then we can check, in time polynomial in the size of $M$, whether or not all elements of $M$ sum to zero, by simply finding the sum of the elements of $M$. This allows us to verify that $N$ contains a subset of integers that sums to zero, if indeed it does. The set $M$ is the “hint” that allows us to verify our answer.
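As an illustration (a sketch of my own, not drawn from the formal results above), the following Python snippet shows the asymmetry between solving and verifying for this subset-sum example: checking a proposed subset $M$ takes time linear in its size, whereas the naive solver must examine every subset of $N$.

```python
from itertools import chain, combinations

def verify_hint(N: set[int], M: set[int]) -> bool:
    """Verify, in time linear in |M|, that M is a nonempty subset of N summing to zero."""
    return len(M) > 0 and M <= N and sum(M) == 0

def brute_force_solve(N: set[int]):
    """Search all 2^|N| subsets for one that sums to zero (exponential time)."""
    candidates = chain.from_iterable(combinations(N, k) for k in range(1, len(N) + 1))
    for subset in candidates:
        if sum(subset) == 0:
            return set(subset)
    return None

N = {-7, -3, 2, 5, 8}
M = {-7, 2, 5}                      # a candidate "hint"
print(verify_hint(N, M))            # True: quick to check
print(brute_force_solve(N))         # finds a zero-sum subset, but only by exhaustive search
```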

Clearly, $P \subseteq NP$. But one of the most beguiling open questions in mathematics is whether or not $P = NP$ (resolving this question currently comes with a $1 million prize). Most professionals working on the problem believe that $P \neq NP$ (see Rosenberger 2012), such that there are problems that can be verified but not solved in polynomial time.

The final class of problems that we will consider is the set of $NP$-Hard problems:

The $NP$-Hard Problems: A problem is $NP$-Hard if and only if, were it to be in $P$, then every problem in $NP$ would be in $P$.

Intuitively, one can think of $NP$-Hard problems as problems such that, if we could somehow solve them in polynomial time, then it would act as a key to unlock our ability to solve all of the problems in $NP$ in polynomial time. Crucially, if it is indeed the case that $P \neq NP$, then no $NP$-Hard problem can be solved in polynomial time.

In what follows, I’ll present the result, which I take to be crucial to my argument that longtermist effective altruism may not be action-guiding, that probabilistic inference is $NP$-Hard. But first, I’ll have to say a little more about what I mean by “probabilistic inference” in this context.

Probabilistic Inference

In this section, I’ll present some formal details on how I’m understanding probabilistic inference, as well as formalize the concept of a “theory,” as the term is used in my argument above.

A probability space is a triple $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a set of all possible worlds, $\mathcal{F}$ is a set of subsets of $\Omega$ closed under union, intersection, and complement, and $P: \mathcal{F} \to [0, 1]$ is a probability distribution such that $P(\Omega) = 1$, $P(\emptyset) = 0$, and for any disjoint sets $A$ and $B$ in $\mathcal{F}$, $P(A \cup B) = P(A) + P(B)$. Intuitively, since $\Omega$ is a set of possible worlds, the elements of $\mathcal{F}$ are possible events, to which $P$ assigns probabilities.

A binary random variable is a function $V: \Omega \to \{1, 0\}$. Any binary random variable $V$ is measurable with respect to a probability space $(\Omega, \mathcal{F}, P)$ if $\{\omega \in \Omega : V(\omega) = 1\} \in \mathcal{F}$ and $\{\omega \in \Omega : V(\omega) = 0\} \in \mathcal{F}$. For our purposes, we’ll treat any binary random variable $V$ as corresponding to a proposition in a natural language, with $V = 1$ if the proposition is true and $V = 0$ if it is false. We can assign a probability to the event that $V = v$, where $v \in \{1, 0\}$, using the formula $P(V = v) = P(\{\omega \in \Omega : V(\omega) = v\})$.

A propositional network is a pair $(\mathbf{V}, E)$, in which $\mathbf{V}$ is a set of binary random variables and $E$ is a set of edges, or ordered pairs, relating those variables. If $E$ contains an ordered pair $(V_{i}, V_{j})$, we say that $V_{i}$ is a parent of $V_{j}$. One can visualize a propositional network as a directed graph in which the nodes are binary random variables, connected by directed arrows indicating parent-child relationships.

A belief network is a pair $((\mathbf{V}, E), (\Omega, \mathcal{F}, P))$ consisting of a propositional network $(\mathbf{V}, E)$ and a probability space $(\Omega, \mathcal{F}, P)$ such that:

  1. All variables in $\mathbf{V}$ are measurable with respect to $(\Omega, \mathcal{F}, P)$.

  2. For any variable $V$ in $\mathbf{V}$ and any $v \in \{1, 0\}$, the probability $P(V = v)$ is given by the equation: $P(V = v) = \sum_{\mathbf{pa}} P(V = v \mid \mathbf{Pa}(V) = \mathbf{pa})\, P(\mathbf{Pa}(V) = \mathbf{pa})$, where $\mathbf{Pa}(V)$ is the set containing all and only $V$’s parents in the belief network, and each $\mathbf{pa}$ is a possible assignment of truth values to all variables in $\mathbf{Pa}(V)$.

Thus, in a belief network, the probability of the truth or falsehood of some proposition is determined solely by the conditional distribution over the corresponding propositional variable, given its parents, and the prior distribution over each of those parents. A belief network formalizes the intuitive idea of a “theory,” in that it specifies how, and the extent to which, the truth of each proposition in some salient set depends on the truth of the other propositions in that set. As such, it also provides a blueprint for making probabilistic inferences, in that it allows us to determine, based on which propositions we take to be true, what our degree of belief in the truth of other propositions should be. A belief network is a special case of what is also called a Bayesian network (see Pearl, 1988).
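To make the definition concrete, here is a small Python sketch (a toy example of my own, not drawn from the sources cited) of a three-variable belief network, with the marginal probability of one variable computed by brute-force enumeration over all assignments. That enumeration is exactly the step that blows up exponentially as networks grow.

```python
from itertools import product

# A toy belief network over binary variables with edges A -> B -> C.
# The prior P(A = 1) and the conditional tables are purely illustrative.
p_a1 = 0.3
p_b1_given_a = {1: 0.8, 0: 0.1}
p_c1_given_b = {1: 0.5, 0: 0.05}

def joint(a: int, b: int, c: int) -> float:
    """Joint probability P(A=a, B=b, C=c), factored along the network's edges."""
    pa = p_a1 if a == 1 else 1 - p_a1
    pb = p_b1_given_a[a] if b == 1 else 1 - p_b1_given_a[a]
    pc = p_c1_given_b[b] if c == 1 else 1 - p_c1_given_b[b]
    return pa * pb * pc

# Marginal P(C = 1), obtained by summing the joint over all 2^3 assignments.
# With n variables this loop has 2^n iterations -- the source of intractability.
p_c1 = sum(joint(a, b, c) for a, b, c in product([0, 1], repeat=3) if c == 1)
print(f"P(C = 1) = {p_c1:.4f}")   # 0.1895 for these illustrative numbers
```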

The Computational Complexity of Probabilistic Inference

Consider the following problem, which Cooper (1990) calls “PIBNETD” (an abbreviation for “Probabilistic Inference in Belief NETworks—Decision”):

PIBNETD: For any belief network and any variable $V$ within that belief network, output $1$ if $P(V = 1) > 0$ and output $0$ otherwise.

More formally, PIBNETD is the problem $(I, f)$, where $I$ is the set of all triples $((\mathbf{V}, E), (\Omega, \mathcal{F}, P), V)$ such that the pair $((\mathbf{V}, E), (\Omega, \mathcal{F}, P))$ is a belief network and $V \in \mathbf{V}$. The problem is solved by outputting $f(i) = 1$ if and only if $P(V = 1) > 0$, with $f(i) = 0$ otherwise.

A Turing machine that solves this problem is able to determine, for any probabilistic theory of the relationships between propositions, whether or not any proposition expressible in that theory has positive probability of being true.

As foreshadowed above, Cooper (1990) proves the following:

Proposition 2: PIBNETD is $NP$-Hard, where the size of the input is given by the number of variables in the belief network.

This comes with the immediate implication that if $P \neq NP$, then there is no algorithm that can take as input any belief network and return an answer as to whether a given proposition has positive probability, while always outputting that answer in time polynomial in the number of variables in the belief network. This important result is central to the mathematical argument, given in the next section, that longtermist effective altruism may not be action-guiding.
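To give a feel for why the problem is this hard, here is a toy Python sketch, loosely inspired by the kind of reduction Cooper uses (his actual construction differs in detail): a Boolean satisfiability question is encoded as a belief network in which each logical variable is a root node that is true with probability 1/2 and a “formula” node is true exactly when every clause is satisfied. Deciding whether the formula node has positive probability of being true is then the same as deciding whether the formula is satisfiable.

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable_name, required_value).
# Example formula: (x1 OR NOT x2) AND (x2 OR x3).
clauses = [[("x1", 1), ("x2", 0)], [("x2", 1), ("x3", 1)]]
variables = ["x1", "x2", "x3"]

def formula_node(assignment: dict) -> int:
    """Deterministic node: 1 iff every clause contains at least one satisfied literal."""
    return int(all(any(assignment[v] == val for v, val in clause) for clause in clauses))

# Each root variable is independently true with probability 1/2, so
# P(formula node = 1) = (number of satisfying assignments) / 2^n.
n = len(variables)
p_true = sum(
    formula_node(dict(zip(variables, values)))
    for values in product([0, 1], repeat=n)
) / 2 ** n

print(f"P(formula node = 1) = {p_true}")
print("satisfiable" if p_true > 0 else "unsatisfiable")
# Answering the positive-probability question for networks built this way answers
# satisfiability, which is why no general efficient algorithm is expected to exist.
```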

My Argument in More Mathematical Language

I begin by re-writing Prevent Possible Harms in terms of belief networks and probabilities.

Prevent Possible Harms*: Let $((\mathbf{V}, E), (\Omega, \mathcal{F}, P))$ be a belief network representing some agent or group of agents’ beliefs about the probabilistic relationships between some set of propositions. Let $u$ be a function such that $u(V)$ represents the change in aggregate utility that is realized if $V = 1$ rather than $V = 0$. If $P(V = 1) > 0$, then there is a real number $b$ such that, if $u(V) \leq b$, then those agents ought to take costly steps now to make it less likely that $V = 1$, or to mitigate the negative impacts of the event(s) that occur if $V = 1$.

To illustrate, suppose that, according to our best theory of AI development (represented as a belief network), there is positive probability of an AI-driven human extinction event in the next hundred years. Presumably, the change in utility that is realized by this extinction event occurring, rather than not occurring, is sufficiently negative that the event’s having positive probability warrants taking costly action now to make this event less likely, or to mitigate its potential negative effects.

Next, I’ll provide an analogous re-writing of Don’t Prevent Impossible Harms:

Don’t Prevent Impossible Harms*: Let $((\mathbf{V}, E), (\Omega, \mathcal{F}, P))$ be a belief network representing some agent or group of agents’ beliefs about the probabilistic relationships between some set of propositions. Let $u$ be a function such that $u(V)$ represents the change in aggregate utility that is realized if $V = 1$ rather than $V = 0$. If $P(V = 1) = 0$, then there is no real number $b$ such that, if $u(V) \leq b$, then those agents ought to take costly steps now to make it less likely that $V = 1$, or to mitigate the negative impacts of the event(s) that occur if $V = 1$.

This formalizes the idea that if an event has zero probability of occurring, then no matter how bad it would be if that event were to occur, we are not compelled to take costly steps now to lower the probability of that event occurring in the future.

I now define the following problem:

PIBNETD-Harms: For any belief network and any variable $V$ within that belief network such that $u(V) \leq b$ for some sufficiently negative real number $b$, output $1$ if $P(V = 1) > 0$ and output $0$ otherwise.

PIBNETD-Harms is just PIBNETD restricted to those propositional variables whose truth would lead to a sufficiently negative outcome. From this definition, the following corollary immediately follows:

Corollary 3: If Prevent Possible Harms* and Don’t Prevent Impossible Harms* are both true, then we can determine, for any belief network and any variable $V$ within that belief network such that $u(V) \leq b$ for some sufficiently negative real number $b$, whether we ought to take steps now to make it less likely that $V = 1$, or to mitigate the negative impacts of the event(s) that occur if $V = 1$, only if we can solve PIBNETD-Harms.

Corollary 3 establishes the conditions under which Prevent Possible Harms* and Don’t Prevent Impossible Harms* are jointly action-guiding. Namely, if we can solve PIBNETD-Harms, then these two tenets of longtermist action can help us decide what to do now. Unfortunately, as I’ll argue below, we have reason to suspect that we cannot solve PIBNETD-Harms efficiently.
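The structure of the argument can be summarized in a short Python sketch (entirely my own framing, with hypothetical inputs): the longtermist decision rule below is trivial to state, but it calls an exact positive-probability oracle, and it is precisely that oracle which, as I argue below, cannot in general be implemented efficiently.

```python
from typing import Callable

def ought_to_act(
    has_positive_probability: Callable[[str], bool],  # the PIBNETD-Harms oracle
    utility_if_true: Callable[[str], float],          # u(V): change in utility if V = 1
    badness_threshold: float,                         # the threshold b
    proposition: str,
) -> bool:
    """Prevent Possible Harms* / Don't Prevent Impossible Harms* as a decision rule."""
    if utility_if_true(proposition) > badness_threshold:
        return False  # not bad enough to trigger the maxim
    # All of the difficulty is hidden in this call: deciding positive probability
    # for an arbitrary belief network is NP-Hard (see Proposition 4 below).
    return has_positive_probability(proposition)

# Toy usage with stipulated (not inferred) inputs:
print(ought_to_act(
    has_positive_probability=lambda v: v == "AI-driven extinction within a century",
    utility_if_true=lambda v: -1e12,
    badness_threshold=-1e6,
    proposition="AI-driven extinction within a century",
))
```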

To this end, I’ll introduce a new premise, with no analog in the ordinary-language argument:

Independence of Bad Outcomes*: For any belief network $((\mathbf{V}, E), (\Omega, \mathcal{F}, P))$, any function $u$, and any real number $b$, there is no Turing machine that takes as input the belief network and the real number $b$ and outputs the proper subset of variables $\mathbf{V}' \subset \mathbf{V}$ such that if $u(V) \leq b$, then $V \in \mathbf{V}'$, and halts.

In other words, there is no way for any finite intelligent being to determine, using only structural and probabilistic properties of a belief network, which proper subset of propositions represented in the network are such that their truth could amount to a very bad outcome. Under this assumption, the badness of a proposition is independent of its structural or probabilistic relationships to other propositions. As an intuition pump, suppose that you were able to view a large spreadsheet of propositions, each labelled just with a number. You also have a conditional probability table showing you the probabilistic relationships between all these propositions. It is hard to see how, given just this information, you could identify a proper subset of the propositions such that, if they were true, things could be especially bad. The structural and probabilistic information contained in the spreadsheet does not, on its own, tell us anything about how we value the truth or falsehood of the propositions depicted.

With this new premise in hand, I can state the following proposition:

Proposition 4: If Independence of Bad Outcomes* is true, then PIBNETD-Harms is $NP$-Hard, where the size of the input is given by the number of variables in the belief network.

Proof. If it were the case that, for any given belief network $((\mathbf{V}, E), (\Omega, \mathcal{F}, P))$, any function $u$, and any real number $b$, only certain variables in $\mathbf{V}$ could be such that $u(V) \leq b$, then it would be the case that a Turing machine could take the belief network and the number $b$ as input and output the relevant subset of variables that could be such that $u(V) \leq b$. Thus, the truth of Independence of Bad Outcomes* entails that for any belief network $((\mathbf{V}, E), (\Omega, \mathcal{F}, P))$, any function $u$, and any real number $b$, any $V \in \mathbf{V}$ can be such that $u(V) \leq b$. Thus, solving PIBNETD-Harms requires us to solve the problem of determining, for any belief network and any variable $V$ within that belief network, whether $P(V = 1) > 0$. This is just the problem PIBNETD, which is known to be $NP$-Hard, where the size of the input is given by the number of variables in the belief network. And so PIBNETD-Harms is also $NP$-Hard where the size of the input is measured in the same way.

This proposition leads directly to the following corollary:

Corollary 5: If $P \neq NP$, then PIBNETD-Harms cannot be solved in time polynomial in the number of variables in the belief network.

Which, in conjunction with Corollary 3, leads to the key claim of this post:

Corollary 6: If $P \neq NP$, and if Prevent Possible Harms* and Don’t Prevent Impossible Harms* are both true, then there is no algorithm that takes as input any belief network and any variable $V$ within that belief network such that $u(V) \leq b$ for some sufficiently negative real number $b$, and determines, in time polynomial in the number of variables in the belief network, whether we ought to take steps now to make it less likely that $V = 1$, or to mitigate the negative impacts of the event(s) that occur if $V = 1$.

It is on the basis of this corollary that I conclude that Prevent Possible Harms* and Don’t Prevent Impossible Harms*, and their ordinary-language counterparts, may not be generally action-guiding. My inclusion of the qualifier “may” reflects that my conclusion ultimately depends on the conjecture that $P \neq NP$. But the fact that this conjecture may be true (indeed, a single polynomial-time algorithm that solves an $NP$-Hard problem would establish that $P = NP$, and yet no such algorithm has ever been found) should give us serious pause about whether we have a general ability to separate those potential catastrophes that have no probability of occurring from those that have positive probability of occurring. This is bad news for a worldview that seeks to tell us what we ought to do, and which insists that extreme measures may need to be taken to prevent low-probability events with potentially catastrophic effects.

In the next section, I’ll consider some counterarguments to the claims that I have presented here, and respond to each.

Counterarguments and Responses

Counterargument 1: Salient Variables Only

Counterargument: The results given above show that no general-purpose algorithm can efficiently receive as input any probabilistic theory and any proposition in that theory, and then tell us whether that proposition has positive probability of being true. But this is more than the longtermist effective altruist needs. Rather, the longtermist effective altruist just needs a special-purpose algorithm to tell them whether or not, on a given probabilistic theory, a few salient propositions (e.g., that a catastrophic climate change event, nuclear war, or AI-driven extinction event will occur) have positive probability of being true. Nothing in the argument given above rules out the possibility of such an algorithm. Moreover, we have independent reasons for believing that each of these events has positive probability, and so it seems that such a special-purpose algorithm already exists.

Response: This response assumes that when we attempt to solve PIBNETD-Harms, the fact that a propositional variable is such that $u(V) \leq b$ (and so is salient from the perspective of avoiding significant harms) will entail that the same variable is also one such that, however it appears in a belief network, we will be able to efficiently determine whether or not it has positive probability. It would be fantastic if this were true, but I see no reason to believe that it is. In general, we may be able to design special-purpose algorithms that efficiently calculate the probability distribution over a variable when that variable occupies a special position in a belief network; namely, when it is in a sparse region of that network, without too many edges connecting variables (see Pearl 1986). As expressed in Independence of Bad Outcomes*, it seems plausible that there is no way of inferring, just from the structural and probabilistic properties of a belief network, what the moral valence of a given propositional variable is. By the same token, it would be strange if the special badness of a propositional variable’s being true would imply that the propositional variable in question must occupy a special position within a belief network.

That said, this response is correct to point out that there are some catastrophic events that longtermist effective altruists tell us we ought to take costly steps to avoid for which we have good reason to assign positive probability. Take the possibility of a catastrophe due to compounding effects of anthropogenic climate change. Here, we have an abundance of well-defined models of Earth’s climate, each supported by empirical data. While making high-resolution predictions of the behavior of the climate on 100-year timescales is not possible (see Frigg et al., 2013), these models are in agreement that there is positive probability of extremely harmful climate change that warrants very costly action now (Hoegh-Guldberg et al., 2019). What is noteworthy about this case is not that the possibility of catastrophic climate change is a particularly salient variable, but rather that on the much narrower set of belief networks that are supported by existing climate data, we are able to efficiently reason that a climate catastrophe has positive probability of occurring.

However, I will also note that the imperative to avoid and mitigate the possibility of catastrophic climate change is not uniquely highlighted by longtermist effective altruists. Indeed, we have good evidence that we are already experiencing significant negative impacts of climate change (Letchner 2021), such that there is nothing especially longtermist about taking steps now to reduce climate change. The threat of climate change is also a concern that is highlighted by many organizations that are not explicitly effective altruist (e.g., the United Nations or the Democratic Party in the U.S.). Much the same could be said about the existential risks due to nuclear war; there are models, well-supported by data, that demonstrate the consequences of multiple detonations of the kinds of nuclear weapons that we know for a fact are currently deployable. That these weapons are deployable now means that there is nothing “longtermist” about wanting to avoid nuclear war. Moreover, nuclear de-proliferation is a concern of many organizations that are not explicitly effective altruist (indeed, many of the same ones concerned with climate change).

On the other hand, consider a putative existential risk like the possibility of malevolent artificial general intelligence. The possibility that the creation of greater-than-human intelligence could pose an existential risk to humanity was first seriously theorized in a thought experiment due to Good (1966), and has been an animating cause of many longtermist effective altruists since at least the publication of Bostrom’s Superintelligence (2014). However, unlike in the case of climate change, the proposition that we have data-supported models that entail that there is a positive probability of a catastrophe brought about by artificial intelligence is much less plausible. Here is Ord (2020), discussing his motivation for entertaining the possibility of an AI-driven catastrophic event, one that we ought to take costly action to prevent:

The most plausible existential risk would come from success in AI researchers’ grand ambition of creating agents with a general intelligence that surpasses our own. But how likely is that to happen, and when? In 2016, a detailed survey was conducted of more than 300 top researchers in machine learning. Asked when an AI system would be “able to accomplish every task better and more cheaply than human workers,” on average they estimated a 50 percent chance of this happening by 2061 and a 10 percent chance of it happening as soon as 2025 (pp. 139-140).

In light of the results presented above, we should be very skeptical of the epistemic value of individuals’ estimates of the probability of any event, especially when those estimates are not transparently based on a data-supported model. We know that in general, Turing machines cannot take an arbitrary probabilistic theory of how their environment works and efficiently and accurately compute whether some proposition in that theory has positive probability of being true. As such, we have no reason to think that any human being, even a top ML researcher, can use their best attempt at a theory of how intelligence and computing work to compute whether it is even possible for an AI system to accomplish every task better and more cheaply than human workers.

Counterargument 2: Salient Belief Networks Only

Counterargument: Along the same lines of the previous counterargument, it is not the case that longtermist effective altruists need a general-purpose efficient algorithm for telling them whether any probabilistic theory assigns positive probability to certain catastrophic events. Rather, one only needs an algorithm that works on those belief networks that are well-supported by our best scientific theories of natural systems. The success of science to date suggests that these kinds of efficient algorithms do exist, and can be used to detect positive-probability disaster events like catastrophic climate change or an AI-driven extinction event.

Response: Many of the disaster events that are most salient in longtermist effective altruist discourse are mesoscale phenomena. Their occurrence is governed by the dynamics of objects larger than the particles whose behavior is very well-predicted by quantum mechanics, and smaller than the astronomical bodies whose behavior is similarly well-predicted by general relativity. It is precisely at this mid-size range that science has a harder time making accurate predictions and even generating good models. Thus, we are not in a position now to tell which belief networks relating propositions about mesoscale phenomena are well-supported by our best science.

Again, there are cases where we do have well-supported scientific models that assign positive probability to catastrophic events like climate change. And in the case of nuclear war, we have historical evidence of the destruction that nuclear weapons can cause. We can and should heed the recommendations of those models. But for more esoteric putative existential risks like an AI-driven extinction event, it is far less clear that we have any scientific basis for restricting ourselves to any class of belief networks when representing the relevant aspects of our environment. Indeed, we still do not know whether humanity-threatening superintelligence is even physically possible. Without an answer to this question, we lack a basis for restricting ourselves to any one class of belief networks for determining whether such an event has positive probability of occurring. Lacking such a basis, and also lacking any efficient algorithm for determining whether such an event has positive probability according to any input belief network, we must, I conclude, accept that our epistemic position with respect to whether such events have positive probability of occurring is extremely limited.

Counterargument 3: Limits on Harms and Costs

Counterargument: It may be that there is a finite lower bound $b$ on the function $u$, such that for any propositional variable, there is only so much negative change in utility that can result from that proposition’s being true. Moreover, there may also be a finite lower bound $c$ on the cost (expressed as a positive real number) of taking any action to lower the probability of any event. Under these circumstances, the longtermist effective altruist does not need to compute whether $P(V = 1) > 0$ for any V, but must instead be able to compute whether $P(V = 1) > c/|b|$. If this inequality does not hold, then $P(V = 1) \cdot |b| \leq c$, and so the expected magnitude of the greatest possible decrease in utility that would occur if $V = 1$ is less than or equal to the minimal cost that we can incur to reduce the probability that $V = 1$, such that we should be either indifferent or averse to incurring these costs now to reduce the probability that $V = 1$ in the future. If the inequality does hold, then the cost of acting now to reduce the probability that $V = 1$ may be worth incurring.

Response: Efficiently determining whether $P(V = 1) > c/|b|$ for any $c$ and $b$ is still an $NP$-Hard problem. To show this, we first define a problem:

PIBNETD-THRESHOLD: For some $p \in (0, 1)$, any belief network, and any variable $V$ within that belief network, output $1$ if $P(V = 1) > p$ and output $0$ otherwise.

We then prove the proposition:

Proposition 7: PIBNETD-THRESHOLD is $NP$-Hard for any $p \in (0, .5)$ when the size of the input is defined in terms of the number of variables in the belief network.

Proof. Shimony (1994) shows that for any $q \in (0, 1)$, the following problem is $NP$-Hard when the size of the input is defined in terms of the number of variables in the belief network:

MAPBNETD (Maximum A-posteriori Probability in a Belief NETwork—Decision): For any belief network, output $1$ if there is an assignment $\mathbf{v}$ of values to all variables in the belief network such that $P(\mathbf{V} = \mathbf{v}) > q$, and output $0$ otherwise.

Suppose that we could solve PIBNETD-THRESHOLD in time polynomial in the size of the input belief network. Then for the input variable $V$, we could determine in polynomial time either that $P(V = 1) > p$ or that $P(V = 0) > .5$ (since $p < .5$ and $P(V = 0) = 1 - P(V = 1)$). Let $\mathbf{v}$ be an assignment of values to the variables in $\mathbf{V}$. If we could determine in polynomial time either that $P(V = 1) > p$ or that $P(V = 0) > .5$, then we also know either that $\sum_{\mathbf{v} : V = 1} P(\mathbf{V} = \mathbf{v}) > p$ or that $\sum_{\mathbf{v} : V = 0} P(\mathbf{V} = \mathbf{v}) > .5$. This would entail in turn that there exists a $\mathbf{v}$ such that either $P(\mathbf{V} = \mathbf{v}) > p/2^{n-1}$ or $P(\mathbf{V} = \mathbf{v}) > .5/2^{n-1}$, where n is the cardinality of $\mathbf{V}$. This would provide a polynomial-time solution to MAPBNETD for $q = \min\{p, .5\}/2^{n-1}$. This shows that PIBNETD-THRESHOLD is $NP$-Hard.

Even if there is a lower bound on the probability of a catastrophic event such that the longtermist effective altruist will recommend taking costly steps now to lower the probability of that event, it can be safely assumed that this lower bound is less than .5. Thus, this result shows that restrictions on the badness of any outcome or the minimal costs we can incur to lower the probability of some outcome occurring won’t get the longtermist effective altruist out of their epistemic predicament.

Counterargument 4: Why Not Approximate?

Counterargument: Dagum and Luby (1993) show that if one is able to introduce some randomness into an algorithmic procedure, then one can use simulation techniques to estimate the value of the probability $P(V = 1)$ for any variable $V$ in any belief network to accuracy level $\epsilon$, with failure probability $\delta$. That is, we can run an algorithm that takes as input a belief network and a variable, and outputs a value $Z$ such that the probability that $Z$ is in the interval $[P(V = 1) - \epsilon, P(V = 1) + \epsilon]$ is at least $1 - \delta$. This algorithm runs in time polynomial in the size of the belief network, and the values $1/\epsilon$ and $1/\delta$. Thus, we can efficiently find an arbitrarily tight bound on an estimate of $P(V = 1)$ with minimal failure probability. This is good enough for the longtermist effective altruist to make action-guiding predictions about what will happen in the future.
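As a concrete, deliberately simple illustration of the kind of stochastic estimation at issue, here is a forward-sampling sketch for the same toy chain network used in the earlier example. It is my own minimal stand-in for simulation-based estimation, not the specific algorithm of Dagum and Luby.

```python
import random

# The same toy chain A -> B -> C as before; the numbers are illustrative.
p_a1 = 0.3
p_b1_given_a = {1: 0.8, 0: 0.1}
p_c1_given_b = {1: 0.5, 0: 0.05}

def sample_network(rng: random.Random) -> tuple:
    """Draw one joint sample by sampling each node given its parents."""
    a = int(rng.random() < p_a1)
    b = int(rng.random() < p_b1_given_a[a])
    c = int(rng.random() < p_c1_given_b[b])
    return a, b, c

def estimate_p_c1(num_samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the unconditional probability P(C = 1)."""
    rng = random.Random(seed)
    hits = sum(sample_network(rng)[2] for _ in range(num_samples))
    return hits / num_samples

print(estimate_p_c1(100_000))   # close to the exact value 0.1895 computed earlier
```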

Response: In the same paper, Dagum and Luby note that the algorithm described in this counterargument only works if we do not allow ourselves to condition on any events in our belief network. That is, they also show that there is not a similarly efficient estimation procedure that takes as input any belief network, any variable $V$ in that network, and any assignment of values $\mathbf{v}'$ to some subset of variables $\mathbf{V}'$ within the network, and returns an estimate of $P(V = 1 \mid \mathbf{V}' = \mathbf{v}')$ with accuracy level $\epsilon$ and failure probability $\delta$. I take it that longtermist effective altruists do not only want to estimate the unconditional values of probabilities, but also want to be able to adopt a particular probabilistic theory of the world, make observations about that world, and use those observations to accurately update their estimates of the probability of future events. Dagum and Luby’s result shows that this cannot be done efficiently in the general case.

In a later paper, Dagum and Luby (1997) do provide an efficient sampling algorithm for solving the estimation problem described immediately above. However, that algorithm adopts the assumption that there is no assignment of values $\mathbf{v}$ to variables in the belief network such that $P(\mathbf{V} = \mathbf{v}) = 0$. That is, every conjunction of propositions is assigned positive probability of being true and positive probability of being false. This means that if the longtermist effective altruist wants to use such an algorithm in a way that is action-guiding, they will first have to assume the existence of the kinds of limits on harms and costs discussed above. This may be controversial in its own right. But even if we can make this assumption, we note that lower bounds on any sampling algorithm for estimating probabilities mean that the extra effort needed to lower margins of error to acceptable rates can still be computationally expensive.

Canetti et al. (1994) prove that in the best-case scenario, sampling algorithms for estimating a probability are lower-bounded by a constant factor of $\ln(1/\delta)/\epsilon^{2}$, where $\delta$ is again the probability of failure and $\epsilon$ is the margin for error. In general, this means we can make efficient gains in the accuracy of our inferences. Holding $\delta$ fixed, if it takes approximately 1 minute to generate an estimate with a given margin for error $\epsilon$, then achieving a margin for error of $\epsilon/2$ will take four minutes. However, as the margin for error approaches zero, efficiency gains become more expensive. Reducing the margin for error to $\epsilon/500$ would require us to run the same algorithm for approximately 174 days. When deciding whether to expend significant resources to reduce the probability of a low-probability event, tight margins for error and low failure rates will be required, rendering statistical inference using sampling algorithms increasingly intractable, even if we are willing to make the assumption that all events have positive probability.
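The arithmetic behind these figures follows directly from the $1/\epsilon^{2}$ scaling; the short Python sketch below (my own back-of-the-envelope calculation, assuming a one-minute baseline) reproduces them.

```python
# Relative runtime under the 1/eps^2 sample-complexity scaling, holding the
# failure probability fixed and assuming a 1-minute baseline run.
baseline_minutes = 1.0

def minutes_needed(shrink_factor: float) -> float:
    """Runtime after shrinking the baseline margin for error by the given factor."""
    return baseline_minutes * shrink_factor ** 2

for factor in [1, 2, 500]:
    m = minutes_needed(factor)
    print(f"margin for error / {factor:<3} -> {m:>11,.0f} minutes (~{m / (60 * 24):,.1f} days)")

# margin for error / 2   ->           4 minutes
# margin for error / 500 ->     250,000 minutes (~173.6 days)
```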

Counterargument 5: Possibility vs. Probability

Counterargument: The longtermist effective altruist can reject the move, implicit in the restatement of Prevent Possible Harms and Don’t Prevent Impossible Harms as Prevent Possible Harms* and Don’t Prevent Impossible Harms*, that ‘possible’ and ‘impossible’ are equivalent to ‘has positive probability’ and ‘has probability zero.’ To illustrate, suppose that one believes that a bus is equally likely to arrive at any time between three and four o’clock, and that the set of times between three and four o’clock can be represented using the set of real numbers between zero and one. On this set-up, the bus has probability zero of arriving at any specific time, and yet (let’s assume), there is a specific time that the bus arrives. (See Williamson (2007) for further arguments that probability-zero events can occur.) Moreover, suppose that it would be very bad for the bus to arrive at exactly 3:30, and perfectly fine for it to arrive at all other times. Even though this event has zero probability, it seems that one would be justified in incurring some costs to prevent the bus from possibly arriving at exactly 3:30 (e.g., by blocking the road to the bus stop). By the same token, if a possible future event is bad enough, then we may still want to incur costs now, even if that event currently has probability zero.

Response: My only response here is to say that while this is a deeply interesting counterargument from a theoretical perspective, the conceptual problem that it points to is very challenging. For the longtermist effective altruist to stand behind this counterargument, they would need to provide a decision theory that recommends, in a systematic way, taking costly action to avoid the potential bad consequences of some zero-probability events but not others, while simultaneously allowing us to rank the choice-worthiness of actions whose expected value depends on a probability distribution over positive-probability events. From a philosophical standpoint, this amounts to a fascinating project, but one that would need to be significantly further developed before it constitutes a serious response to the challenge that I have presented here.

Recommendations

Good-Old-Fashioned-EA

In Singer’s (1972) drowning-child thought experiment, the reader is asked whether the cost of ruining one’s nice new clothes could ever override the moral imperative to save a child drowning in a shallow pool. The answer is clearly ‘no.’ Singer then argues via analogy that the globally affluent ought to incur far greater costs than they currently do to improve the lot of the global poor. This argument has had a massive influence on the effective altruist movement, and in my opinion, it gets things mostly right. Importantly, it does not depend in any way on our predictive capacities. In the thought experiment, the child is drowning right in front of us, such that there is no reason not to assign significant credence to the possibility that they will drown. We can also be confident that we would be able to save the child, should we jump into the pool after them. Thus, the idea that we should do more to efficiently and effectively improve the lot of those living more precarious lives is not subject to any of the objections put forward in this post.

In light of this, one recommendation to take away from this piece is that the effective altruist movement should be wary of aligning itself too strongly with the view that the most important thing we can be doing now is offsetting the risk of future catastrophic events. Indeed, the thrust of my argument has been that intellectual humility requires deep skepticism about whether we are currently in an epistemic position to accurately assess what steps we should be taking now to help those who will exist after we are long gone. Instead, I’d advocate a return to the roots of effective altruism as a movement dedicated to doing more and doing better to help those who are in danger now. Note that this is all consistent with an effective altruist movement that dedicates significant resources and attention to risks like anthropogenic climate change and nuclear war, both of which pose a clear danger to those of us living now, and to those who will live in the near future.

Models, not Forecasts

As described above, longtermist effective altruists occasionally appeal to expert testimony to estimate the probability of far-future events for which we lack well-supported empirical models. One also finds support in some pockets of the effective altruist movement for efforts like Metaculus, which aim to crowdsource forecasts of future events that may lead to significant catastrophes. As argued above, I think that the provable $NP$-Hardness of probabilistic inference in belief networks supports deep skepticism about the value of these kinds of predictions.

By contrast, where we have empirically-supported, demonstrably-tractable mathematical models that assign positive probability to some future event, we should take that event much more seriously as a real possibility. This leads to the recommendation that effective altruists should devote fewer resources to forecasting and more to modeling. If effective altruists suspect that a given complex system may result in a catastrophic event, the way to confirm this suspicion is to begin modeling that system, comparing the model with available data, refining that model, and using the model to tractably produce increasingly reliable estimates of the probabilities of these potential catastrophes. Briefly, to the extent that there is a longtermist effective altruist program, that program should be significantly dedicated to the tractable probabilistic modeling of complex systems.

Computational Capabilities

My conclusions in this post have a conditional form: if $P \neq NP$, then the kind of inferential capacities needed for longtermist effective altruism to be action-guiding may not be achievable by beings such as ourselves. But if $P = NP$, then nothing that I have argued here applies, and the longtermist effective altruist’s epistemic situation may be far better than I have made it out to be. Moreover, the only evidence that I have presented for the claim that $P \neq NP$ is exactly the kind of expert testimony and survey data with respect to which I have advocated skepticism. So, there is a case to be made that the longtermist effective altruist actually has a lot to gain by focusing at least some of their effort on settling the question of whether $P = NP$. If this question were settled in the affirmative, it could open up incredible opportunities for our ability to forecast and prevent catastrophic events.

There are at least two streams of work that would be effective in achieving this end. The first is technical work aimed at settling the question from a theoretical perspective (i.e., proving a theorem that settles the question one way or the other). The second is work aimed at designing novel algorithms (recall that if a single algorithm solves an $NP$-Hard problem in polynomial time, then $P = NP$ and the floodgates are open). However, improving our algorithms in this way would also raise many of the AI-related risks that I have argued we currently lack the ability to accurately forecast. To this end, it is likely a positive feature of current effective altruist practice that funding for computational research goes to organizations, like the Machine Intelligence Research Institute, that place safety and alignment at the heart of their mission. This holds even if, as I argue above, we have little reason now to take predictions regarding the possibility of an AI-driven extinction event seriously.

Conclusion

Nothing that I have said here should be taken as an argument that it is wrong, in itself, to extend moral concern to those persons who will exist in the future, even the distant future. Rather, my conclusion is that we have good reasons to adopt significant epistemic humility with respect to the risks those future people might face. These reasons, which are based on results in computational complexity theory, have been under-explored to date. This leads me to the conclusion that our moral concern with future people, justifiable as it may be, nevertheless fails to be action-guiding in some crucial cases.