Keeping it Real
Finding a global maximum in the benefit/cost function is the task of the effective altruist. I argue that this process is destabilised when infinite values are permitted. First, I characterise (and slightly caricature) effective decision-making. Second, I describe three cases where infinities arise, with examples: costs that tend to zero (deworming), benefits that tend to positive infinity (artificial intelligence), and benefits that tend to negative infinity (climate change). Third, I offer suggestions for bounding these values and thereby stabilising the decision function.
‘Effective’ in effective altruism refers to the way decisions are made about what to do with altruistic intent. The benefits of altruism can be defined in a variety of ways, appealing to utilitarianism, human rights, fairness, or other concepts of justice. For now we can simply say that there is a benefit the altruist intends. Meanwhile, there are costs. Costs include money, time, effort, and side-effects. Since the purpose is making decisions, uncertainty is also a cost: a more uncertain option carries a higher risk of an opportunity cost, the benefit missed out on by inadvertently choosing a less ideal option. For our purposes we can lump all of these dimensions together as costs, and assume that the altruist makes their best effort to capture all of the costs associated with each option. Thus, with benefits and costs defined and estimated (including uncertainty), the altruist computes the benefit/cost of each option, looks for the global maximum, exhausts that option until another becomes optimal, and so on. I will refer to this maximisation of benefit/cost as the ‘decision function’. This characterisation of effective decision-making becomes unstable, that is, non-finite, when the denominator tends to zero or the numerator to infinity.
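Concretely, the procedure can be sketched in a few lines of Python. This is a minimal sketch under strong assumptions: each option is reduced to a single scalar benefit and a single scalar cost (uncertainty already folded into the cost), and the option names and numbers are invented for illustration, not real estimates.

    # A minimal sketch of the 'decision function' described above. The
    # option names and numbers are illustrative assumptions only.
    def decision_function(benefit, cost):
        return benefit / cost  # blows up as cost tends to zero

    options = {
        "bednets": {"benefit": 100.0, "cost": 10.0},
        "cash transfers": {"benefit": 80.0, "cost": 20.0},
    }

    # Rank the options and take the global maximum, as the altruist does.
    ranked = sorted(options.items(),
                    key=lambda kv: decision_function(**kv[1]),
                    reverse=True)
    best_name, best = ranked[0]
    print(best_name, decision_function(**best))  # bednets 10.0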
In the first of the three cases, consider what happens when the costs are thought to tend to zero. Since the function is benefit/cost, its value tends to infinity so long as the benefit is finite, regardless of its size. An example of this scenario is mass deworming in schools. Where worm awareness and treatment for acute infections are already in place, such evidence as exists suggests that school-based deworming has only a small effect on health and other outcomes. However, characterising the costs as virtually zero has inflated the decision function to values that exceed all other options; deworming often tops the charts as the most effective altruistic option. Note that this is despite high levels of uncertainty about the effects: a near-zero denominator multiplied by a large uncertainty factor is still near zero.
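To make the inflation concrete, here is a toy calculation (the numbers are invented for the sketch, not estimates of deworming's actual benefit or cost):

    # Illustrative numbers only: a small benefit over a near-zero cost
    # dwarfs a large benefit over a moderate cost.
    small_benefit, near_zero_cost = 0.1, 1e-6
    large_benefit, moderate_cost = 100.0, 10.0

    print(small_benefit / near_zero_cost)  # 100000.0
    print(large_benefit / moderate_cost)   # 10.0

    # Scaling the near-zero cost by a large uncertainty factor barely
    # matters: the option still dominates the ranking.
    uncertainty = 1000.0
    print(small_benefit / (near_zero_cost * uncertainty))  # 100.0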
A second instability arises when the benefit of an action tends to infinity. In such a case, only infinite costs can bound the decision function in real terms. The potential of artificial general intelligence is an example of a benefit (though also a risk) that can be characterised as infinite: once an intelligence exceeds human intelligence it could, in theory, design systems of ever-increasing capacity without limit. As a consequence, it becomes impossible to weigh other options against this potential.
We observe a similar destabilisation when the benefit tends to negative infinity. This arises, for example, whenever we consider effective altruistic options in the context of climate change, since the long-term societal costs of climate change are often thought to be so large as to be effectively infinite. This follows from the severity of climate change for human life, combined with the ‘longtermist’ view that gives weight to the trillions of potential future humans (as Will MacAskill argues in his book What We Owe the Future: A Million-Year View). With infinite negative benefit, only infinite costs can bring the decision function back to the real.
In each of these examples, the decision function (benefit/cost) tends to positive or negative infinity and is no longer bounded. One might wonder whether this matters. It matters because the decision function becomes unstable. When costs are thought to tend to zero and the benefit is small, the decision function is highly sensitive to any information that could turn the costs positive. If, for example, we learnt that school-based deworming was harming other school-based health programmes, the cost would turn positive and the decision function could plummet. Allowing infinite positive benefit drives the decision function skyward no matter how the costs multiply, such that a turn to the finite (for example, evidence that AI might not deliver as hoped) would rapidly unravel the decision function. Infinite positive or negative benefits imply that only infinite costs can bring the decision function into the realm of the real. On the one hand this can lead to apathy, defeatism, and an apocalyptic version of longtermism that stymies action (more likely with negative benefit, as with climate change); on the other, it can create an insatiable cause that demands all possible resources (and more).
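The loss of discriminating power is visible even in floating-point arithmetic; a toy illustration, not an estimate of anything:

    # With an infinite numerator, finite costs cannot discriminate between
    # options: the ratio is infinite whatever the cost.
    inf = float("inf")
    print(inf / 10.0)  # inf
    print(inf / 1e9)   # inf: multiplying the costs changes nothing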
Finite bounds are possible, stabilising the decision function. For the first case, we can acknowledge that zero-cost interventions are a fiction. There is no free lunch; were an option genuinely costless, it would already have happened. In the case of school-based deworming, perhaps we are missing costs that are not immediately apparent: the costs of delivering the pills, for example, or the effect of ‘crowding out’ other educational interventions where national and local governments and schools have finite bandwidth for new ideas. Accounting for all costs will bring the decision function back to the real.
Bounds can also be set on the benefits of potentially limitless innovations such as artificial intelligence: setting appropriate discount rates on the utility of future generations limits the benefit both in the number of people who gain and in the extent of technological improvement being considered. For example, if we restrict the question of investing in AI to benefits for people alive now, using technology that will be available in the next 20 years, then this altruistic option is no longer infinite but finite.
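To see why discounting bounds the benefit: with per-period benefit b and discount factor d < 1, the stream b + b*d + b*d^2 + ... converges to the finite value b/(1 - d). A minimal sketch, with illustrative numbers:

    # Discounting bounds an otherwise unbounded stream of future benefits.
    # b is the benefit per period, d the discount factor (d < 1); the
    # numbers and the 20-period horizon are illustrative assumptions.
    def discounted_total(b, d, horizon=None):
        if horizon is None:
            return b / (1 - d)  # closed form of the infinite geometric series
        return sum(b * d**t for t in range(horizon))

    print(discounted_total(1.0, 0.97))      # infinite horizon, finite total: ~33.3
    print(discounted_total(1.0, 0.97, 20))  # the 20-year restriction above: ~15.2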
Similarly, infinite negative benefit is usually driven by multiplying the utility losses experienced by an unbounded number of future generations (shortened lives, for example). We can counter this with appropriate, democratically set discounting of future lives lost. While it might seem rational to grant the lives of people yet unborn the same rights as people alive now, that is neither credible nor, more importantly, how people actually think. Heavily discounting future life after a number of generations, such that it falls well below the value of, say, animals alive now, will bound the decision function. This might seem radical, but there is little evidence from popular behaviour that we put anything more than a cursory value on the lives of future generations.
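The arithmetic mirrors the positive case. With loss L per generation and per-generation weight delta, the whole future sums to L/(1 - delta); a heavy discount such as delta = 0.5 (an assumption invented for the sketch) values all future generations together at just twice the present one:

    # Heavy generational discounting: with loss L per generation and a
    # per-generation weight of 0.5, the infinite future sums to 2 * L,
    # barely more than the present generation alone. Illustrative only.
    L, delta = 1.0, 0.5
    print(L / (1 - delta))  # 2.0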
I have argued that infinities destabilise the decision function behind effective altruism, and given examples. The proposed solutions (count all costs, discount future utility, discount future lives lost) may seem at once defeatist and short-termist. They are neither. By restricting decisions to a finite benefit/cost space, we make decisions on a human scale: we avoid bad decisions, avoid being paralysed into inaction, and avoid betting on a unicorn.