How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs

[Edit Oct 2021 – For some reason folk are still reading this post (it keeps getting the occasional upvote) so adding a note to say I have an updated 2021 post that makes similar points in a hopefully better, less confrontational, although slightly more long-winded way. See: https://forum.effectivealtruism.org/posts/wyHjpcCxuqFzzRgtX/a-practical-guide-to-long-term-planning-and-suggestions-for ]



I recently wrote, perhaps somewhat flippantly, that effective altruism longtermism relies on armchair philosophising and the occasional back-of-the-envelope expected value calculation, with little or no concern about those calculations being highly uncertain as long as the numbers tend to infinity. Upon deeper reflection I have decided that there is truth in this (at least a technical sense in which it is correct).

This post explores some of my thoughts in a little more detail. The summary is:

Expected value calculations[1], the favoured approach for EA decision making, are all well and good for comparing evidence-backed global health charities, but they are often the wrong tool for dealing with situations of high uncertainty, the domain of EA longtermism.

I reached this conclusion by digging into the stories of communities who deal with decision making under uncertainty, so I am going to start by sharing those stories with you.

(Disclaimer: I would like to note that I am not a historian. I think I have got the intellectual ideas correct but do worry that I have slapped a historical narrative on a bunch of vague dates drawn from Wikipedia. Would love more research, but doing the best I can in my spare time.)

Are you sitting comfortably? OK, let’s begin:

Story 1: RAND and the US military

In a way this story begins in the 1920s, when the economist Frank Knight made a distinction between risk and uncertainty:
Risk denotes the calculable (the probability of an event times the loss if the event occurred) and thus controllable part of all that is unknowable. The remainder is the uncertain—incalculable and uncontrollable

But our story does not really take off until the late 1980s, when the RAND Corporation and the US military began looking at how to harness growing computing power and other tools to make better decisions in situations of high uncertainty. This work developed especially as the US military adjusted its plans in the post-Cold War era. As it became apparent that computing power was not sufficient to predict the future, these tools and models focused less on trying to predict and more on supporting decision makers to make the best decisions despite the uncertainty.

Tools included things like:

  • Assumption Based Planning – writing down an organization’s plans, then identifying the load-bearing assumptions and assessing the vulnerability of the plan to each assumption.

  • Exploratory Modeling – rather than trying to model all available data to predict the most likely outcome, these models map out a wide range of assumptions and show how different assumptions lead to different consequences (see the sketch after this list).

  • Scenario planning[2] – identifying the critical uncertainties, developing a set of internally consistent descriptions of future events based on each uncertainty, then crafting plans that are robust[3] to all the options.
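
To make the flavour of these tools concrete, here is a minimal sketch in Python of the exploratory modelling idea, paired with a minimax-regret rule for picking a robust[3] plan. The plans, assumption ranges and payoff model are all invented for illustration; real DMDU work uses proper domain models and far richer assumption spaces.

```python
# A toy sketch of exploratory modelling with a robust (minimax-regret) choice.
# Plans, assumptions and the payoff model are all invented for illustration.
import itertools

plans = ["invest_early", "wait_and_see", "hedge"]   # hypothetical plans
growth_rates = [0.00, 0.02, 0.05]                   # uncertain assumption 1
shock_sizes = [0.0, 0.3, 0.6]                       # uncertain assumption 2

def payoff(plan, growth, shock):
    """Toy outcome model: each plan trades upside against shock exposure."""
    upside = {"invest_early": 10, "wait_and_see": 6, "hedge": 8}[plan]
    exposure = {"invest_early": 12, "wait_and_see": 4, "hedge": 7}[plan]
    return upside * (1 + growth) - exposure * shock

# Explore every combination of assumptions rather than predicting one future.
scenarios = list(itertools.product(growth_rates, shock_sizes))
best_in_scenario = {s: max(payoff(p, *s) for p in plans) for s in scenarios}

# Regret = shortfall versus the best plan in that scenario; the robust plan
# minimises the worst-case regret across all explored futures.
worst_regret = {
    p: max(best_in_scenario[s] - payoff(p, *s) for s in scenarios)
    for p in plans
}
print(worst_regret, "->", min(worst_regret, key=worst_regret.get))
```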

Research developed into the 21st century and gradually became more in-depth. The idea of different levels of uncertainty sprang up, and the cases with the most uncertainty became known as Deep Uncertainty. This is where:
analysts either struggle to or cannot specify the appropriate models to describe interactions among the system’s variables, select the probability distributions to represent uncertainty about key parameters in the models, and/​or value the desirability of alternative outcomes

As the community of researchers and academics interested in this grew it attracted other fields, encompassing policy analysts, engineers and, most recently, climate scientists. And more and more tools were developed (with even more boring-sounding names), such as:

  • Consensus building tools – research on how to present ideas to decision makers to facilitate collaboration, highlight the full implications of various options and lead to the best policy solution being chosen.

  • Engineering Options Analysis – a process for assigning a value to flexibility and optionality.

  • Info-Gap Decision Theory – a non-probabilistic, non-predictive decision tool that computationally evaluates plans to maximise robustness[3] to failure modes (or similar metrics); a toy sketch follows this list.
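
Info-Gap Decision Theory also lends itself to a small sketch. In this toy version (the capacity plans, demand parameter and performance requirement are all hypothetical) the question asked of each plan is: how large a deviation from our best estimate can you absorb while still meeting the requirement? Note that no probabilities appear anywhere.

```python
# A toy info-gap robustness sketch; all numbers are invented for illustration.
BEST_ESTIMATE = 1.0   # best guess of an uncertain demand parameter
REQUIREMENT = 5.0     # minimum acceptable performance level

def performance(capacity, demand):
    """Toy model: performance degrades as capacity and demand diverge."""
    return 10 - 4 * abs(capacity - demand)

def robustness(capacity, step=0.01, max_alpha=5.0):
    """Largest alpha such that performance meets REQUIREMENT for every demand
    within alpha of BEST_ESTIMATE. (Checking only the interval endpoints is
    valid here because this toy model worsens as demand moves away from
    capacity.)"""
    alpha = 0.0
    while alpha <= max_alpha:
        worst = min(performance(capacity, BEST_ESTIMATE - alpha),
                    performance(capacity, BEST_ESTIMATE + alpha))
        if worst < REQUIREMENT:
            return max(0.0, alpha - step)
        alpha += step
    return max_alpha

plans = {"small": 0.8, "medium": 1.2, "large": 2.0}  # hypothetical capacities
print({name: round(robustness(c), 2) for name, c in plans.items()})
```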

The current academic community is still backed by RAND. It is now a beautifully odd mix of serious military types and young bubbly climate change researchers. It focuses on developing the field of Decision Making Under Deep Uncertainty, or as they call it, DMDU.

One thing DMDU practitioners dislike is predictions. They tend to take the view that trying to predict the probability of different outcomes in situations of deep uncertainty is an unnecessary step that adds complexity and contributes minimal value to decision making.[4] None of the tools listed above involve predicting the probabilities of future events. They say:
“all the extant forecasting methods—including the use of expert judgment, statistical forecasting, Delphi and prediction markets—contain fundamental weaknesses.” … “Decisionmaking in the context of deep uncertainty requires a paradigm that is not based on predictions of the future”.[5]

Story 2: risk management in industry

Traditional risk management is fairly simple. List the possible risks, work out the impact that each risk could have and the likelihood of each risk. Then mitigate or plan for the risks, prioritising the highest-impact and most likely risks.
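
As a toy illustration (the risks and scores are invented), the traditional approach boils down to something like this:

```python
# Toy traditional risk register: rank risks by impact x likelihood.
# Risks and scores are invented for the example.
risks = {
    "supplier failure": {"impact": 4, "likelihood": 0.30},
    "data breach":      {"impact": 8, "likelihood": 0.05},
    "key staff leave":  {"impact": 3, "likelihood": 0.50},
}
ranked = sorted(risks, key=lambda r: risks[r]["impact"] * risks[r]["likelihood"],
                reverse=True)
for name in ranked:
    score = risks[name]["impact"] * risks[name]["likelihood"]
    print(f"{name}: {score:.2f}")  # mitigate the top of this list first
```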

Now this approach has to be adapted for situations of high uncertainty. How can you prioritise on likelihood if you have only the vaguest idea of what the likelihood actually is?

We can draw some lessons from this report from 1992 on risk in nuclear power stations. One recommendation is, for high-impact low-probability risks, to put less weight on the likelihood assessment. The guidance also cautions against overusing cost-benefit analyses for prioritising. It says they are a useful tool but should not be the only way of making decisions about risks.

In 2008 the financial crisis hit. Governments responded with sweeping regulations to reduce risks across the financial sector. Senior bankers were now responsible for the mistakes of everyone underneath them and could even face criminal punishment for not doing enough to address risk in their firms. This drove innovation in Enterprise Risk Management best practice across the finance sector.

The current best practice mostly does away with likelihood assessments. It primarily uses a vulnerability assessment approach: risks are assessed and compared in terms of both the scale of the risks and the level of vulnerability of the business to those risks. There is an assumption that all risks could feasibly materialise, and a focus on reducing vulnerability to them and building preparedness.

This approach is complemented by developing two sets of worst-case scenarios. The first set illustrates the scale of the risk and expected damage pre-mitigation (using the assumption that there is no risk response planning), which allows risks to be compared. The second set illustrates the level of residual risk and damage expected after mitigation, which highlights to decision makers the level of damage they are still willing to accept and the cut-off point at which further mitigation is deemed too costly.

The vulnerability assessment approach has a number of advantages. The approach highlights the gaps that need closing and supports flexible risk planning. In situations of high uncertainty it reduces the extent to which important decisions are made based on highly speculative predictions of risk likelihoods that can be orders of magnitude out, as well as saving risk assessors from unproductive and difficult debates over assessing likelihood. The approach also avoids needing to specify a timeline over which the risk assessment is made.
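
Here is a minimal sketch of what that might look like, again with invented risks and scores. Notice that likelihood never appears, and that the pre-/post-mitigation gap is what gets presented to decision makers:

```python
# Toy vulnerability-based assessment; risks and scores are invented.
# Each risk gets a scale score and pre-/post-mitigation vulnerability scores
# (0 to 1); likelihood is never estimated, all risks are assumed feasible.
risks = {
    "supplier failure": {"scale": 4, "vuln_pre": 0.9, "vuln_post": 0.3},
    "data breach":      {"scale": 8, "vuln_pre": 0.7, "vuln_post": 0.4},
    "key staff leave":  {"scale": 3, "vuln_pre": 0.8, "vuln_post": 0.2},
}

for name, r in sorted(risks.items(),
                      key=lambda kv: kv[1]["scale"] * kv[1]["vuln_pre"],
                      reverse=True):
    # First scenario set: damage if the risk materialises with no mitigation.
    pre = r["scale"] * r["vuln_pre"]
    # Second scenario set: residual damage after mitigation; the gap between
    # the two shows what mitigation buys and where the accepted cut-off sits.
    post = r["scale"] * r["vuln_post"]
    print(f"{name}: pre-mitigation={pre:.1f}, residual={post:.1f}")
```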

In the last few years these approaches have been adopted more widely, for example in government agencies.

Interlude

Maybe you can already tell where I am going with this. I have talked about two other communities that deal with uncertainty. And, based on my rather hazy histories, it does appear that both communities have, over time, shifted away from making decisions based on predicting the future, expected value calculations and cost-benefit analyses, and have developed bespoke tools for handling situations of high uncertainty.

The effective altruism community looks a bit different...

Story 3: effective altruism

There is a problem with giving to charity – the donor is not the recipient – so there is no feedback, no inbuilt mechanism to ensure that the donor understands the impact of their donations or to ensure that the charity uses donations as effectively as possible. And so the world became rife with ineffective charities that had minimal impact. By the early 2000s the idea that charity does not work was trending.

For people who cared about making the world better, those ideas were a bit of a blow. But surely, they reasoned, there had to be some programs somewhere that worked. In the mid 2000s some Oxford academics decided to work out how to give effectively, focusing on the world’s poorest and using tools like the DCP2 to compare interventions. Turns out that the problem of doing good can be solved with maths and expected value calculations. Hurrah! Giving What We Can was set up in 2009 to spread the good word.

As the community grew it spread into new areas – Animal Charity Evaluators was founded in 2012 looking at animal welfare – and the community also connected to the rationalist community worried about AI and to academics at FHI thinking about the long-term future. Throughout all of this, expected value calculations remained the gold standard for making decisions on how to do good. The idea was to shut up and multiply. Even as effective altruism decision makers spread into areas of greater and greater uncertainty they (as far as I can tell) have mostly continued to use the same decision making tools (expected value calculations), without much questioning of whether these were the best tools.

Why might expected value calculations not be the best tools for thinking about the long term?

[Note: Apologies. I edited this section after posting and made a few other changes based on feedback in the comments that this post was hard to engage with. I mostly aimed to make the crux of my argument clearer and less story-like. For fairness to readers I will try to avoid further changes, but do let me know if anything still doesn’t make sense]

Firstly, we have seen from the stories above that the best approaches to risk management are more advanced than just looking at the probability and scale of each risk, and that dealing with situations of high uncertainty requires more tools than simple cost-benefit analysis. These shifts have allowed those experts to make quicker, better, less error-prone decisions.[6] Maybe the EA community should shift too.

Secondly, the more tools the better. When it comes to decision making, the more decision making tools of different types you use, the better. For example, if you are deciding which project to work on you can follow your intuition, speak to friends, speak to experts, do an expected value calculation, steelman the case against each project, scope out each project in more detail, refer to a checklist of what makes a useful project, and so on. The more of these you use, the better the decision will be.[7] Furthermore, the more bespoke a decision making tool is to a specific situation, the better. For example, traders might use decision making tools carefully designed to assist them in the kind of trading they are doing.[8]

Thirdly, there is some evidence that the expected value calculation approach does not work in the longtermist context. For example, I recommend reading this great paper by the Global Priorities Institute that looked at the problem of cluelessness[9] and concluded that the only solution was to use decision making processes other than expected value calculations.

I would also ask: if the tools we were using were not the right tools for thinking about the long-term future, if they had a tendency to lead us astray, how would we know? What would that look like? Feedback is weak; the recipients of our charity do not yet exist. We could look at other communities and try to find the best tools available (which returns us to the first point above), we could try to identify whether our tools were working (the third point above), or we could just use a more diverse range of tools (the second point above).

Please do note the difference between “expected value” and “expected value calculations as a decision making tool”. I am not claiming in this post that maximising the true expected value of your actions is not the ideal aim of all decision making.[10] Just that we need better tools in our decision making arsenal.

Also note that I am not saying expected value calculations are useless. Expected value calculations and cost-benefit analyses are useful tools, especially for comparisons between different domains. But in cases of high uncertainty they can mislead and distract from better tools.

So what? you may be asking. Maybe we can do a bit more scenario planning or something; how does this affect longtermist EAs?

What could this mean for longtermist EAs?

(Or, the 8 mistakes of longtermist EAs, number 6 will shock you!)

So if you accept for a moment the idea that in situations of high uncertainty it may make sense to avoid cost-benefit analysis and to develop and use other bespoke decision making tools, what might this suggest about longtermism in EA?

Some suggestions below for how we might currently be getting things wrong:

1. There is too much focus on expected value calculation as the primary decision making tool.

The obvious point: the longtermism community could use a wider tool set, ideally one more customised for high uncertainty, such as the tools listed in the stories above. This may well lead to different conclusions (elaborated on in the points below).

2. Undervaluing unknown risks and broad interventions to shape the future

In most cases using expected value calculations is not actively bad, although it may be a waste of time, but one notable flaw is that expected value calculations can give a false impression of precision. This can lead to decision makers investing too heavily (or not enough) in specific options when the case for doing so is actually highly uncertain. It can also lead to decision makers ignoring unknown unknowns. I expect a broader toolkit would lead to the EA community focusing more on broad interventions to shape the future and less on specific risks than it has done to date.

3. Not planning sufficiently for when the world changes.

DMDU highlights the advantages of a prepare-and-adapt approach over a predict-then-act approach. I felt that some people in the EA community seemed somewhat surprised that the arguments about AI made in Bostrom’s book Superintelligence did not apply well to the world of machine learning AI five years later.[11] A prepare-and-adapt approach could imply putting more effort into identifying and looking out for key future indicators that might signal that plans should change. It might also push individuals towards more regularly reviewing their assumptions and building more general expertise, e.g. risk management policy or general policy design skills, rather than specifically AI policy.[12]

4. Searching for spurious levels of accuracy.

I see some EA folk who think it is useful to estimate things like the expected number of future people, how many stars humans might colonise and so forth (GPI and CLR and to some degree OpenPhil have done research like this). I expect there is no need and absolutely minimal use for work like this. As far as I can tell other groups that work with high uncertainty decisions try to avoid this kind of exercise and I don’t think other sectors of society debate such spurious numbers. We certainly do not need to look beyond the end of the life of the sun (or anywhere even close to that) to realise the future is big and therefore important.[13]

5. Overemphasising the value of improving forecasting

The EA community has something of an obsession with prediction and forecasting tools, putting high value on improving forecasting methodologies and on encouraging the adoption of forecasting practice. These are nice and all, and good for short- to medium-term planning. But they should be seen as only one tool of many in a diverse toolbox, and probably not even the best tool for long-term planning.

6. Worrying about non-problems such as the problem of cluelessness or Pascal’s Mugging.

I highly recommend reading the aforementioned GPI paper, Heuristics for clueless agents, which looks into the problem of cluelessness.[9] It concludes that there are no solutions that involve making expected value calculations of your actions; the only solutions are to shift to other forms of decision making.

I would take a tiny step further and suggest that the whole problem of cluelessness is a non-problem. It is just the result of trying to use expected value calculations as a decision making tool where it is not appropriate to do so. As soon as you realise that humans have more ways of making decisions the problem “promptly disappears in a puff of logic”[14].

I expect the Pascal’s mugging problem[15] is similarly a non-problem arising from using the wrong decision making tools. I think the EA community should stop getting bogged down in these issues.

7. Overconfidence in the cause prioritisation work done to date

Given all of the above, given the gulf between how EA folk think about risk and how risk professionals think about risk, given the lack of thought about decision tools, one takeaway I have is that we should be a bit wary of the longtermist cause prioritisation work done to date.

This is not meant as a criticism of all the amazing people working in this field, simply a statement that it is a difficult problem and we should be cautious of overestimating how far we have come.

8. Not learning enough from other existing communities

Sometimes it feels to me like the EA community has a tendency to stick its head in the sand. I expect there is a wealth of information out there from existing communities about how to make decisions about the future, about policy influencing, about global cooperation, about preventing extreme risks. We could look at organisations that plan for the long term, historical examples of groups trying to drive long-term change and current groups working on causes EAs care about. I don’t think EA folk are learning enough from these sources.

For example, I expect that the longtermism community could benefit from looking at business planning strategies.[16] Maybe the longtermism community could benefit from a clearer vision of what it wants the world to look like in 3, 10 or 30 years time.[17]

Thank you for reading

How much does this all matter?

On the one hand, I don’t think further investigation along these lines will greatly weaken the case for caring about the long-run future and future risks. For a start, expected value calculations are not bad; they are just not always the best tool for the job. Furthermore, most approaches to managing uncertainty seem to encourage a focus on preventing extreme risks from materialising.

On the other hand, I think it is plausible that someone approaching longtermism with a different toolkit might reach different conclusions. For me, looking into all of this strengthens my belief that doing the most good is really difficult and often lacking in clear feedback loops, that it is easy to be led astray and become overconfident, and that you need to find ways to do good that have reasonable evidence supporting them.[18]

I think we need to think a little more carefully about how we shape a good future for all.

I hope you find this interesting and I hope this post sparks some discussion. Let me know which of the 8 points above you agree or disagree with.

Footnotes and bits I cut

[1] In case it needs clarifying, expected value calculations involve: taking some options, predicting the likelihood and utility of each possible outcome, multiplying and summing probabilities and utilities to give the expected future utility of each option, then going with whichever option has the highest number.
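
As a toy version in code (all numbers invented):

```python
# Footnote [1] as code: a bare-bones expected value calculation (toy numbers).
options = {
    "option_a": [(0.9, 10), (0.1, -5)],   # (probability, utility) pairs
    "option_b": [(0.5, 40), (0.5, -20)],
}
expected = {name: sum(p * u for p, u in outcomes)
            for name, outcomes in options.items()}
print(expected, "->", max(expected, key=expected.get))  # pick the highest EV
```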

[2] RAND actually developed scenario planning techniques much earlier than this, I think in the 1950s, but they were used as the basis for the further tools developed at this time.

[3] The robust option is the option that produces the most favourable outcome across a broad range of future scenarios. The point is to minimise the chance of failure across scenarios.

[4] DMDU practitioners would argue that predictions are unnecessary. Those involved in DMDU would ask: why build a complex model to predict the outcome of option x and other models for options y and z, when you could instead build a complex model to tell you the best thing to do to minimise regret across options x, y, z and everything in between? “Within the world of DMDU, a model is intended to be used not as a prediction tool, but as an engine for generating and examining possible futures (i.e., as an exploration tool)”. Those involved in risk management would say that trying to make predictions leads to lengthy, unnecessary arguments about details.

[5] All the italicized quotes in this section are from Chapters 1 and 2 of “Decision Making under Deep Uncertainty: From Theory to Practice”, which I recommend reading if you have the time.

[6] Apparently there is a whole book on this called Radical Uncertainty, by the economists John Kay and Mervyn King.

[7] There is a good talk on this here: https://www.youtube.com/watch?v=BbADuyeqwqY

[8] See Judgment in Managerial Decision Making by Max Bazerman, I think Chapter 12.

[9] The problem of cluelessness is the idea that when you are deciding on any action there is so much uncertainty about the long-term effects that it is impossible to know if it was the correct action. Talking to a friend might lead to them leaving to go home 5 minutes later, which could lead to a whole chain of effects culminating in them meeting someone new and marrying that person, and then their great-great-great-grandchild is the next Einstein or Hitler. You just cannot know.

[10] I don’t have a view on this, and am not sure how useful it would be to have a view on this.

[11] https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/

[12] I am not super in with the rationalist community, but I wonder if it is missing exploration of these topics. I have read “Harry Potter and the Methods of Rationality” and this seems to be the mistake Harry makes: he decides that Quirrell is not Voldemort and then doesn’t review as new evidence comes to light. His “shut up and multiply” rationality approach to decision making does not capture uncertainty and change well.

[13] I would add that even if your decision tool of choice is an expected value calculation, trying to pinpoint such numbers can be problematic. This paper (p20) by GPI highlights a “less is more” effect, whereby adding tangentially relevant inputs to models decreases their predictive accuracy.

[14] Quote from The Hitchhiker’s Guide to the Galaxy, by Douglas Adams

[15] Pascal’s mugging is a philosophical problem discussed sometimes in EA. Imagine a person stops you in the street and says “Oy you – give us ya wallet – otherwise I will use my advanced technology powers to cause infinite suffering”. An expected value calculation says that if there is a tiny non-zero chance they are telling the truth and the suffering could be infinite, then you should do what they say. I repeatedly tried to explain Pascal’s mugging to DMDU experts. They didn’t seem to understand the problem. At the time I thought I was explaining it badly, but reading more on this topic I think it is just a non-problem: it only appears to be a problem to those whose only decision making tool is an expected value calculation.
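
For concreteness, here is the mugging arithmetic as a toy calculation (all numbers invented):

```python
# Toy numbers for the mugging: however small the probability you assign to
# the threat being real, a large enough threatened loss swamps the wallet.
p_threat_real = 1e-15           # absurdly small credence in the mugger's claim
wallet_utility_loss = 50        # cost of handing over the wallet
threatened_utility_loss = 1e20  # stand-in for "infinite" suffering

ev_hand_over = -wallet_utility_loss                   # -50
ev_refuse = -p_threat_real * threatened_utility_loss  # -100000.0
print("hand over" if ev_hand_over > ev_refuse else "refuse")  # "hand over"
```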

[16] I have not looked in detail, but I see business planning on quarterly, annual, 3-year and then longer cycles. Organisations, even those with long-term goals, do not make concrete plans more than 30 years ahead, according to The Good Ancestor by Roman Krznaric. (Which is probably the most popular book on longtermism, yet for some reason no one in EA has read it. Check it out.)

[17] Like if I play chess: I don’t plot the whole game in my mind, or even try to. I play for position; I know that moving my pieces to places where they have more reach gives a stronger position, so I make those moves. Maybe the animal rights community does this kind of thinking better, having a very clear long-term goal to end factory farming and a bunch of very clear short-term goals to promote veganism and develop meat alternatives.

[18] I think that highly unusual claims require a lot of evidence. I believe I can save the life of a child in the developing world for ~£3000 and can see a lot of evidence for this. I am keen to support more research in this area, but if I had to decide today between donating to, say, technical AI safety research like MIRI or an effective developing-world charity like AMF, I would give to the latter.