How to Understand and Mitigate Risk (Crosspost from LessWrong)
Epistemic Status: Fairly certain these distinctions are pointing at real things, less certain that the categories are exactly right. There are still things I don’t know how to fit into this model, such as using Nash equilibria as a strategy for adversarial environments.
Instrumental Status: Very confident that you’ll get better outcomes if you start using these distinctions where previously you had less nuanced models of risk.
Transparent risks are those risks that can be easily quantified and known in advance. They’re equivalent to the picture above, with a transparent bag where I can count the exact number of marbles in each bag. If I’m also certain about how much each marble is worth, then I have a simple strategy for dealing with risks in this situation.
How to Mitigate Transparent Risks: Do the Math
The simple strategy for transparent risks like the one above is to do the math.
Expected value is a simple bit of probability theory that says you should multiply the likelihood of an event happening by the payoff to get your long run value over time. It’s a simple way to figure out if the risk is worth the reward in any given situation. The best introduction I know to expected value is here.
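As a minimal sketch of the idea (the probabilities and payoffs below are made up for illustration), expected value is just a probability-weighted sum:

```python
# Expected value of one draw from a transparent bag of marbles.
# Invented game: 60% chance of a green marble worth +$5,
# 40% chance of a red marble worth -$2.
outcomes = [(0.60, 5.0), (0.40, -2.0)]
expected_value = sum(p * payoff for p, payoff in outcomes)
# 0.6 * 5 - 0.4 * 2 = $2.20 per draw, so this game is worth playing.
```

If the expected value came out negative, the long-run math says to decline the game entirely.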
The Kelly criterion is helpful when losing your entire bankroll is worse than other outcomes. I don’t fully understand it, but you should, and Zvi wrote a post on it here. (If someone would be willing to walk me through a few examples and show me where all the numbers in the equation come from, I’d be very grateful.)
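Since the post asks for a walkthrough: here is a sketch of the standard Kelly formula for a simple win/lose bet, with the example numbers invented for illustration.

```python
def kelly_fraction(p, b):
    """Optimal fraction of bankroll to stake on a bet you win with
    probability p, where a win pays b units per unit staked and a
    loss costs the whole stake. Formula: f* = (b*p - q) / b, q = 1 - p."""
    q = 1.0 - p
    return (b * p - q) / b

# Example 1: a 60% chance to win even money (b = 1).
# f* = (1*0.6 - 0.4) / 1 = 0.2, i.e. stake 20% of your bankroll.
even_money = kelly_fraction(0.6, 1)

# Example 2: a 25% chance at a 5-to-1 payout (b = 5).
# f* = (5*0.25 - 0.75) / 5 = 0.1, i.e. stake 10% of your bankroll.
long_shot = kelly_fraction(0.25, 5)
```

The intuition: b*p is your expected gain per unit staked, q is your chance of losing the stake, and dividing by b scales the bet down when the payout (rather than the win probability) is doing the work. A negative f* means the bet has negative edge and you shouldn't take it at all.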
Transparent Risks in Real Life
Driving drunk is a simple, well-studied risk for which you can quickly find probabilities of crash, injury, and death to yourself and others. By comparing these costs to the cost of cab fare (and the time needed to retrieve your car in the morning if you left it), you can make a relatively transparent and easy estimate of whether it’s worth driving at your blood alcohol content level (spoiler alert: no, if your BAC is anywhere near .08 on either side). The same method can be used for any well-studied risks that exist within tight, slow-changing bounds.
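The shape of that comparison looks something like this. Every number below is invented purely to show the calculation; they are not real impaired-driving statistics.

```python
# Illustrative only: assumed numbers, not real crash data.
p_crash = 0.002                # assumed chance of a crash on this one trip
avg_crash_cost = 500_000       # assumed average cost (injury, legal, vehicle)
expected_cost_driving = p_crash * avg_crash_cost   # $1,000 in expectation

cost_of_cab = 40 + 20          # fare plus the hassle of retrieving the car

# Even with assumptions generous to driving, the cab dominates.
```
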
Commodity and Utility Markets
While most business opportunities are not transparent risks, an exception exists for commodities and utilities (in the sense meant by Wardley Mapping). It’s quite easy to research the cost of creating a rice farm or a power plant, as well as to get a tightly bounded probability distribution for the expected price you can sell your rice or electricity at after making the initial investment. These markets are very mature, and there are unlikely to be wild swings or unexpected innovations that significantly change them. However, because these risks are transparent, competition drives margins down. The winners are those who can squeeze out a little extra margin through economies of scale or other monopoly effects like regulatory capture.
Edit: After being pointed to the data on commodities, I no longer lump them in with utilities as transparent risks and would call them more Knightian.
Opaque risks are those risks that can be easily quantified and are unlikely to change, but which haven’t already been quantified and aren’t easy to quantify just by research. They’re equivalent to the picture above, with an opaque bag that you know contains a fixed number of marbles of certain types, but not the ratio of the types to each other. As long as I’m sure that the bag contains only three types of marbles, and that the distribution is relatively static, a simple strategy for dealing with these risks emerges.
How to Mitigate Opaque Risks: Determine the Distribution
The simple strategy for opaque risks is to figure out the distribution. For instance, by pulling a few marbles at random out of the bag, you can over time become more and more sure about the distribution in the bag, at which point you’re now dealing with transparent risks. The best resource I know of for techniques to determine the distribution of opaque risks is How to Measure Anything by Douglas Hubbard.
Sampling involves repeatedly drawing from the distribution in order to get an idea of what the distribution is. In the picture above, it would involve simply reaching your hand in and pulling a few marbles out. The bigger your sample, the more sure you can be about the underlying distribution.
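A toy simulation of sampling (the bag's true contents are invented for the example):

```python
import random

random.seed(0)
# Hidden bag: 30% red, 50% green, 20% blue (unknown to the sampler).
bag = ["red"] * 30 + ["green"] * 50 + ["blue"] * 20

def estimate_mix(n_draws):
    """Draw n marbles with replacement and report the observed ratios."""
    draws = [random.choice(bag) for _ in range(n_draws)]
    return {c: draws.count(c) / n_draws for c in ("red", "green", "blue")}

small_sample = estimate_mix(10)      # noisy: could easily be 0.1 / 0.6 / 0.3
large_sample = estimate_mix(10_000)  # close to the true 0.3 / 0.5 / 0.2 split
```

The bigger sample converges on the true distribution, at which point you're back in transparent-risk territory and can just do the math.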
Modelling involves breaking down the factors that create the distribution into pieces that are as transparent as possible. The classic example from Fermi estimation is how many piano tuners there are in Chicago: that number may be opaque to you, but the number of people in Chicago is relatively transparent, as is the percentage of people who own pianos, the likelihood that someone will want their piano tuned, and the amount of money someone needs to make a business worthwhile. These more transparent factors can be used to estimate the opaque factor of piano tuners.
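A rough sketch of that Fermi estimate, where every input is an assumed round number rather than a researched figure:

```python
population = 2_700_000           # people in Chicago (rough)
households = population / 2.5    # assume 2.5 people per household
piano_rate = 0.05                # assume 1 in 20 households owns a piano
tunings_per_year = 1             # assume each piano is tuned yearly
tunings_per_tuner = 2 * 5 * 50   # 2 per day, 5 days/week, 50 weeks/year

pianos = households * piano_rate
tuners = pianos * tunings_per_year / tunings_per_tuner
# roughly 100 piano tuners; getting the order of magnitude right is the goal
```
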
Opaque Risks in Real Life
Choosing a Career You Don’t Like
In the personal domain, opaque risks often take the form of very personal things that have never been measured because they’re unique to you. As a career coach, I often saw people leaping into careers that were smart from a global perspective (likely to grow, good pay, etc.) but that ignored the more personal factors. The solution was a two-tier sampling approach: do a series of informational interviews for the top potential job titles and industries, and then, for the top 1-3 careers/industries, see if you can do a form of job shadowing. This significantly cut down the risk by making an opaque choice much more transparent.
Building a Product Nobody Wants
In the business domain, solutions that are products (in Wardley Mapping terms) but are not yet commoditized often qualify as opaque risks. In this case, simply talking to customers, showing them a solution, and asking if they’ll pay can save a significant amount of time and expense before actually building the product. Material on “lean startup” is all about how to do efficient sampling in these situations.
Knightian risks are those risks that exist in environments with distributions that are actively resistant to the methods used with opaque risks. There are three types of Knightian Risks: Black Swans, Dynamic Environments, and Adversarial Environments.
A good portion of “actually trying to get things done in the real world” involves working with Knightian risks, and so most of the rest of this essay will focus on breaking them down into their various types and talking about the various solutions to them.
Types of Knightian Risks
A black swan risk is an unlikely, but very negative event that can occur in the game you choose to play.
In the example above, you could do a significant amount of sampling without ever pulling the dynamite. However, this is quite likely a game you would want to avoid, given the presence of the dynamite in the bag. You’re likely to severely overestimate the expected value of any given opportunity, and then be wiped out by a single black swan. Modelling isn’t useful because very unlikely events probably have causes that don’t enter into your model, and it’s impossible to know you’re missing them, because your model will appear to be working accurately (until the black swan hits). A great resource for learning about black swans is the aptly titled The Black Swan, by Nassim Taleb.
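A toy simulation shows why sampling fails here (the payoffs are invented for illustration):

```python
import random

random.seed(42)

# Toy bag: 999 marbles pay +$1, one "dynamite" marble costs -$10,000.
true_ev = (999 * 1 + 1 * -10_000) / 1000   # -9.001 per draw: a losing game

def draw():
    return -10_000 if random.random() < 0.001 else 1

# A modest sample will usually miss the dynamite entirely, so the naive
# estimate looks like roughly +$1 per draw: free money, until it isn't.
sample = [draw() for _ in range(500)]
naive_ev = sum(sample) / len(sample)
```

The sampler's estimate and the true expected value don't just differ in magnitude; they usually differ in sign, which is exactly the failure mode that wipes you out.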
When your risks are changing faster than you can sample or model them, you’re in a dynamic environment. This is a function of how big the underlying population size is, how good you are at sampling/modelling, and how quickly the distribution is changing.
A traditional sampling strategy as described above involves first sampling, finding out your risks in different situations, then finally “choosing your game” by making a decision based on your sample. However, when the underlying distribution is changing rapidly, this strategy is rendered moot as the information your decision was based on quickly becomes outdated. The same argument applies to a modelling strategy as well.
There’s not a great resource I know of to really grok dynamic environments, but an ok resource is Thinking in Systems by Donella Meadows (great book, but only ok for grokking the inability to model dynamic environments).
When your environment is actively (or passively) working to block your attempts to understand it and mitigate risks, you’re in an adversarial environment.
Markets are a typical example of an Adversarial Environment, as are most other zero sum games with intelligent opponents. They’ll be actively working to change the game so that you lose, and any change in your strategy will change their strategy as well.
Ways to Mitigate Knightian Risks
Antifragility is a term coined by Nassim Taleb to describe systems that gain from disorder. If you think of the games described above as being composed of distributions, plus payoff rules that describe how you react to those distributions, antifragility is a look at how to create flexible payoff rules that can handle Knightian risks. Taleb has an excellent book on anti-fragility that I recommend if you’d like to learn more.
In terms of the “marbles in a bag” metaphor, antifragility is a strategy where pulling out marbles that hurt you makes sure you get less and less hurt over time.
Optionality is a heuristic that says you should choose those options which allow you to take more options in the future. The idea here is to choose policies that lower your inertia and switching costs between strategies: avoid huge bets and long time horizons that can make or break you, while developing agile and nimble processes that can change quickly. This is the principle from which all other anti-fragile principles are generated.
This helps with black swans by allowing you to quickly change strategies when your old strategy is rendered moot by a black swan. It helps with dynamic environments by allowing your strategy to change as quickly as the distribution does. It helps with adversarial environments by giving you more moves to use against changing opponents.
Going with the bag of marbles example, imagine there are multiple bags of marbles, and the distributions are changing over time. Originally, it costs quite a lot to switch between bags. The optionality strategy says you should be focused on lowering the cost of switching between bags over time.
Hormesis is a heuristic that says that when negative outcomes befall you, you should work to make that class of outcomes less likely to hurt you in the future. When something makes you weak temporarily, you should ultimately use that to make yourself stronger in the long run.
This helps with Black Swans by gradually building up resistance to certain classes of black swans BEFORE they hit you. It helps with rapidly changing distributions by continually adapting to the underlying changes with hormetic responses.
In the bag of marbles example, imagine that at the start, pulling a red marble was worth -$10. Every time you pulled a red marble, you worked to reduce the harm of red things by 1/10. This would mean that in an environment with lots of red marbles, you would quickly become immune to them. It would also mean that if you eventually did pull out that stick of dynamite, your general ability to handle red things would mean that it hurt you less.
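That toy setup can be simulated directly; the 10% adaptation rate is just the number from the example above.

```python
harm = 10.0          # initial cost of a red marble, in dollars
total_lost = 0.0
for draw_number in range(20):
    total_lost += harm   # take the hit from this red marble
    harm *= 0.9          # adapt: future red marbles hurt 10% less

# After 20 red marbles, the next one costs about $1.22 instead of $10,
# and total losses stay below the $100 geometric-series ceiling.
```

The geometric decay is the hormetic part: the same environment that keeps hurting you is also what caps your total exposure.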
(I get that the above example is a bit silly, but the general pattern of immunity to small events helping you with immunity to black swans in the same class is quite common).
The evolution heuristic says that you should constantly be creating multiple variations on your current strategies, keeping those that avoid negative consequences over time. Just like biological evolution, you’re looking for strategies that are very good at survival. Of course, you should be careful about calling up blind idiot gods, and be cautious about the temptation to optimize for gains instead of minimizing downside risk (which is how the heuristic should be used).
This helps with black swans in a number of ways. Firstly, by diversifying your strategies, it’s unlikely that all of them will be hit by black swans. Secondly, it has an effect similar to hormesis in which immunity to small effects can build up immunity to black swans in the same class. Finally, by having strategies that outlive several black swans, you develop general survival characteristics that help against black swans in general. It helps with dynamic environments by having several strategies, some of which will hopefully be favorable to the environmental changes.
The Barbell Strategy
The barbell strategy refers to splitting your activities between those that are very safe, with low downside, and those that are very risky, with high upside. Previously, Benquo has argued against the barbell strategy, arguing that there is no such thing as a riskless strategy. I agree with this general idea, but think that the framework I’ve provided in this post gives a clearer way to talk about what Nassim means: split your activities between transparent risks with low downsides and Knightian risks with high upsides.
The transparent risks obviously aren’t riskless (that’s why they’re called risks), but they behave relatively predictably over long time scales. When they DON’T behave predictably is when there are black swans, or when an equilibrium is broken such that a relatively stable environment becomes an environment of rapid change. That’s exactly when the Knightian risks with high upside tend to perform well (because they’re designed to take advantage of these situations). That’s also why this strategy is great for handling black swans and dynamic environments. It’s less effective at handling adversarial environments, unless there are local incentives in the adversarial environment to think more short term than this strategy does.
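A toy version of the barbell allocation, where the 90/10 split and all the payoffs are illustrative assumptions:

```python
def barbell_value(safe_return, risky_multiple, safe_frac=0.90):
    """Portfolio value after one period, starting from 1.0: safe_frac
    earns a small predictable return, while the remainder is staked on a
    Knightian bet that returns risky_multiple times its stake."""
    risky_frac = 1.0 - safe_frac
    return safe_frac * (1 + safe_return) + risky_frac * risky_multiple

normal_year = barbell_value(0.02, 0.0)   # risky leg wiped out: 0.918
swan_year = barbell_value(0.02, 20.0)    # risky leg pays 20x: 2.918
```

The key property: no matter how badly the risky leg does, the loss is capped at the 10% stake, while the upside from a favorable black swan is unbounded.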
Via negativa is a principle that says to continually chip away at sources of downside risk, working to remove the bad instead of increase the good. It also says to avoid games that have obviously large sources of downside risk. The principle here is that downside risk is unavoidable, but by making it a priority to remove sources of downside risks over time, you can significantly improve your chances.
In the bag of marbles example, this might look like getting a magnet that can over time begin to suck all the red marbles/items out of the bag, so you’re left with only the positive value marbles. For a more concrete example, this would involve paying off debt before investing in new equipment for a business, even if the rate of return from the new equipment would be higher than the rate of interest on the loan. The loan is a downside risk that could be catastrophic in the case of a black swan that prevented that upside potential from emerging.
This helps deal with black swans, dynamic environments, and adversarial environments by making sure you don’t lose more than you can afford given that the distribution takes a turn for the worse.
Skin in the Game
Skin in the game is a principle that comes from applying anti-fragility on a systems level. It says that in order to encourage individuals and organizations to create anti-fragile systems, they must be exposed to the downside risk that they create.
If I can create downside risk for others that I am not exposed to, I can create a locally anti-fragile environment that nonetheless increases fragility globally. The skin in the game principle aims to combat the two forces that create these Molochian environments: moral hazards and negative externalities.
Effectuation is a term coined by Saras Sarasvathy to describe a particular type of proactive strategy she found when studying expert entrepreneurs. Instead of looking to mitigate risks by choosing strategies that were flexible in the presence of large downside risks (antifragility), these entrepreneurs instead worked to shift the distribution such that there were no downside risks, or to shift the rules such that the risks were no longer downsides. There’s not a book I can recommend that’s great at explaining effectuation, but two OK ones are Effectuation by Saras Sarasvathy and Zero to One by Peter Thiel. This 3-page infographic on effectuation is also decent.
Note that Effectuation and Antifragility explicitly trade off against each other. Antifragility trades away certainty for flexibility while Effectuation does the opposite.
In terms of the “marbles in a bag” metaphor, Effectuation can be seen as pouring a lot of marbles that are really helpful to you into the bag, then reaching in and pulling them out.
The pilot-in-plane principle is a general way of thinking that says control is better than both prediction and anti-fragility. The pilot-in-plane principle emphasizes proactively shaping risks and rewards, instead of creating a system that can deal with unknown or shifting risks and rewards. The quote that best summarizes this principle is the Peter Drucker quote “The best way to predict the future is to create it.”
This principle also isn’t much use with black swans. It deals with dynamic environments by seizing control of the forces that shape those dynamic environments. It deals with adversarial environments by shaping the adversarial landscape.
Affordable Loss Principle
The affordable loss principle simply says that you shouldn’t risk more than you’re willing to lose on any given bet. It’s Effectuation’s answer to the Via Negativa principle.
The difference is that while Via negativa recommends policies that search for situations with affordable downside, and focus on mitigating unavoidable downside, Affordable loss focuses on using your resources to shape situations in which the loss of all parties is affordable.
It’s not enough to just make bets you can afford to lose; you have to figure out how to do this while maximizing upside. Can you get a bunch of people to band together to each put in a little, so that everyone can afford to lose what they’re putting in, but you still have a seat at the table? Can you have someone else shoulder the risk who can afford to lose more? Can you get guarantees or insurance to minimize downside risk while still getting the upside? Many of these moves break the skin in the game principle needed for anti-fragility, but work perfectly (without calling up Moloch) when used as part of an effectuative strategy. This is the affordable loss principle.
It helps with black swans by creating buffers that protect against catastrophic loss. It helps with dynamic environments by keeping what you can lose constant even as the environment changes. It helps with adversarial environments by making sure you can afford to lose to your adversary.
The bird-in-hand principle says that you should use your existing knowledge, expertise, connections, and resources to shift the distribution in your favor. It also says that you should only choose to play games where you have enough of these existing resources to shift the distribution. Peter Thiel says to ask the question “What do I believe that others do not?” Saras Sarasvathy says to look at who you are, what you know, and who you know.
This helps with Black Swans by preventing some of them from happening. It helps with dynamic environments by seizing control of the process that is causing the environment to change, making most of the change come from you. It helps with adversarial environments by ensuring that you have an unfair advantage in the game.
The lemonade principle says that when the unexpected happens, you should use it as an opportunity to re-evaluate the game you’re playing and see if there’s a more lucrative game you should be playing instead. Again, the idea of “make the most of a bad situation” might seem obvious, but through the creative and proactive lens of effectuation, it’s taken to the extreme. Instead of asking “What changes can I make to my current approach given this new situation?” the lemonade principle says to ask “Given this new situation, what’s the best approach to take?”
This helps with Black Swans by using them as lucrative opportunities for gaining utility. It helps with dynamic environments by constantly finding the best opportunity given the current landscape. It helps with adversarial environments by refusing to play losing games.
Patchwork Quilt Principle
The patchwork quilt principle says that you should trade flexibility for certainty by bringing on key partners. The partners get to have more of a say in the strategies you use, but in turn you get access to their resources and the certainty that they’re on board.
While the original work on effectuation paints this principle as only having to do with partnerships, I like to think of it as a general principle: you should be willing to limit your options if doing so limits your downside risk and volatility even more. It’s the inverse of the optionality principle from the anti-fragile strategies.
This strategy doesn’t really help that much with black swans. It helps with dynamic environments by making the environment less dynamic through commitments. It helps with adversarial environments by turning potential adversaries into allies.
Capability enhancement is a general strategy of trying to improve capabilities such that Knightian risks are turned into opaque risks (which are then turned into transparent risks through sampling and modelling). Unlike the previous two ways to mitigate Knightian risk, this is more a class of strategies than a strategy in its own right. In terms of the “marbles in a bag” analogy, capability enhancement might be building x-ray goggles to look through the bag, or getting really good at shaking it to figure out the distribution.
Black swans can be turned opaque by knowing more (and having fewer unknown unknowns). Dynamic environments can be turned opaque by increasing the speed of sampling or modelling, or the accuracy or applicability of models. Adversarial environments can be turned opaque by developing better strategies to model or face adversaries (and their interactions with each other).
There are numerous classification schemes one could use for all the various types of capability enhancement. Instead of trying to choose one, I’ll simply list a few ways that I see people trying to approach this, with no attempt at completeness or consistent levels of abstraction.
Personal Psychology Enhancement
By making people think better, work more, and be more effective, an individual can increase the class of problems that become opaque to them. This is one approach that CFAR and Leverage are taking.
Better Thinking Tools
By creating better models of how the world works, risks that were previously Knightian to you become opaque. I would put Leverage, FHI, and MIRI in the class of organizations that are taking this approach to capability enhancement. The Sequences could fit here as well.
Improving Group Dynamics
By figuring out how to work together better, organizations can turn risks from Knightian to opaque. Team Team at Leverage and CFAR’s work on group rationality both fit into this category.
Collective Intelligence and Crowdsourcing
By figuring out how to turn a group of people into a single directed agent, you can often shore up individuals’ weaknesses and amplify their strengths. This allows risks that were previously Knightian to individuals to become opaque to the collective.
Knightian Risks in Real Life
0 to 1 Companies
When a company is creating something entirely new (in the Wardley Mapping sense), it’s taking a Knightian risk. Sampling is fairly useless here because people don’t know they want what doesn’t exist, and naive approaches to modelling won’t work because your inputs are all data from a market that doesn’t yet contain your product.
How would each of these strategies handle this situation?
The effectual strategy: Start your company in an industry where you have pre-existing connections, and in which you have models or information that others don’t (“What do you believe that others do not?”). Before building the product, get your contacts to pay up front for you to build it, thereby limiting risk. If something goes wrong in the building of the product, take all the information you’ve gathered and the alliances you’ve already made, and figure out what the best opportunity is with that information and those resources.
The anti-fragile strategy: Create a series of small experiments with prototypes of your products. Keep the ones that succeed, and branch them off into more variations, only keeping the ones that do well. Avoid big contracts like those in the effectuation example, taking only small contracts that let you pivot at a moment’s notice if needed.
The capability enhancement strategy: Create a forecasting tournament for the above product variations. Test only the ones that have positive expected value. Over time, you’ll have fewer and fewer failed experiments as your reputation measures get better. Eventually, you may be able to skip many experiments altogether and just trust the forecasting data. (If you’re interested in this type of thing, we should really chat.)
At first glance, it seems like many of these strategies, such as Effectuation, apply more to individual or group risks than to global risks. It’s not clear, for instance, how an effectual strategy of shifting risks to people who can handle them applies on a society-wide scale. I do, however, think that this categorization scheme has something to say about existential risk, and I’ll illustrate with a few examples of ways to mitigate AGI risk. I recognize that many of these examples are incredibly simplified and unrealistic. The aim is simply to show how this categorization scheme can be used to meaningfully think about existential risk, not to make actual policy suggestions or leaps forward.
How might we mitigate AI risk using the strategies discussed here?
A capability enhancement/sampling/modelling strategy might be to get a bunch of experts together and forecast how soon we’ll get AGI. Then, get a bunch of forecasting experts together and create a function that determines how long it takes to develop benevolent AGI given the number of AI safety researchers. Finally, create a plan to hire enough AI safety researchers that we develop the ability to create safe AGI before we develop the ability to create unsafe AGI. If we find that there’s simply no way to discover AI safety fast enough given current methods, create tools to get better at working on AI safety. If we find that the confidence intervals on AGI timelines are too wide, create tools that can narrow them.
An anti-fragile strategy might look like developing a system of awareness of AI risk and enough funding that you can create a strategy where two AI safety researchers are hired for every non-safety AI researcher that is hired. Thus, the more you expose yourself to the existential risk of AGI, the faster you create the mechanism that protects you from that risk. This might be paired with a system that tries different approaches to AI safety, splitting off the groups that are doing the best every few years into two groups, thus evolving a system that increases the effectiveness of AI safety researchers over time.
The effectual strategy, instead of taking the timeline for AI as a given, would instead ask, “How can we change this timeline such that there’s less risk?” Having asked that question, and recognizing that pretty much any answer exists in an adversarial environment, the question becomes, “What game can we play that we, as effective altruists, have a comparative advantage at compared to our adversaries?” If the answer is something like “We have an overabundance of smart, capable people who are willing to forgo both money and power for altruistic reasons,” then maybe the game we play is getting a bunch of effective altruists to run for local office in municipal elections and influence policy from the ground up, coordinating laws on a municipal level to create the large-scale effect of requiring safety teams for ML teams (among many other small policies). Obviously a ridiculous plan, but it does illustrate how the different risk mitigation strategies can suggest vastly different object-level policies.
Exercise for the reader: Robin Hanson worries about a series of catastrophic risks that tax humanity beyond its resources (I can’t find the article to link here, but if someone knows it, let me know in the comments). We might be able to handle climate change, or an asteroid, or an epidemic on its own, but if by chance they hit together, we pass a critical threshold that we simply can’t recover from.
How would you analyze and mitigate this situation of “stacked catastrophic risks” using the framework above?
Thanks to Linda Linsefors for reviewing early drafts.