How to Understand and Mitigate Risk (Crosspost from LessWrong)

This post wouldn’t be possible without support from the EA Hotel.

Epistemic Status: Fairly certain these distinctions are pointing at real things, less certain that the categories are exactly right. There are still things I don’t know how to fit into this model, such as using Nash Equilibria as a strategy for adversarial environments.

Instrumental Status: Very confident that you’ll get better outcomes if you start using these distinctions where previously you had less nuanced models of risk.


Transparent Risks

Transparent risks are those risks that can be easily quantified and known in advance. They’re equivalent to the picture above, with a transparent bag where I can count the exact number of marbles in each bag. If I’m also certain about how much each marble is worth, then I have a simple strategy for dealing with risks in this situation.

How to Mitigate Transparent Risks: Do the Math

The simple strategy for transparent risks like the one above is to do the math.

Expected Value

Expected value is a simple bit of probability theory: multiply the probability of each outcome by its payoff, then add those up to get your long-run value over time. It’s a simple way to figure out whether the risk is worth the reward in any given situation. The best introduction I know to expected value is here.
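
As a minimal worked sketch of that arithmetic (the bag contents and dollar values below are hypothetical placeholders, not from the post):

```python
# Expected value of one draw from a fully transparent bag of marbles.
# Counts and payoffs are hypothetical placeholders.
bag = {
    "green": {"count": 6, "payoff": 5},    # +$5 each
    "red":   {"count": 3, "payoff": -10},  # -$10 each
    "blue":  {"count": 11, "payoff": 1},   # +$1 each
}

total = sum(m["count"] for m in bag.values())
ev = sum(m["count"] / total * m["payoff"] for m in bag.values())
print(f"Expected value per draw: ${ev:.2f}")  # (6*5 - 3*10 + 11*1) / 20 = $0.55
```

A positive expected value says the game is worth playing repeatedly; the Kelly criterion below is about how much to stake each time when you can’t afford to lose everything.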

Kelly Criterion

The Kelly criterion is helpful when losing your entire bankroll is worse than other outcomes. I don’t fully understand it, but you should, and Zvi wrote a post on it here. (If someone would be willing to walk me through a few examples and show me where all the numbers in the equation come from, I’d be very grateful.)
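
For the simplest case (a single binary bet with known probabilities), the standard formula is f* = p - (1 - p)/b, where p is the probability of winning and b is the net odds (the amount won per $1 staked). A minimal sketch, assuming that simple setting and not substituting for Zvi’s post:

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Fraction of your bankroll to stake on a simple binary bet.

    p_win:    probability the bet wins
    net_odds: amount won per $1 staked (1.0 for an even-money bet)
    """
    return p_win - (1 - p_win) / net_odds

print(kelly_fraction(0.60, 1.0))  # 60% to win even money -> stake 0.2 of bankroll
print(kelly_fraction(0.25, 5.0))  # 25% to win at 5:1      -> stake 0.1 of bankroll
```

Staking more than this fraction increases the chance of exactly the kind of bankroll-destroying drawdown the criterion is designed to avoid.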

Transparent Risks in Real Life

Drunk Driving

Driving drunk is a simple, well-studied risk for which you can quickly find probabilities of crash, injury, and death to yourself and others. By comparing these costs to the cost of cab fare (and the time needed to get your car in the morning if you left it), you can make a relatively transparent and easy estimate of whether it’s worth driving at your Blood Alcohol Content level (spoiler alert: no, if your BAC is anywhere near .08 on either side). The same method can be used for any well-studied risks that exist within tight, slow-changing bounds.
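
A toy version of that comparison; every number below is a made-up placeholder, so treat it as a template for the real, well-studied figures rather than as an estimate:

```python
# Expected-cost comparison: drive home impaired vs. take a cab.
# All numbers are hypothetical placeholders; substitute real figures
# for your BAC level before trusting the output.
p_crash = 0.002          # chance of a crash on this trip
cost_of_crash = 500_000  # averaged cost of injury, liability, vehicle, etc.
cab_cost = 40            # fare, plus fetching the car in the morning

expected_cost_driving = p_crash * cost_of_crash  # $1,000 with these placeholders
print("Take the cab" if cab_cost < expected_cost_driving else "Drive")
```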

Commodity and Utility Markets

While most business opportunities are not transparent risks, an exception exists for commodities and utilities (in the sense meant by Wardley Mapping). It’s quite easy to research the cost of creating a rice farm, or a power plant, as well as get a tightly bounded probability distribution for the expected price you can sell your rice or electricity at after making the initial investment. These markets are very mature, and wild swings or unexpected innovations that significantly change the market are unlikely. However, because these risks are transparent, competition drives margins down. The winners are those who can squeeze a little extra margin through economies of scale or other monopoly effects like regulatory capture.

Edit: After being pointed to the data on commodities, I no longer lump them in with utilities as transparent risks and would call them more Knightian.

Opaque Risks

Opaque risks are those risks that could be easily quantified and are unlikely to change, but which haven’t already been quantified and aren’t easy to quantify just by research. They’re equivalent to the picture above, with an opaque bag that you know contains a fixed number of marbles of certain types, but not the ratio of marbles to each other. As long as I’m sure that the bag contains only three types of marbles, and that the distribution is relatively static, a simple strategy for dealing with these risks emerges.

How to Mitigate Opaque Risks: Determine the Distribution

The simple strategy for opaque risks is to figure out the distribution. For instance, by pulling a few marbles at random out of the bag, you can over time become more and more sure about the distribution in the bag, at which point you’re now dealing with transparent risks. The best resource I know of for techniques to determine the distribution of opaque risks is How to Measure Anything by Douglas Hubbard.

Sampling

Sampling involves repeatedly drawing from the distribution in order to get an idea of what the distribution is. In the picture above, it would involve simply reaching your hand in and pulling a few marbles out. The bigger your sample, the more sure you can be about the underlying distribution.
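
A minimal simulation of that idea; the true bag contents below are placeholders, hidden from you in real life but written down here so you can watch the estimate converge:

```python
import random
from collections import Counter

# Hidden truth (a placeholder): 50% green, 30% red, 20% blue.
true_bag = ["green"] * 50 + ["red"] * 30 + ["blue"] * 20

for n in (10, 100, 1000):
    draws = Counter(random.choices(true_bag, k=n))  # sample with replacement
    print(n, {color: count / n for color, count in draws.items()})
# The bigger the sample, the closer the estimated ratios get to 50/30/20,
# and the more the opaque risk behaves like a transparent one.
```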

Modelling

Modelling involves breaking down the factors that create the distribution into pieces that are as transparent as possible. The classic example from Fermi estimation is how many piano tuners there are in Chicago: that number may be opaque to you, but the number of people in Chicago is relatively transparent, as is the percentage of people that own pianos, the likelihood that someone will want their piano tuned, and the amount of money that someone needs to make a business worthwhile. These more transparent factors can be used to estimate the opaque factor of piano tuners.
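
A sketch of that Fermi estimate; each input is a rough placeholder assumption, and the point is that every one of them is more transparent than the final number:

```python
# Fermi estimate: piano tuners in Chicago, built from more transparent pieces.
population = 2_700_000                   # people in Chicago (roughly)
people_per_household = 2.5
households_with_piano = 1 / 20           # assumed share of households with a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 4 * 5 * 50  # ~4 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # on the order of 50 tuners
```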

Opaque Risks in Real Life

Choosing a Career You Don’t Like

In the personal domain, opaque risks often take the form of very personal things that have never been measured because they’re unique to you. As a career coach, I often saw people leaping forward into careers that were smart from a global perspective (likely to grow, good pay, etc.) while ignoring the more personal factors. The solution was a two-tier sampling approach: do a series of informational interviews for the top potential job titles and potential industries, and then, for the top 1-3 careers/industries, see if you can do a form of job shadowing. This significantly cut down the risk by making an opaque choice much more transparent.

Building a Product Nobody Wants

In the business domain, solutions that are products (in Wardley Mapping terms) but are not yet commoditized often qualify as opaque risks. In this case, simply talking to customers, showing them a solution, and asking if they’ll pay can save a significant amount of time and expense before actually building the product. Material on “lean startup” is all about how to do efficient sampling in these situations.

Knightian Risks

Knightian risks are those risks that exist in environments with distributions that are actively resistant to the methods used with opaque risks. There are three types of Knightian risks: Black Swans, Dynamic Environments, and Adversarial Environments.

A good portion of “actually trying to get things done in the real world” involves working with Knightian risks, so most of the rest of this essay will focus on breaking them down into their various types and talking about the various solutions to them.

Milan Griffes has written about Knightian risks in an EA context on the EA Forum, calling them “cluelessness”.

Types of Knightian Risks

Black Swans

A black swan risk is an unlikely, but very negative, event that can occur in the game you choose to play.

In the example above, you could do a significant amount of sampling without ever pulling the dynamite. However, this is quite likely a game you would want to avoid given the presence of the dynamite in the bag. You’re likely to severely overestimate the expected value of any given opportunity, and then be wiped out by a single black swan. Modelling isn’t useful because very unlikely events probably have causes that don’t enter into your model, and it’s impossible to know you’re missing them because your model will appear to be working accurately (until the black swan hits). A great resource for learning about black swans is the eponymous Black Swan, by Nassim Taleb.

Dynamic Environments

When your risks are changing faster than you can sample or model them, you’re in a dynamic environment. This is a function of how big the underlying population is, how good you are at sampling/modelling, and how quickly the distribution is changing.

A traditional sampling strategy as described above involves first sampling, finding out your risks in different situations, then finally “choosing your game” by making a decision based on your sample. However, when the underlying distribution is changing rapidly, this strategy is rendered moot, as the information your decision was based on quickly becomes outdated. The same argument applies to a modelling strategy as well.

There’s not a great resource I know of for really grokking dynamic environments, but an OK one is Thinking in Systems by Donella Meadows (great book, but only OK for grokking the inability to model dynamic environments).

Adversarial Environments

When your environment is actively (or passively) working to block your attempts to understand it and mitigate risks, you’re in an adversarial environment.

Markets are a typical example of an adversarial environment, as are most other zero-sum games with intelligent opponents. They’ll be actively working to change the game so that you lose, and any change in your strategy will change their strategy as well.

Ways to Mitigate Knightian Risks

Antifragility

Antifragility is a term coined by Nassim Taleb to describe systems that gain from disorder. If you think of the games described above as being composed of distributions, plus payoff rules that describe how you react to those distributions, antifragility is a look at how to create flexible payoff rules that can handle Knightian risks. Taleb has an excellent book on antifragility that I recommend if you’d like to learn more.

In terms of the “marbles in a bag” metaphor, antifragility is a strategy where pulling out marbles that hurt you makes sure you get less and less hurt over time.

  • Optionality

Optionality is a heuristic that says you should choose the options that allow you to take more options in the future. The idea here is to choose policies that lower your inertia and switching costs between strategies: avoid huge bets and long time horizons that can make or break you, while developing agile and nimble processes that can change quickly. This is the principle from which all other antifragile principles are generated.

This helps with black swans by allowing you to quickly change strategies when your old strategy is rendered moot by a black swan. It helps with dynamic environments by allowing your strategy to change as quickly as the distribution does. It helps with adversarial environments by giving you more moves to use against changing opponents.

Going with the bag of marbles example, imagine there are multiple bags of marbles, and the distributions are changing over time. Originally, it costs quite a lot to switch between bags. The optionality strategy says you should be focused on lowering the cost of switching between bags over time.

  • Hormesis

Hormesis is a heuristic that says that when negative outcomes befall you, you should work to make that class of outcomes less likely to hurt you in the future. When something makes you weak temporarily, you should ultimately use that to make yourself stronger in the long run.

This helps with black swans by gradually building up resistance to certain classes of black swans BEFORE they hit you. It helps with rapidly changing distributions by continually adapting to the underlying changes with hormetic responses.

In the bag of marbles example, imagine that at the start pulling a red marble was worth -$10. Every time you pulled a red marble, you worked to reduce the harm of red things by 1/10. This would mean that in an environment with lots of red marbles, you would quickly become immune to them. It would also mean that if you eventually did pull out that stick of dynamite, your general ability to handle red things would mean that it would hurt you less.

(I get that the above example is a bit silly, but the general pattern of immunity to small events helping you with immunity to black swans in the same class is quite common.)
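
A toy simulation of that marble example; the -$10 starting harm and the 1/10 reduction per exposure follow the text above, and the bag contents are placeholders:

```python
import random

# Hormesis in the marble game: each red draw hurts, but also makes
# future red draws (including the "dynamite" one) hurt 1/10 less.
harm_per_red = 10.0
total_harm = 0.0
bag = ["red"] * 30 + ["green"] * 70   # hypothetical bag contents

for marble in random.choices(bag, k=200):
    if marble == "red":
        total_harm += harm_per_red
        harm_per_red *= 0.9           # resistance built from this exposure
print(f"total harm: ${total_harm:.2f}; next red marble costs: ${harm_per_red:.2f}")
```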

  • Evolution

The evolution heuristic says that you should constantly be creating multiple variations on your current strategies, and keeping those that avoid negative consequences over time. Just like biological evolution, you’re looking to find strategies that are very good at survival. Of course, you should be careful about calling up blind idiot gods, and be cautious about the temptation to optimize gains instead of minimizing downside risk (which is how the heuristic should be used).

This helps with black swans in a number of ways. Firstly, by diversifying your strategies, it’s unlikely that all of them will be hit by black swans. Secondly, it has an effect similar to hormesis, in which immunity to small effects can build up immunity to black swans in the same class. Finally, by having strategies that outlive several black swans, you develop general survival characteristics that help against black swans in general. It helps with dynamic environments by having several strategies, some of which will hopefully be favored by the environmental changes.
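
A minimal sketch of the heuristic as a selection loop; the variants, shocks, and thresholds are all hypothetical placeholders:

```python
import random

def shock_loss(riskiness: float) -> float:
    """Loss a strategy variant takes when a random shock hits; riskier variants lose more."""
    return riskiness * random.uniform(0, 100)

strategies = [random.uniform(0, 1) for _ in range(20)]  # riskiness of each variant

for generation in range(10):
    survivors = [s for s in strategies if shock_loss(s) < 30]       # cull the big losers
    mutants = [min(1.0, max(0.0, s + random.gauss(0, 0.05))) for s in survivors]
    strategies = (survivors + mutants)[:20] or [random.uniform(0, 1)]

print(f"average riskiness after selection: {sum(strategies) / len(strategies):.2f}")
# Because selection pressure is on avoiding large losses rather than on gains,
# the surviving variants drift toward lower downside exposure.
```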

  • The Barbell Strategy

The barbell strategy refers to splitting your activities between those that are very safe, with low downside, and those that are very risky, with high upside. Previously, Benquo has argued against the barbell strategy, arguing that there is no such thing as a riskless strategy. I agree with this general idea, but think that the framework I’ve provided in this post gives a clearer way to talk about what Nassim means: split your activities between transparent risks with low downsides, and Knightian risks with high upsides.

The transparent risks obviously aren’t riskless (that’s why they’re called risks), but they behave relatively predictably over long time scales. When they DON’T behave predictably is when there are black swans, or an equilibrium is broken such that a relatively stable environment becomes an environment of rapid change. That’s exactly when the Knightian risks with high upside tend to perform well (because they’re designed to take advantage of these situations). That’s also why this strategy is great for handling black swans and dynamic environments. It’s less effective at handling adversarial environments, unless there are local incentives in the adversarial environment to think more short-term than this strategy does.
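
A toy barbell split to make the shape of the payoff concrete; the allocation, returns, and probabilities are hypothetical placeholders:

```python
# Barbell: most of the bankroll in a transparent low-downside activity,
# a small slice in a Knightian bet with huge upside.
safe_fraction, risky_fraction = 0.90, 0.10
safe_return = 0.02
risky_outcomes = {0.0: 0.95, 20.0: 0.05}   # payoff multiple -> probability

expected = safe_fraction * (1 + safe_return) + risky_fraction * sum(
    multiple * p for multiple, p in risky_outcomes.items()
)
worst_case = safe_fraction * (1 + safe_return)   # the risky slice goes to zero
print(f"expected wealth multiple: {expected:.3f}, worst case: {worst_case:.3f}")
# The downside is capped near -8% even if the Knightian bet fails completely,
# while the rare 20x outcome supplies the upside.
```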

  • Via Negativa

Via negativa is a principle that says to continually chip away at sources of downside risk, working to remove the bad instead of increasing the good. It also says to avoid games that have obviously large sources of downside risk. The principle here is that downside risk is unavoidable, but by making it a priority to remove sources of downside risk over time, you can significantly improve your chances.

In the bag of marbles example, this might look like getting a magnet that can over time begin to suck all the red marbles/items out of the bag, so you’re left with only the positive-value marbles. For a more concrete example, this would involve paying off debt before investing in new equipment for a business, even if the rate of return from the new equipment would be higher than the rate of interest on the loan. The loan is a downside risk that could be catastrophic in the case of a black swan that prevented that upside potential from emerging.

This helps deal with black swans, dynamic environments, and adversarial environments by making sure you don’t lose more than you can afford if the distribution takes a turn for the worse.
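
A toy version of the debt example; all of the figures are hypothetical placeholders, chosen only to show how the outstanding loan changes the shape of a bad year:

```python
cash, debt = 50_000, 50_000
loan_interest, equipment_return = 0.05, 0.12

def net_position(pay_off_debt: bool, black_swan: bool) -> float:
    """Net position after one year under each policy and scenario."""
    if pay_off_debt:
        equipment_value, remaining_debt = 0.0, 0.0        # the cash retired the loan
    else:
        equipment_value = cash * (1 + equipment_return)   # the cash bought equipment
        remaining_debt = debt * (1 + loan_interest)
    if black_swan:
        equipment_value *= 0.4                            # the hoped-for upside never shows up
    return equipment_value - remaining_debt

for black_swan in (False, True):
    print(f"black swan={black_swan}: pay off debt {net_position(True, black_swan):+,.0f}, "
          f"buy equipment {net_position(False, black_swan):+,.0f}")
# Normal year: buying the equipment wins by a little.
# Black-swan year: the loan that is still outstanding turns a setback into a deep hole.
```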

  • Skin in the Game

Skin in the game is a principle that comes from applying antifragility at a systems level. It says that in order to encourage individuals and organizations to create antifragile systems, they must be exposed to the downside risk that they create.

If I can create downside risk for others that I am not exposed to, I can create a locally antifragile environment that nonetheless increases fragility globally. The skin-in-the-game principle aims to combat two forces that create these Molochian, locally antifragile environments: moral hazards and negative externalities.

Effectuation

Effectuation is a term coined by Saras Sarasvathy to describe a particular type of proactive strategy she found when studying expert entrepreneurs. Instead of looking to mitigate risks by choosing strategies that were flexible in the presence of large downside risks (antifragility), these entrepreneurs instead worked to shift the distribution such that there were no downside risks, or shift the rules such that the risks were no longer downsides. There’s not a book I can recommend that’s great at explaining effectuation, but two OK ones are Effectuation by Saras Sarasvathy and Zero to One by Peter Thiel. This 3-page infographic on effectuation is also decent.

Note that Effectuation and Antifragility explicitly trade off against each other: Antifragility trades away certainty for flexibility, while Effectuation does the opposite.

In terms of the “marbles in a bag” metaphor, Effectuation can be seen as pouring a lot of marbles that are really helpful to you into the bag, then reaching in and pulling them out.

  • Pilot-in-Plane Principle

The pilot-in-plane principle is a general way of thinking that says control is better than both prediction and antifragility. It emphasizes proactively shaping risks and rewards, instead of creating a system that can deal with unknown or shifting risks and rewards. The quote that best summarizes this principle is Peter Drucker’s: “The best way to predict the future is to create it.”

This principle isn’t much use with black swans. It deals with dynamic environments by seizing control of the forces that shape those dynamic environments. It deals with adversarial environments by shaping the adversarial landscape.

  • Affordable Loss Principle

The affordable loss principle simply says that you shouldn’t risk more than you’re willing to lose on any given bet. It’s Effectuation’s answer to the via negativa principle. The difference is that while via negativa recommends policies that search for situations with affordable downside and focus on mitigating unavoidable downside, affordable loss focuses on using your resources to shape situations in which the loss of all parties is affordable.

It’s not enough to just make bets you can afford to lose; you have to figure out how to do this while maximizing upside. Can you get a bunch of people to band together to put in a little, so that everyone can afford to lose what they’re putting in, but you have a seat at the table? Can you have someone else shoulder the risk who can afford to lose more? Can you get guarantees or insurance to minimize downside risk while still getting the upside? Many of these moves break the skin-in-the-game principle needed for antifragility, but work perfectly (without calling up Moloch) when used as part of an effectual strategy. This is the affordable loss principle.

It helps with black swans by creating buffers that protect against catastrophic loss. It helps with dynamic environments by keeping what you can lose constant even as the environment changes. It helps with adversarial environments by making sure you can afford to lose to your adversary.

  • Bird-in-Hand Principle

The bird-in-hand principle says that you should use your existing knowledge, expertise, connections, and resources to shift the distribution in your favor. It also says that you should only choose to play games where you have enough of these existing resources to shift the distribution. Peter Thiel says to ask the question “What do I believe that others do not?” Saras Sarasvathy says to look at who you are, what you know, and who you know.

This helps with black swans by preventing some of them from happening. It helps with dynamic environments by seizing control of the process that is causing the environment to change, making most of the change come from you. It helps with adversarial environments by ensuring that you have an unfair advantage in the game.

  • Lemonade Principle

The lemonade principle says that when the unexpected happens, you should use it as an opportunity to re-evaluate the game you’re playing, and see if there’s a more lucrative game you should be playing instead. Again, the idea of “make the most of a bad situation” might seem obvious, but through the creative and proactive lens of effectuation, it’s taken to the extreme. Instead of saying “What changes can I make to my current approach given this new situation?” the lemonade principle says to ask “Given this new situation, what’s the best approach to take?”

This helps with black swans by using them as lucrative opportunities for gaining utility. It helps with dynamic environments by constantly finding the best opportunity given the current landscape. It helps with adversarial environments by refusing to play losing games.

  • Patchwork Quilt Principle

The patchwork quilt principle says that you should trade flexibility for certainty by bringing on key partners. The partners get to have more of a say in the strategies you use, but in turn you get access to their resources and the certainty that they’re on board.

While the original work on effectuation paints this principle as only having to do with partnerships, I like to think of it as a general principle where you should be willing to limit your options if it limits your downside risk and volatility more. It’s the inverse of the optionality principle from the antifragile strategies.

This strategy doesn’t really help with black swans that much. It helps with dynamic environments by making the environment less dynamic through commitments. It helps with adversarial environments by turning potential adversaries into allies.

Capability Enhancement

Capability enhancement is a general strategy of trying to improve capabilities such that Knightian risks are turned into opaque risks (which are then turned into transparent risks through sampling and modelling). Unlike the previous two ways to mitigate Knightian risk, this is more a class of strategies than a strategy in its own right. In terms of the “marbles in a bag” analogy, capability enhancement might be building x-ray goggles to look through the bag, or getting really good at shaking it to figure out the distribution.

Black swans can be turned opaque by knowing more (and having fewer unknown unknowns). Dynamic environments can be turned opaque by increasing the speed of sampling or modelling, or the accuracy or applicability of models. Adversarial environments can be turned opaque by developing better strategies to model or face adversaries (and their interactions with each other).

There are numerous classification schemes one could use for all the various types of capability enhancement. Instead of trying to choose one, I’ll simply list a few ways that I see people trying to approach this, with no attempt at completeness or consistent levels of abstraction.

  • Personal Psychology Enhancement

By making people think better, work more, and be more effective, you can increase the class of problems that become opaque to them. This is one approach that CFAR and Leverage are taking.

  • Better Models

By creating better models of how the world works, risks that were previously Knightian to you become opaque. I would put Leverage, FHI, and MIRI into the class of organizations that are taking this approach to capability enhancement. The Sequences could fit here as well.

  • Better Thinking Tools

By creating tools that can themselves help you model things, you can make risks opaque that were previously Knightian. I would put Verity, Guesstimate, and Roam in this category.

  • Improving Group Dynamics

By figuring out how to work together better, organizations can turn risks from Knightian to opaque. Team Team at Leverage and CFAR’s work on group rationality both fit into this category.

  • Collective Intelligence and Crowdsourcing

By figuring out how to turn a group of people into a single directed agent, you can often shore up individuals’ weaknesses and amplify their strengths. This allows risks that were previously Knightian to individuals to become opaque to the collective.

I would put Metaculus, Verity, and LessWrong into this category.

Knightian Risks in Real Life

0 to 1 Companies

When a company is creating something entirely new (in the Wardley Mapping sense), it’s taking a Knightian risk. Sampling is fairly useless here because people don’t know they want what doesn’t exist, and naive approaches to modelling won’t work because your inputs are all junk data from a market that doesn’t yet contain your product.

How would each of these strategies handle this situation?

  • Effectuation

Start your company in an industry where you have pre-existing connections, and in which you have models or information that others don’t (“What do you believe that others do not?”). Before building the product, get your contacts to pay up front to get you to build it, thereby limiting risk. If something goes wrong in the building of the product, take all the information you’ve gathered and the alliances you’ve already made, and figure out what the best opportunity is with that information and those resources.

  • Anti-Fragility

Create a series of small experiments with prototypes of your products. Keep the ones that succeed, and branch them off into more variations, only keeping the ones that do well. Avoid big contracts like the one in the effectuation example, taking only small contracts that let you pivot at a moment’s notice if needed.

  • Capability Enhancement

Create a forecasting tournament for the above product variations. Test only the ones that have positive expected value. Over time, you’ll have fewer and fewer failed experiments as your reputation measures get better. Eventually, you may be able to skip many experiments altogether and just trust the forecasting data. If you’re interested in this type of thing we should really chat.

AGI Risk

At first glance, it seems like many of these strategies, such as Effectuation, apply more to individual or group risks than to global risks. It’s not clear, for instance, how an effectual strategy of shifting the risks to people who can handle them applies on a society-wide scale. I do, however, think that this categorization scheme has something to say about existential risk, and will illustrate with a few examples of ways to mitigate AGI risk. I recognize that many of these examples are incredibly simplified and unrealistic. The aim is simply to show how this categorization scheme can be used to meaningfully think about existential risk, not to make actual policy suggestions or leaps forward.

How might we mitigate AI risk using the strategies discussed here?

  • Capability Enhancement/Modelling/Sampling

A capability enhancement/sampling/modelling strategy might be to get a bunch of experts together and forecast how soon we’ll get AGI. Then, get a bunch of forecasting experts together and create a function that determines how long it takes to develop benevolent AGI given the number of AI safety researchers. Finally, create a plan to hire enough AI safety researchers that we develop the ability to create safe AGI before we develop the ability to create unsafe AGI. If we find that there’s simply no way to discover AI safety fast enough given current methods, create tools to get better at working on AI safety. If we find that the confidence intervals on AGI timelines are too wide, create tools that can allow us to narrow them.

  • Anti-fragility

An anti-fragile strategy might look like developing enough awareness of AI risk, and enough funding, that two AI safety researchers are hired for every non-safety AI researcher that is hired. Thus, the more you expose yourself to the existential risk of AGI, the faster you create the mechanism that protects you from that risk. This might be paired with a system that tries different approaches to AI safety, and every few years splits the groups that are doing the best into two, thus evolving a system that increases the effectiveness of AI safety researchers over time.

  • Effectuation

The effectual strategy, instead of taking the timeline for AI as a given, would instead ask “How can we change this timeline such that there’s less risk?” Having asked that question, and recognizing that pretty much any answer exists in an adversarial environment, the question becomes “What game can we play that we, as effective altruists, have a comparative advantage at compared to our adversaries?” If the answer is something like “We have an overabundance of smart, capable people who are willing to forgo both money and power for altruistic reasons,” then maybe the game we play is getting a bunch of effective altruists to run for local offices in municipal elections, and influence policy from the ground up by coordinating laws on a municipal level, creating a large effect of requiring safety teams for ML teams (among many other small policies). Obviously a ridiculous plan, but it does illustrate how the different risk mitigation strategies can suggest vastly different object-level policies.

Exercise for the reader: Robin Hanson worries about a series of catastrophic risks that tax humanity beyond its resources (I can’t find the article to link here, but if someone knows it let me know in the comments). We might be able to handle climate change, or an asteroid, or an epidemic on their own, but if by chance they hit together, we pass a critical threshold that we simply can’t recover from.

How would you analyze and mitigate this situation of “stacked catastrophic risks” using the framework above?

Thanks to Linda Linsefors for reviewing early drafts.