FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good

Full Report

Summary for AIES

Over the long run, technology has improved the human condition. Nevertheless, the economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence ("AI"), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.

What is the Wind­fall Clause?

The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By "extremely large profits," or "windfall," we mean profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities. It is unlikely, but not implausible, that such a windfall could occur; as such, the Windfall Clause is designed to address a set of low-probability future scenarios which, if they come to pass, would be unprecedentedly disruptive. By "ex ante," we mean that we seek to have the Clause in effect before any individual AI firm has a serious prospect of earning such extremely large profits. "Donate" means, roughly, that the donated portion of the windfall will be used to benefit humanity broadly.
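To make the idea concrete, here is a minimal sketch of one possible "windfall function" of the bracketed kind the full report discusses, where the donation obligation rises with profits measured as a share of gross world product (GWP). All thresholds and rates below are hypothetical placeholders for illustration, not the report's recommendation.

```python
# Purely illustrative sketch of a bracketed "windfall function".
# Profits are measured as a fraction of gross world product (GWP);
# each bracket applies a marginal donation rate, analogous to
# income-tax brackets. All numbers are hypothetical placeholders.

# (lower_bound, upper_bound, marginal_rate), bounds as fractions of GWP
BRACKETS = [
    (0.000, 0.001, 0.00),  # below 0.1% of GWP: no obligation
    (0.001, 0.010, 0.01),  # 0.1%-1% of GWP: 1% marginal rate
    (0.010, 0.100, 0.20),  # 1%-10% of GWP: 20% marginal rate
    (0.100, 1.000, 0.50),  # above 10% of GWP: 50% marginal rate
]

def windfall_obligation(profits: float, gwp: float) -> float:
    """Donation owed on `profits`, given gross world product `gwp`."""
    share = profits / gwp
    owed = 0.0
    for lo, hi, rate in BRACKETS:
        if share <= lo:
            break
        # portion of the profit share that falls inside this bracket
        taxable_share = min(share, hi) - lo
        owed += taxable_share * gwp * rate
    return owed

if __name__ == "__main__":
    gwp = 100e12  # roughly $100 trillion, order-of-magnitude figure
    for profits in (50e9, 2e12, 15e12):
        owed = windfall_obligation(profits, gwp)
        print(f"profits ${profits:,.0f} -> donate ${owed:,.0f}")
```

Under these placeholder numbers, ordinary profits (say, $50 billion) trigger no obligation at all, while a firm earning 2% of GWP would owe about $209 billion; the marginal structure is what keeps the Clause costless to sign today while binding only in genuine windfall scenarios.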

Motivations

Properly enacted, the Windfall Clause could address several potential problems with AI-driven economic growth. The distribution of profits could compensate those rendered unemployed through no fault of their own by advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable. The Clause also gives AI labs a credible, tangible mechanism for demonstrating their commitment to pursuing advanced AI for the common global good. Finally, it is a concrete proposal that may stimulate further proposals and discussion about how best to mitigate AI-driven disruption.

Motivations Specific to Effective Altruism

Most EA resources devoted to AI have so far focused on extinction risks. One might wonder whether the problems addressed by the Windfall Clause are really as pressing as those.

However, a long-term future in which advanced forms of AI such as artificial general intelligence (AGI) or transformative AI (TAI) arrive but primarily benefit a small portion of humanity would still be highly suboptimal. Failure to ensure that advanced AI benefits all could "drastically curtail" the potential of Earth-originating intelligent life. Intentional or accidental value lock-in could result if, for example, a TAI does not cause extinction but is programmed to primarily benefit the shareholders of the corporation that develops it. The Windfall Clause thus represents a legal response to this sort of scenario.

Limitations

There remain significant unresolved issues regarding the exact content of an eventual Windfall Clause and how it would be implemented. We intend this report to spark a productive discussion, and recommend that these uncertainties be explored through public and expert deliberation. Critically, the Windfall Clause is only one of many possible solutions to the problem of concentrated windfall profits in an era defined by AI-driven growth and disruption. In publishing this report, our hope is not only to encourage constructive criticism of this particular solution, but more importantly to inspire open-minded discussion of the full set of solutions in this vein. In particular, while a potential strength of the Windfall Clause is that it initially requires no governmental intervention, we acknowledge, and are thoroughly supportive of, public solutions as well.

Next steps

We hope this report contributes an ambitious and novel policy proposal to an already rich discussion. More important than the policy itself, though, is the broader conversation about the economic promises and challenges of AI and how to ensure that AI benefits humanity as a whole; we look forward to contributing to it continuously. Over the coming months, we will be working with the Partnership on AI and OpenAI to push such conversations forward. If you work in economics, political science, or AI policy and strategy, please contact me to get involved.
