FHI Report: Stable Agreements in Turbulent Times


I have just published the linked Technical Report for the Center for the Governance of AI at the Future of Humanity Institute. I have reproduced the Introduction here:

Introduction

This century, advanced artificial intelligence (“Advanced AI”) technologies could radically change economic or political power. Such changes produce a tension that is the focus of this Report. On the one hand, the prospect of radical change provides the motivation to craft, ex ante, agreements that positively shape those changes. On the other hand, a radical transition increases the difficulty of forming such agreements, since we are in a poor position to know what the transition period will entail or produce. The difficulty and importance of crafting such agreements are positively correlated with the magnitude of the changes from Advanced AI. The difficulty of crafting long-term agreements in the face of radical changes from Advanced AI is the “turbulence” with which this Report is concerned. This Report attempts to give readers a toolkit for making stable agreements—ones that preserve the intent of their drafters—in light of this turbulence.

Many agreements deal with similar problems to some extent. Agreements shape future rights and duties, but are made with imperfect knowledge of what this future will be like. To take a real-life example, the outbreak of war could lead to nighttime lighting restrictions, rendering a long-term rental of neon signage suddenly useless to the renter. Had the renter foreseen such restrictions, he would surely have entered into a different agreement. Much of contract law is aimed at addressing similar problems.

However, turbulence is particularly problematic for pre-Advanced AI agreements that aim to shape the post-Advanced AI world. More specifically, turbulence is a problem for such agreements for three main reasons:

1. Uncertainty: Not knowing what the post-Advanced AI state of the world will be (even if all the possibilities are known);
2. Indeterminacy: Not knowing what the possible post-Advanced AI states of the world are; and
3. Unfamiliarity: The possibility that the post-Advanced AI world will be very unfamiliar to those crafting agreements pre-Advanced AI.

The potential speed of a transition between pre- and post-Advanced AI states exacerbates these issues.

Indeterminacy and unfamiliarity are particularly problematic for pre-Advanced AI agreements. Under uncertainty alone (and assuming the number of possible outcomes is manageable), it is easy to specify rights and duties under each possible outcome. However, it is much more difficult to plan for an indeterminate set of possible outcomes, or a set of possible outcomes containing unfamiliar elements.

A common justification for the rule of law is that it promotes stability by increasing predictability and therefore the ability to plan. Legal tools, then, should provide a means of minimizing disruption of pre-Advanced AI plans during the transition to a post-Advanced AI world.

Of course, humanity has limited experience with Advanced AI-level transitions. Although analysis of how legal arrangements and institutions weathered similar transitional periods would be valuable, this Report does not offer it. Rather, this Report surveys the legal landscape and identifies common tools and doctrines that could reduce disruption of pre-Advanced AI agreements during the transition to a post-Advanced AI world. Specifically, it identifies common contractual tools and doctrines that could faithfully preserve the goals of pre-Advanced AI plans, even if unforeseen and unforeseeable societal changes from Advanced AI render the formal content of such plans irrelevant, incoherent, or suboptimal.

A key conclusion of this Report is this: stable preservation of pre-Advanced AI agreements could require parties to agree ex ante to be bound by some decisions made post-Advanced AI, with the benefit of increased knowledge. By transmitting (some) key, binding decision points forward in time, actors can mitigate the risk of being locked into naïve agreements that have undesirable consequences when applied literally in uncontemplated circumstances. Parties can often constrain those ex post choices by setting standards for them ex ante.

This Report aims to help nonlawyer readers develop a legal toolkit to accomplish what I am calling “constrained temporal decision transmission.” All mechanisms examined herein allow parties to be bound by future decisions, as described above; this is “temporal decision transmission.” However, as this Report demonstrates, these choices must be constrained because binding agreements require a degree of certainty sufficient to determine parties’ rights and duties. As a corollary, this Report largely does not address solely ex ante tools for stabilization, such as risk analysis, stabilization clauses, or fully contingent contracting. For each potential tool, this Report summarizes its relevant features and then explains how it accomplishes constrained temporal decision transmission.

My aim is not to provide a comprehensive overview of each relevant tool or doctrine, but to provide readers with information that enables them to decide whether to investigate a given tool further. Readers should therefore consider this Report more of a series of signposts to potentially useful tools than a complete, ready-to-deploy toolkit. As a corollary, deployment of any tool in the context of a particular agreement necessitates careful design and implementation, with special attention to how the governing law treats that tool. Finally, this Report often focuses on how tools are most frequently deployed. Depending on the specific tool and jurisdiction, however, readers might well be able to deploy tools in non-standard ways. They should be aware that there is a tradeoff between novelty in tool substance and legal predictability.

The tools examined here are:

Options—A contractual mechanism that prevents an offeror from revoking her offer, and thereby allows the offeree to accept at a later date;
Impossibility doctrines—Background rules of contract and treaty law that release parties from their obligations when circumstances dramatically change;
Contractual standards—Imprecise contractual language that determines parties’ obligations in varying circumstances;
Renegotiation—Releasing parties from obligations under certain circumstances with the expectation that they will agree on alternative obligations; and
Third-party resolution—Submitting disputes to a third party with authority to issue binding determinations.

Although the tools studied here typically do not contemplate changes as radical as Advanced AI, they will hopefully still be useful in pre-Advanced AI agreements. By carefully deploying these tools (individually or in conjunction), readers should be able to ensure that the spirit of any pre-Advanced AI agreement survives a potentially turbulent transition to a post-Advanced AI world.

A permanent archive of the Report can be found here.
