Effective policy? Requiring liability insurance for dual-use research

Hi all,

I thought people might be interested in some of the policy work the Global Priorities Project has been looking into. Below I’m cross-posting some notes on one policy idea. I’ve talked to several people with expertise in biosafety and had positive feedback, and am currently looking into how best to push further on this (it will involve talking to people in the insurance industry).

In general, quite a bit of policy is designed by technocrats and is already quite effective. Other areas are governed by public opinion, which makes it very hard to gain any traction. When we’ve looked into policy, we’ve been interested in finding areas which navigate between these extremes and also don’t sound too outlandish, so that they have some reasonable chance of broad support.

I’d be interested in hearing feedback on this from EAs. Criticisms and suggestions also very much welcome!

---

Requiring liability insurance for dual-use research with potentially catastrophic consequences

These are notes on a policy proposal aimed at reducing catastrophic risk. They cover some of the advantages and disadvantages of the idea at a general level; they do not yet constitute a proposal for a specific version of the policy.

Research produces large benefits. In some cases it may also pose novel risks, for instance work on potential pandemic pathogens. There is widespread agreement that such ‘dual-use research of concern’ poses challenges for regulation.

There is a convincing case that we should avoid research with large risks if we can obtain the benefits just as effectively with safer approaches. However, no natural mechanism currently exists to enforce such decisions. Government analysis of the risk of different branches of research is a possible mechanism, but it must be performed anew for each risk area, and may be open to political distortion and accusations of bias.

We propose that all laboratories performing dual-use research with potentially catastrophic consequences should be required by law to hold insurance against damaging consequences of their research.

This market-based approach would force research institutions to internalise some of the externalities and thereby:

  • Encourage university departments and private laboratories to work on safer research, when the benefits are similar;

  • Incentivise the insurance industry to produce accurate assessments of the risks;

  • Incentivise scientists and engineers to devise effective safety protocols that could be adopted by research institutions to reduce their insurance premiums.

Current safety records do not always reflect an appropriate level of risk tolerance. For example, the economic damage caused by the escape of the foot-and-mouth virus from a BSL-3 or BSL-4 lab in Britain in 2007 was high (mostly through trade barriers) and could have been much higher (the previous outbreak in 2001 caused £8 billion of damage). If the lab had known it was liable for some of these costs, it might have taken even more stringent safety precautions. In the case of potential pandemic pathogen research, insurers might require it to take place at BSL-4 or to implement other technical safety improvements such as “molecular biocontainment”.
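To make the incentive mechanism concrete, here is a rough numerical sketch of how a premium could transmit a risk signal. All figures are invented for illustration, not estimates: the premium is the annual escape probability times the expected damage, plus a loading for the insurer’s costs.

```python
# Hypothetical illustration: how an insurance premium scales with lab risk.
# All figures are invented for the example, not real estimates.

def annual_premium(p_escape, expected_damage, loading=0.3):
    """Actuarially fair premium (probability x expected damage) plus a loading factor."""
    return p_escape * expected_damage * (1 + loading)

# A lab with a 1-in-10,000 annual escape probability and £8bn expected damage:
baseline = annual_premium(1e-4, 8e9)   # about £1.04m per year

# Halving the escape probability (e.g. via better containment) halves the premium,
# giving the lab a direct financial incentive to invest in safety:
improved = annual_premium(5e-5, 8e9)   # about £0.52m per year

print(f"baseline: £{baseline:,.0f}, improved: £{improved:,.0f}")
```

The point is not the particular numbers but the structure: any safety measure that demonstrably lowers the escape probability is immediately rewarded through a lower premium.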

Possible criticisms and responses

  • The potential risks are too large, and nobody would be willing to insure against them.

    • We could avoid this by placing an appropriate limit on the amount of insurance that could be required. If it were a sufficiently large sum (perhaps in the billions of dollars), the effect should be more appropriate risk aversion, even if the tail risk beyond the cap were not fully internalised.

  • The risks are too hard to model, and nobody would be willing to insure against them.

    • There are insurance markets for some risks that are arguably harder to model and have equally high potential losses, such as terrorism.

  • We already have demanding safety standards, so this wouldn’t reduce risk.

    • Many of the current safety standards focus on the occupational health and safety of lab workers rather than on risks to the general public.

    • The market-driven approach proposed would focus attention on whichever steps were rationally believed to have the largest effect on reducing risk, and would reduce other bureaucratic hurdles.

    • Liability has been effective at improving behaviour in other domains, for example in industrial safety.

  • It is hard to draw a line around the harmful effects of research. Should we punish research which enables others to perform harmful acts?

    • This is a hard question, but we think we would get much of the benefit by using the simple rule that labs are liable only for the direct consequences of their own work – for example, the release, accidental or deliberate, of a pathogenic virus manufactured in that lab.

  • Research has positive externalities, and it is unfair if researchers have to internalise only the negative ones.

    • This is true, although research receives funding for precisely this reason.

    • If we don’t make an attempt to introduce liability, then we are effectively subsidising unsafe research relative to safe research.

  • Why require insurance rather than just impose liability? Shouldn’t this be a decision for the individuals?

    • Some work may be sufficiently risky that the actors cannot afford to self-insure. In such circumstances it makes sense to require insurance (just as we require car insurance for drivers).

    • This will help to ensure that appropriate analysis of the risks is performed.

  • Which research should this apply to? How can we draw a line?

    • It would be too bureaucratically costly to impose this requirement on all research. We should adapt existing guidelines on which areas need extra oversight. Potential pandemic pathogen research is an obvious first area to include.
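The capped-liability response above can be illustrated numerically. With a cap, the insurer prices each loss scenario truncated at the cap; the resulting premium is still substantial, and still rewards risk reduction, even though the extreme tail is not fully internalised. A minimal sketch with invented probabilities and losses:

```python
# Hypothetical: effect of a liability cap on the expected payout an insurer
# must price. All probabilities and losses are invented for illustration.

def expected_payout(scenarios, cap=None):
    """Sum of p * loss over loss scenarios, truncating each loss at the cap."""
    total = 0.0
    for p, loss in scenarios:
        total += p * (min(loss, cap) if cap is not None else loss)
    return total

# Invented annual loss distribution: frequent small incidents, rare catastrophe.
scenarios = [
    (1e-3, 1e7),    # minor release: £10m
    (1e-4, 8e9),    # major outbreak: £8bn
    (1e-6, 1e12),   # worst case: £1tn
]

uncapped = expected_payout(scenarios)           # includes the full tail
capped = expected_payout(scenarios, cap=5e9)    # liability capped at £5bn

print(f"uncapped: £{uncapped:,.0f}, capped at £5bn: £{capped:,.0f}")
```

In this toy example the capped expected payout is smaller than the uncapped one, since the worst-case tail is excluded, but it remains large enough for premiums to create meaningful risk aversion, which is the claim made in the response above.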