Effective policy? Requiring liability insurance for dual-use research

Hi all,

I thought people might be interested in some of the policy work the Global Priorities Project has been looking into. Below I’m cross-posting some notes on one policy idea. I’ve talked to several people with expertise in biosafety and had positive feedback, and am currently looking into how best to push further on this (it will involve talking to people in the insurance industry).

In general, quite a bit of policy is designed by technocrats and is already quite effective. Other areas are governed by public opinion, which makes it very hard to gain any traction. When we’ve looked into policy, we’ve been interested in finding areas that navigate between these extremes, and that don’t sound too outlandish, so they have a reasonable chance of broad support.

I’d be interested in hearing feedback on this from EAs. Criticisms and suggestions are also very much welcome!

---

Requiring liability insurance for dual-use research with potentially catastrophic consequences

These are notes on a policy proposal aimed at reducing catastrophic risk. They cover some of the advantages and disadvantages of the idea at a general level; they do not yet constitute a proposal for a specific version of the policy.

Research produces large benefits. In some cases it may also pose novel risks, for instance work on potential pandemic pathogens. There is widespread agreement that such ‘dual use research of concern’ poses challenges for regulation.

There is a convincing case that we should avoid research with large risks if we can obtain the benefits just as effectively with safer approaches. However, there do not currently exist natural mechanisms to enforce such decisions. Government analysis of the risk of different branches of research is a possible mechanism, but it must be performed anew for each risk area, and may be open to political distortion and accusations of bias.

We propose that all laboratories performing dual-use research with potentially catastrophic consequences should be required by law to hold insurance against damaging consequences of their research.

This market-based approach would force research institutions to internalise some of the externalities and thereby:

  • Encourage university departments and private laboratories to work on safer research, when the benefits are similar;

  • Incentivise the insurance industry to produce accurate assessments of the risks;

  • Incentivise scientists and engineers to devise effective safety protocols that could be adopted by research institutions to reduce their insurance premiums.

Current safety records do not always reflect an appropriate level of risk tolerance. For example, the economic damage caused by the escape of the foot and mouth virus from a BSL-3 or BSL-4 lab in Britain in 2007 was high (mostly through trade barriers) and could have been much higher (the previous outbreak in 2001 caused £8 billion of damage). If the lab had known it was liable for some of these costs, it might have taken even more stringent safety precautions. In the case of potential pandemic pathogen research, insurers might require that it take place at BSL-4, or that labs implement other technical safety improvements such as “molecular biocontainment”.
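
To make the incentive mechanism concrete, here is a minimal illustrative sketch (in Python) of how an actuarially fair premium might respond to a safety investment. All figures are hypothetical placeholders chosen for illustration, not estimates of real escape probabilities or damages.

```python
# Illustrative sketch: how an actuarially fair premium would reward safety investment.
# All figures below are hypothetical placeholders, not real risk estimates.

def fair_premium(p_escape: float, expected_damage: float, loading: float = 0.3) -> float:
    """Annual premium = expected loss, plus an insurer's loading for costs and tail risk."""
    return p_escape * expected_damage * (1 + loading)

# Hypothetical lab: baseline containment vs. an upgraded safety protocol.
baseline = fair_premium(p_escape=1e-3, expected_damage=8e9)   # damages on the order of the 2001 outbreak
upgraded = fair_premium(p_escape=1e-4, expected_damage=8e9)   # upgrade cuts escape probability tenfold

print(f"Baseline premium: £{baseline:,.0f} per year")
print(f"Upgraded premium: £{upgraded:,.0f} per year")
print(f"Annual saving:    £{baseline - upgraded:,.0f}")
# If the upgrade costs less than the premium saving, the lab has a direct financial
# incentive to adopt it, which is the mechanism the proposal relies on.
```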

Possible criticisms and responses

  • The potential risks are too large, and nobody would be willing to insure against them.

    • We could avoid this by placing an appropriate cap on the amount of insurance that could be required. If the cap were a sufficiently large sum (perhaps in the billions of dollars), the effect should still be more appropriate risk aversion, even if the tail risk were not fully internalised.

  • The risks are too hard to model, and nobody would be willing to insure against them.

    • There are insurance markets for some risks that are arguably harder to model and have equally high potential losses, such as terrorism.

  • We already have demanding safety standards, so this wouldn’t reduce risk.

    • Many of the current safety standards focus on the occupational health and safety of lab workers rather than on risks to the general public.

    • The market-driven approach proposed would focus attention on whichever steps were rationally believed to have the largest effect on reducing risk, and reduce other bureaucratic hurdles.

    • Liability has been useful in improving behaviour in other domains, for example industrial safety.

  • It is hard to draw a line around the harmful effects of research. Should we punish research which enables others to perform harmful acts?

    • This is a hard question, but we think we would get much of the benefit by using the simple rule that labs are liable only for the direct consequences of their work: for example, the release, accidental or deliberate, of a pathogenic virus manufactured in that lab.

  • Research has positive externalities, and it is unfair if researchers have to internalise only the negative ones.

    • This is true, although research receives funding for precisely this reason.

    • If we don’t make an attempt to introduce liability then we are effectively subsidising unsafe research relative to safe research.

  • Why require insurance rather than simply imposing liability? Shouldn’t this be a decision for the individuals?

    • Some work may be sufficiently risky that the actors cannot afford to self-insure. In such circumstances it makes sense to require insurance (just as we require car insurance for drivers).

    • This will help to ensure that appropriate analysis of the risks is performed.

  • Which research should this apply to? How can we draw a line?

    • The bureaucracy involved would make it too costly to impose this requirement on all research. Instead, we should adapt existing guidelines on which areas need extra oversight. Potential pandemic pathogen research is an obvious area to include first.