Excellent idea! The rest of this comment is going to be negative, but my balance of opinion is not reflected in the balance of words.
One potential downside is that dangerous research would move to other countries. However, this effect would be reduced by the dominance of the Anglosphere in many areas of research. Additionally, some research with the potential to cause local but not global disasters represents a national but not international externality, in which case other countries are also appropriately incentivised to adopt sensible precautions. So on net this does not seem to be a very big concern.
Another is the lack of any appreciation for public choice in this argument. Yes, I agree this policy would be good if implemented exactly as described. But policies that are actually implemented rarely bear much resemblance to the policies originally advocated by economists. Witness the huge gulf between the sort of financial reform that anyone actually advocates and Dodd-Frank, which as far as I’m aware satisfied basically no-one who was familiar with it. (The relevant literature here is the public choice literature.) So here are some ways this could be misimplemented:
- Politically unpopular research is crushed by being deemed dangerous. Obvious targets include research into nuclear power, racial differences, or GMOs.
- Regulatory capture means that incumbents are allowed to get away with under-insuring, while new entrants are stifled (in much the same way that financial regulation has benefited large banks at the expense of small ones).
- The regulations are applied by risk-averse regulators who under-approve, resulting in too much deterrence of risky work, as with the FDA.
- The regulators are not clever enough to work out what is actually risky, in the same way that financial regulators have proved themselves incapable of identifying systemic risks ahead of time, and central banks incapable of spotting asset bubbles. As such, the relationship between the level of liability researchers had to insure against and the true level of liability would be weak.
Why require insurance rather than just impose liability? Shouldn’t this be a decision for the individuals?
Some work may be sufficiently risky that the actors cannot afford to self-insure. In such circumstances it makes sense to require insurance (just as we require car insurance for drivers).
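To make the limited-liability worry concrete, here is a toy expected-value sketch (all numbers invented for illustration): with bare liability, an actor who cannot pay the full harm only internalises losses up to its own assets, so a socially negative project can still look privately profitable. That is the case where a mandatory-insurance requirement bites.

```python
# Toy model (invented numbers): why bare liability fails for a
# thinly capitalised, limited-liability actor.

def private_payoff(gain, p_disaster, harm, assets):
    """Expected payoff to the actor: it can lose at most its own assets."""
    return gain - p_disaster * min(harm, assets)

def social_payoff(gain, p_disaster, harm):
    """Expected payoff to society: the full harm counts."""
    return gain - p_disaster * harm

# A project worth $10m to the actor, with a 0.1% chance of $50bn harm,
# run by an entity holding only $100m in assets.
gain, p, harm, assets = 10e6, 1e-3, 50e9, 100e6

print(private_payoff(gain, p, harm, assets) > 0)  # True: actor proceeds
print(social_payoff(gain, p, harm) < 0)           # True: society loses
```

An insurer asked to cover the full $50bn would charge at least the $50m expected loss, which flips the actor’s calculation to match society’s.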
Drivers are generally individuals, whereas research is generally done by institutions. It seems plausible to me that creditworthy institutions/individuals should not have to take out car insurance. If Oxford faced a potential liability in the billions, I’m sure it would insure. I guess the main threat comes from small, limited-liability institutions whose only purpose is to do this one kind of research, and which are thus unconcerned about the downside; or from large institutions with poor internal governance.
> It seems plausible to me that creditworthy institutions/individuals should not have to take out car insurance. If Oxford faced a potential liability in the billions, I’m sure it would insure. I guess the main threat comes from small, limited liability institutions whose only purpose is to do this one kind of research, and are thus unconcerned with the downside. Or large institutions with poor internal governance.
I agree that in general it’s fine for creditworthy institutions to self-insure. The issue is that the scale of possible liability is large enough (billions of dollars, perhaps hundreds of billions of dollars) that even institutions which routinely self-insure against all risks may not be creditworthy against the worst outcomes. In some cases they are explicitly or implicitly state-backed, but if nobody in the chain has considered the possible liability you don’t get the proper incentive effects. If there were a market so that the risk of the research were priced, I’d expect better governance even at institutions which self-insured.
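The scale mismatch can be shown with two lines of arithmetic (hypothetical figures, not about any real institution): the expected annual loss can look trivially self-insurable even while the worst case dwarfs the balance sheet.

```python
# Hypothetical figures: routine self-insurance vs. a catastrophic tail.

endowment = 30e9                  # institution's total assets
p_tail, tail_loss = 1e-4, 200e9   # 1-in-10,000 annual chance of a $200bn liability

expected_annual_loss = p_tail * tail_loss    # about $20m: easy to absorb,
                                             # so self-insuring feels safe
uncovered_shortfall = tail_loss - endowment  # $170bn nobody can actually pay

print(expected_annual_loss)
print(uncovered_shortfall)
```

If an insurer had to price this risk, the premium would carry information about the tail that a self-insuring institution’s governance can otherwise ignore.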
I agree that there are some issues regarding the version of the policy that would actually be implemented. This is a large part of the motivation for requiring insurance rather than direct state regulation, and I think this offers a robustness which goes some way towards defusing your concerns.
For example:
> Politically unpopular research is crushed by being deemed dangerous. Obvious targets include research into nuclear power, racial differences, or GMOs.
If there’s just an insurance requirement, it’s hard for extra costs to swell much above the true expected externalities (if it’s safe, they should be able to find someone willing to insure it cheaply).

Yup, I agree again. Though there is still the risk that the political system might manufacture externalities to accuse the researchers of.
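For what it’s worth, the premium-ceiling argument can be put in toy numbers (the 20% loading factor is my assumption): a competitive insurer can’t sustain a price far above expected loss, so genuinely safe work stays cheap to cover.

```python
# Sketch with an assumed loading: competitive premiums track expected loss.

def competitive_premium(p_loss, loss, loading=0.2):
    """Premium as expected loss plus a proportional loading (costs and profit)."""
    return p_loss * loss * (1 + loading)

# Genuinely safe research: a one-in-a-million chance of a $1bn liability.
premium = competitive_premium(1e-6, 1e9)
print(premium)  # roughly $1,200 a year: no real deterrent
```

Any attempt to deem such work dangerous has to fight this price signal, which is where the robustness relative to direct regulation comes from.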
> If Oxford faced a potential liability in the billions, I’m sure it would insure.
The managers of Harvard’s endowment circa 2008 would beg to differ, I think. (It lost about $10 billion, nearly a third of its value.)
It seems like for some of these institutions, how long a view they take is substantially determined by contingent factors, like who happens to be university president at the time.
I’m sorry, I don’t quite understand your point. There’s a huge difference between investment risk, for which you are paid the equity risk premium, and the sorts of things people insure against.