Bioinfohazards

Authors: Megan Crawford, Finan Adamson, Jeffrey Ladish

Special Thanks to Georgia Ray for Editing

Biorisk

Most people in the effective altruism community are aware of a possible existential threat from biological technology, but not much beyond that. The form biological threats could take is unclear. Is the primary threat from state bioweapon programs? From superorganisms accidentally released by synthetic biology labs? Or something else entirely?

If you're not already an expert, you're encouraged to stay away from this topic. You're told that speculating about powerful biological weapons might inspire terrorists or rogue states, and that simply articulating these threats won't make us any safer. The cry of "Info hazard!" shuts down discussion by fiat, and the reasons cannot be explained, since these might also be info hazards. If concerned, intelligent people cannot articulate their reasons for censorship and cannot coordinate around principles of information management, that is itself a cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well-intentioned ignorance.

We believe that well-reasoned principles and heuristics can help solve this coordination problem. The goal of this post is to carve up the information landscape into areas of relative danger and safety: to illuminate some of the islands in the mire that contain more treasures than traps, and to help you judge where discussion is likely to be more destructive than constructive.

Useful background for this post: Bostrom's typology of information hazards and the Unilateralist's Curse, both of which come up below.

Much of the material here also overlaps with Gregory Lewis' Information Hazards in Biotechnology article, which we recommend.

Risks of Information Sharing

We've divided this post into two broad categories: risks from information sharing, and risks from secrecy. First we will go over the ways in which sharing information can cause harm, and then the ways in which keeping information secret can cause harm.

We believe considering both is important when deciding whether or not to share a particular thought or paper. To keep things relatively targeted and concrete, we provide illustrative toy examples, and sometimes real ones.

This section categorizes ways that sharing information in the biological sciences can be risky.

A topic covered in other Information Hazard posts that we chose not to focus on here is that different audiences can present substantially different risk profiles for the same idea.

With some ideas, almost all of the benefits and de-risking associated with sharing can be achieved by only mentioning your idea to one key researcher, or sharing findings in a journal associated with some obscure subfield, while simultaneously dodging most of the risk of these ideas finding their way to a foolish or bad actor.

If you're interested in that topic, Gregory Lewis' paper Information Hazards in Biotechnology is a good place to read about it.

Bad conceptual ideas to bad actors

A bad actor gets an idea they did not previously have

Some ways this could manifest:

  • A bad actor uses these new ideas to create novel biological weapons or strategies.

  • State bioweapons programs or bioterrorists gain new research directions or ideas.

Why might this be important?

State or non-state actors may have trouble developing ideas on their own. Model generation can be quite difficult, so generating or sharing clever new models can be risky. In particular, we are concerned about the possibility of ideas moving from biology researchers to bioterrorists or state actors. Biosecurity researchers are often better-educated and/or more creative than most bad actors. There are also probably many more researchers than people interested in bioterrorism, and this difference in numbers may matter even more: if there are more biosecurity researchers than bad actors, researchers are likely to come up with many more ideas.

Examples

  • Toy example: A biosecurity researcher writes and publishes a paper about vulnerabilities in the water supply of Exemplandia and a biological agent, Sickmaniasis, that could be used to terrorize Exemplandia. Bioterrorists read the paper and decide to carry out an attack. A bioterrorist researches how to manufacture Sickmaniasis and how to disseminate it into the water supply of Exemplandia, and carries out the attack.

Bad conceptual ideas to careless actors

A careless actor gets an idea they did not previously have

Some ways this could manifest:

  • A careless actor decides either to explore an idea publicly in further detail or to implement it, not realizing or caring about the damage it could cause.

Why might this be important?

Careless actors may be unlikely to have a given interesting idea on their own, but might have the inclination and ability to implement an idea if they hear about it from someone else. One reason this might be true is that biosecurity researchers could specifically be looking for interesting possible threats, so the "interesting idea" space they explore will focus more heavily on risky ideas.

Examples

  • Toy example 1: A biosecurity researcher publishes a report about vulnerabilities in the water supply of Exemplandia and a biological agent, Sickmaniasis, that could be used to terrorize Exemplandia. Another researcher writes a paper that explores specific possible implementations of Sickmaniasis, including sequence information and lab procedures for generating Sickmaniasis. In this case of the Unilateralist's Curse, both security researchers were motivated by the desire to prevent some kind of harm, but the first researcher was specifically more careful about publishing methods.

  • Toy example 2: A researcher publishes a report on how to use a gene drive to drive an insect species extinct. A careless researcher uses this report to create a gene drive in a lab on a test population of that insect species. Some insects escape from the lab, and the wild insect population crashes. Even though the original researcher's lab was very careful with test implementations of their gene drives, the information they produced led to a careless lab crashing the population of a whole species.

  • Real Example: In 1997, rabbit hemorrhagic disease (RHD) began to spread through New Zealand. Authorities believe that New Zealand farmers smuggled the disease into the country and released it intentionally as an animal control measure. RHD was used in Australia as a biocontrol tool, and organizations had attempted to get the New Zealand government to approve it for use there. The virus began to spread after their application was denied. This is a case where the authorities who reviewed a biological tool for use decided it was a bad idea; despite their disapproval, someone released it anyway. This wasn't a human pathogen, but the demonstrated potential for a unilateral actor to decide to release a banned disease agent and succeed is troubling all the same. We'd like to reiterate that unsanctioned pest control using disease is A BAD IDEA!

Implementation details to bad actors

A bad actor gains access to details (but not an original idea) on how to create a harmful biological agent

Some ways this could manifest:

  • A bad actor exploits this newly available information to create a weapon they previously did not have the knowledge or ability to create, even though they already knew of the potential attack vector.

  • Someone with the intent to produce a potentially-dangerous agent, but not the means or knowledge, is granted access to supplies and/or knowledge that allows them to develop a dangerous biological product.

Why might this be important?

The bad actor would not have been able to easily generate the instructions to create the harmful agent without the new source of information. As DNA synthesis and lab automation technology improves, the bottleneck to the creation of a harmful agent is increasingly knowledge and information rather than applied skill. Technical knowledge and precise implementation details have historically been a bottleneck for bioweapons programs, particularly terrorist or poorly-funded programs (see Barriers to Bioweapons by Sonia Ben Ouagrham-Gormley).

Examples

  • Toy example: A researcher publishes the information for how to reconstruct an extinct and deadly human virus. A bioterrorist or state bioweapon program uses this information to recreate the extinct virus and weaponizes it.

  • Real Example: It's no secret that the smallpox genome is available online. It's quite conceivable that a country could fund a program to reconstruct the virus from this information. It's also not impossible that this has already happened in secret.

Implementation details to careless actors

A little knowledge is a dangerous thing

Some ways this could manifest:

  • Careless actors who might otherwise have had very little likelihood of creating or releasing anything particularly hazardous gain access to methods or equipment that increase this likelihood.

  • A careful researcher offhandedly mentions a potentially valuable line of research that they chose not to pursue due to its potentially catastrophic downsides, inspiring an overly optimistic colleague to pursue it.

Why might this be important?

Many new technologies (especially in biology) may have unintended side effects. Microscopic organisms can proliferate, and that may get out of hand if procedures are not followed carefully. Sometimes a tentative plan, which might or might not be a good idea, is perceived as a great plan by someone less familiar with its risks. The more careless actor may then take steps to implement the plan without considering the externalities.

As advanced lab equipment becomes cheaper and more accessible, and as more non-academic labs open up without the highly cautious pro-safety incentives of academia, we might expect to see more experimenters who neglect to practice appropriate safety procedures. We might even see more experimenters who fell through the cracks and never learned these procedures in the first place. How bad a development this is depends on precisely what those labs are working on and the quality of their self-supervision.

Second-degree variant: Dangerous implementation knowledge is given to someone who is likely to distribute it, which might later result in a convergence of intent and means in a single individual, either a careless or a malicious actor, who produces a dangerous biological product. Some examples of possible distributors might be a person whose job rewards the dissemination of information, or a person who chronically underestimates risks.

This risk makes it important to keep in mind what incentives people have to share information, and whether those incentives might incline them to share information hazards.

Examples

  • Toy Example 1: A civilian hears about how CRISPR can remove viruses from cells, buys himself some tools, and injects himself with an untested DIY herpes 'cure.' He doesn't actually cure his herpes, but he does accidentally edit his germline or give himself cancer. There is a massive social backlash towards synthetic biology, and the FDA shuts down multiple scientific attempts at a herpes cure that used superficially similar methods but had much higher odds of success.

  • Toy Example 2: An undergrad lab assistant tests out adding a plasmid to E. coli for a novel protein that she heard about at a conference. She fails to note that the original paper included a few non-prominent sentences on the necessity of only transforming varieties with a genetic kill-switch, due to a strong suspicion that this gene considerably increases the hardiness of E. coli. Further carelessness results in this E. coli getting out and multiplying outside of the lab. Eventually, the hardiness gene is picked up by a human pathogen.

  • Real Example: A biohacker, among other exploits, injected himself with an agent meant to enhance muscle growth. This likely spurred others to take dangerous risks, and the CEO of a biotech company later injected himself with an untested herpes treatment.

  • Toy Example (Second-Degree Variant): A researcher discovers a way to make Azure Death transmissible from guinea pigs to humans and tells a journalist in order to warn pet owners. The journalist, wanting to credit the researcher for the discovery, publishes the researcher's work and widely spreads their methods.

Information vulnerable to future advances

Information that is not currently dangerous becomes dangerous

Some ways this could manifest:

  • Future tech could turn previously safe information into dangerous information.

  • Technological advances or economies of scale could alter the capabilities we could reasonably expect even a low-competence actor to have access to.

Why might this be important?

Technological progress can be difficult to predict. Sometimes there are major advances in technology that allow for new capabilities, such as rapidly sequencing and copying genomes. Could the information you share be dangerous in 5 years? 10? 100? How does this weigh against how useful the information is, or how likely it is to become public soon anyway?

Examples

  • Toy Example 1: After future technology makes the discovery of new and functional enzymes much easier, conceptual ideas for bioweapons that previously required highly specialized knowledge to implement become extremely hazardous.

  • Toy Example 2: A new culturing technique makes it drastically easier and cheaper to grow not only harmless bacterial cells but also pathogenic ones. Suddenly, a paper published on the highly specific culturing procedures for a finicky but dangerous pathogen is useful to non-specialists.

  • Real example: The smallpox genome was published online. Later, DNA printing became cheap and easy to use. Publishing the smallpox genome online wasn't particularly dangerous when it happened: humanity hadn't yet developed the technology to print organisms from scratch, and genetic engineering methods were much less precise. Now, access to the smallpox genome could allow bad actors with sufficient know-how and technology to print it and use it as a bioweapon.

Risk of Idea Inoculation

Presenting an idea causes people to dismiss risks

Some ways this could manifest:

  • Presenting a bad version of a good idea can cause people to dismiss it prematurely and not take it seriously even when it's later presented in a better form.

Why might this be important?

Trying to change norms can backfire. If the first people presenting a measure to reduce the publication of risky research are too low-prestige to be taken seriously, no effect might actually be the best-case scenario. An idea that is associated with disreputable people or hard-to-swallow arguments may itself start being treated as disreputable, and face much higher skepticism and hostility than if better, proven arguments had been presented first.

This is almost the inverse of the Streisand effect, which appears to derive from similar psychological principles. In the case of the Streisand effect, attempts to remove information are what catapult it into public consciousness. In the case of idea inoculation, attempts to publicize an idea ensure that the concept is ignored or dismissed out of hand, with no further consideration given to it.

It also connects in interesting ways with Bostrom's Schema.[1]

Examples

  • Toy Example 1: A biohacker attempts to use CRISPR to alter their genome to produce more of the hormone incredulin. It doesn't work, and they give themselves cancer. The story gets popularized in the media, and lawmakers prevent useful research on the uses of CRISPR.

  • Toy Example 2: An overly enthusiastic crackpot biologist over-promises some huge advancement in the next 2 years and ends up plastered across the media. Once he's revealed as a fraud, suddenly no funding agencies want to touch the field, even though other people in this specialty are still doing meaningful, realistic work.

Some Other Risk Categories

This list is not exhaustive, and we chose to lean concrete rather than abstract.

There were a few important-but-abstract risk categories that we didn't think we could easily do justice to while keeping them succinct and concrete. We felt that several were already implied in a more concrete way by the categories we did keep, but that they encompass some edge cases that our schemas don't capture. They at least warrant a mention and description.

One is the "Risk of Increased Attention," what Bostrom calls "Attention Hazard." This is naturally implied by the four "ideas/actors" categories, but in fact covers a broader set of cases. An area we focused on less is the set of circumstances in which even useful ideas, combined with smart actors, can eventually lead to unintuitive but catastrophic consequences if given enough attention and funding. This is best exemplified in the fears about the rate of development of and investment in AI. It's also partially exemplified in "Information vulnerable to future advances."

The other is "Information Several Inferential Distances Out Is Hazardous." This is a superset of "Information vulnerable to future advances," but it also encompasses cases where it's merely a matter of extending an idea out a few further logical steps, not just technological ones.

For both, we felt they partially overlapped with the examples already given, and leaned a bit too abstract and hard to model for this post's focus on concrete examples. However, we think there's still a lot of value in these important, abstract, and complete (but harder-to-use) schemas.

Risks from Secrecy

We've talked above about many of the risks involved in information hazards. We take the risks of sharing information hazards seriously, and think others should as well. But in the effective altruism community, we have observed that people often neglect the flip side.

Conversations about risks from biology get shut down and turn into discussions of infohazards, even when the information being shared is already available. There is something to be said for not spreading information further, but shutting down the discussions of people looking for solutions also has downsides.

Leaving it to the experts is not enough when there may not be a group of experts thinking about the problem and coming up with solutions. We encourage people who want to work on biorisk to think about both the value and the risks of sharing potentially dangerous information. Below we go through the risks, or loss of value, from not sharing information.

A holistic model of information sharing will include weighing both the risks and benefits of sharing information. A decision should be made having considered how the information might be used by bad or careless actors AND how valuable the information is for good actors to further research or coordinate to solve a problem.
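To make that weighing concrete, here is a minimal, deliberately oversimplified sketch in Python of an expected-value comparison for a disclosure decision. All of the probabilities, impact magnitudes, and outcome categories are hypothetical placeholders invented for illustration, not estimates we endorse, and real decisions also hinge on factors this toy model ignores, such as audience choice and the chance the information surfaces anyway.

```python
# Toy expected-value sketch for deciding whether to share a piece of information.
# Every number below is an illustrative placeholder, not a real estimate.

def expected_value(outcomes):
    """Sum of probability-weighted impacts over (probability, impact) pairs."""
    return sum(p * impact for p, impact in outcomes)

# Hypothetical downsides of sharing (negative impacts).
harms_if_shared = [
    (0.001, -1_000_000),  # a bad actor exploits the idea
    (0.010, -10_000),     # a careless actor misuses it
]

# Hypothetical upsides of sharing (positive impacts).
benefits_if_shared = [
    (0.30, 50_000),       # defenders develop countermeasures sooner
    (0.10, 20_000),       # risky work elsewhere gets noticed and stopped
]

net = expected_value(benefits_if_shared) + expected_value(harms_if_shared)
print(f"Net expected value of sharing: {net:+,.0f}")
# A clearly positive number suggests sharing (perhaps to a narrow audience first);
# a clearly negative one suggests holding back or sharing more selectively.
```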

Risk of Lost Progress

Closed research culture stifles innovation

Some ways this could manifest:

  • Ignorance is the default outcome. If secretiveness ensures that nothing is added to the knowledge and work of a field, beneficial progress is unlikely to be made.

Why might this be important?

Good actors need information to develop useful countermeasures. In a world where researchers cannot communicate their ideas to each other, model generation becomes more difficult and the field is less able to build up good defensive systems.

Examples

  • Toy Example 1: New information is learned about a recently-discovered virus, indicating it is more dangerous and has greater pandemic potential than originally thought. This information is not shared on the grounds that it could inspire others to weaponize the virus. As a result, lab safety procedures for working with the virus are not updated.

  • Toy Example 2: Vaccines are not produced because researchers don't have access to information about dangerous organisms.

  • Toy Example 3: A dangerous scenario is never discussed among good actors avoiding infohazards. Bad actors don't avoid thinking about infohazards, so they create novel bioweapons that could have been prepared for if a discussion had occurred.

  • Toy Example 4: The public is unaware of risks, so politicians don't fund programs that develop critical infrastructure for defending against pathogens (see the US government defunding programs like those at the USDA).

Dangerous work is not stopped

Information is not shared, so risky work is not stopped

Some ways this could manifest:

  • Areas with stronger privacy norms, such as industry, may have incentives to hide details about their work. If the risks associated with a particular project are not open information, these risks may be missed or ignored by others engaging in the same work.

  • If labs maintain a high standard of secrecy by default, it can be hard for governmental or academic overseers to notice which labs should receive more scrutiny.

Why might this be important?

Some fields of research are dangerous, or may eventually become dangerous. It is much harder to prevent a class of research if the dangers posed by that research cannot be discussed publicly.

Informal social checks on the standards or behavior of others seem to serve an important, and often underestimated, function as a monitoring and reporting system against unethical or unsafe behaviors. It can be easy to underestimate how much the objections of a friend can shift the way you view the safety of your research, as they may bring up a concern you didn't even think to ask about.

There are also entities with a mandate to do formal checks, and it is dangerous if they are left in the dark. Work environments, labs, or even entire fields can develop their own unusual work cultures. Sometimes these cultures systematically undervalue a type of risk because of its disproportionate benefits to them, even if the general populace would object. Law enforcement, lawmakers, public discussion, reporting, and entities like ethical review boards are intended to intervene in these sorts of cases, but have no way to do so if they never hear about a problem.

Each of these entities has its strengths and weaknesses, but a world without whistleblowers, or one where no one can reach anyone capable of changing these environments, is likely to be a more dangerous world.

Examples

  • Toy Example: An academic decides not to publish a paper about the risks of researching a particular strain of bacteria due to high rates of escape from seemingly quarantined labs. Researchers elsewhere begin research on the bacteria, but with lax containment, because they were unaware of the risks.

  • Real Almost-Example: In 1972 (a year before the Asilomar Conference), grad student Janet Mertz mentioned to other grad students that her lab might try to use a virus to put bacterial DNA into mammalian cells. Pollack told Berg (her supervisor) he should "put genes into a phage that doesn't grow in a bug that grows in your gut," and reminded him that SV40 is a small-animal tumor virus that transforms human cells in culture and makes them look like tumor cells. Prior to that discussion, her lab had not fully thought through the potentially dangerous implications of that research.

  • Real Example: The true source of the Rajneeshee Salmonella poisonings was only uncovered when a leader of the cult publicly expressed concern about the behavior of one of its members and explicitly requested an investigation into their laboratory.

Risk of Information Siloing

Siloing information leaves individual workers blind to what their work adds up to

Some ways this could manifest:

  • It can be more difficult to prevent harm when the systems capable of producing it are not well understood by the participants. If you have processes of production or research where labor is specialized and distributed, moral actors may not notice when they are producing something harmful.

Why might this be important?

Lab work seems to be increasingly automated, or outsourced piecemeal. At the same time, the biotechnology industry has an incentive to be secretive with any pre-patent information it uncovers. Without additional precautions, secretive assembly-line-esque offerings increase the likelihood that someone could order a series of steps that look harmless in isolation but create something dangerous when combined.

Catalyst Biosummit

By the way, the authors are part of the organizing team for the Catalyst Biosecurity Summit, which will bring together synthetic biologists and policymakers, academics and biohackers, and a broad range of professionals invested in biosecurity for a day of collaborative problem-solving. It will be held in February 2020; we haven't locked down a specific date yet, but you can sign up for updates here.

Examples

  • Toy Example 1: A platform outsources lab work while granting buyers a high degree of privacy. No individual worker in the assembly line is able to piece together that they are producing a dangerous biological agent until it has already been produced and released.

  • Toy Example 2: Diagnosis of novel diseases takes longer because knowledge of diseases was hidden.

  • Real Example 1: Researchers put together a bird flu that was airborne and killed ferrets. They didn't create any mutations that didn't already exist in the wild; they just combined existing mutations in a way that nature hadn't yet, though it could happen naturally through recombination. The American and Dutch governments banned publication of papers containing their methods. Had the researchers been allowed to publish, their work could have given other scientists more information with which to develop a vaccine. The Americans have since reversed their decision on the ban.

  • Real Example 2: The Guardian successfully ordered part of the smallpox genome, delivered to a residential address, from a bioprinting company.

  • Real Example 3: A DOD lab accidentally sent live anthrax to many labs. The CDC and other organizations have made similar mistakes.

Barriers to Funding and New Talent

Talented people don't go into seemingly empty or underfunded fields

Some ways this could manifest:

  • A culture of secrecy can serve as a stumbling block for early-career researchers interested in entering a field. It can make it more challenging to locate information, funding, and aligned mentors, and this can deter people who might otherwise be interested in making a career out of solving an important problem.

Why might this be important?

While many researchers and policymakers work in biosecurity, there is a shortage of talent applied to longer-term and more extreme biosecurity problems. There have been only limited efforts to attract top talent to this nascent field.

This may be changing. The Open Philanthropy Project has begun funding projects focused on Global Catastrophic Biorisk, and has provided funding for many individuals beginning their careers in the field of biosecurity.

Policies that require heavy oversight, or that add procedures increasing the cost of doing research, leave fewer opportunities for people who want to make a positive difference.

Examples

  • Toy Example: A talented biology graduate looks at EA discussions and notices a lack of engagement with the most important biosecurity risks for the far future. They decide the EA community isn't taking far-future concerns seriously and apply their skills elsewhere.

  • Real Example: Labs opt out of valuable pathogen research because regulations increase operating costs and the time costs of workers (Wurtz et al.). This leads to fewer places to learn and fewer job opportunities for people who want to prevent harmful pathogens.

Streisand Effect

Suppressing information can cause it to spread

Some ways this could manifest:

  • Attempting to suppress information can sometimes cause it to spread further than it would have otherwise. Many people's response to even well-advised attempts at information suppression is to directly or indirectly increase the visibility of the event by discussing it or spreading the underlying information itself.

Why might this be important?

The Streisand effect is named after an incident in which attempts to have photographs taken down led to a media spotlight and widespread discussion of those same photos. The photos had previously been posted in a context where only one or two people had taken enough of an interest to access them.

Something analogous could very easily happen with a paper outlining something hazardous in a research journal, or with an online discussion. The audience may originally have been quite targeted, simply due to the nicheness or obscurity of the original context. But an attempt at calling for intervention leads to a public discussion, which spreads the original information. This could be viewed as one of the possible negative outcomes of poorly targeted whistleblowing.

As mentioned in the section on idea inoculation, this effect is functionally idea inoculation's inverse and is based on similar principles.

Examples

  • Toy example: An online discussion group has policies for handling information that some view as overly restrictive. The frustrated people start a new online discussion group with overly permissive infohazard guidelines.

  • Real Examples of the Streisand effect: Barbra Streisand's attempts to remove a photo of her seaside mansion from a large database of California coastline photos catapulted that photo to fame. See also: the Roko's Basilisk incident, and "Why the Lucky Stiff"'s infosuicide.

  • Real Bio Examples of the Streisand effect: In all likelihood, more people know that the smallpox genome is/was public due to the attempts to suppress it than from organic searches. Relatedly, some dangerous people might have assumed that printed DNA was carefully and successfully monitored if there weren't so many articles about how sometimes it isn't.

Conclusion

Overall, we think biosecurity in the context of catastrophic risks has been underfunded and underdiscussed. There has been positive development in the time since we started this post: the Open Philanthropy Project is aware of funding problems in the realm of biosecurity and has been funding a variety of projects to make progress on it.

It can be difficult to know where to start helping in biosecurity. In the EA community, we want to weigh the costs and benefits of philanthropic actions, but in biosecurity that is made more difficult by the need for secrecy.

We hope we've given you a place to start and factors to weigh when deciding whether to share a particular piece of information in the realm of biosecurity. We think the EA community has sometimes erred too much on the side of shutting down discussions of biology by turning them into discussions about infohazards. It's possible EA is being left out of conversations and decision-making processes that could benefit from an EA perspective. We'd like to see collaborative discussion aimed at possible actions or improvements in biosecurity, in which the risks and benefits of the information are considered but are not the central point of the conversation.

It's a big world with many problems to focus on. If you prefer to focus your efforts elsewhere, feel free to do so. But if you do choose to engage with biosecurity, we hope you can weigh risks appropriately and choose the conversations that will lead to many talented collaborators and a world safer from biological risks.

Sources


  1. Connecting "Risk of Idea Inoculation" with Bostrom's Schema: this could be seen as a subset of Attention Hazard and a distant cousin of Knowing-Too-Much Hazard. Attention Hazard encompasses any situation where drawing too much attention to a set of known facts increases risk, and the link is obvious. In Knowing-Too-Much Hazard, the presence of knowledge makes certain people a target of dislike. However, in Idea Inoculation, people's dislike for your incomplete version of the idea rubs off onto the idea itself. ↩︎