Why making asteroid deflection tech might be bad

In this post I will quantify the risk of natural asteroid/comet impacts, and summarise the argument made by Carl Sagan and Steven Ostro that developing asteroid deflection technology could be a net harm, as it would enable us to accidentally or intentionally deflect otherwise harmless asteroids into Earth. They argue that this increased risk likely outweighs the natural risk of asteroid impacts.

Cross posted from my blog here.

A video version of this is available here.

Introduction

Approximately 66 million years ago, a 10 km sized body struck Earth, and was likely one of the main contributors to the extinction of many species at the time. Bodies 5 km or larger impact Earth on average every 20 million years (one might say we are overdue for one, but then one wouldn’t understand statistics). Asteroids 1 km or larger impact Earth every 500,000 years on average. Smaller bodies which can still do considerable local damage occur much more frequently (10 m wide bodies impact Earth on average every 10 years). It seems reasonable to say that only the first category (>~5 km) poses an existential threat; however, many others pose major catastrophic threats*.
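To make those rates concrete, here is a minimal sketch that treats impacts as a memoryless Poisson process (an assumption, though a standard one for rare, independent events) and converts the average recurrence intervals above into the probability of at least one impact over a given horizon:

```python
# Minimal sketch: convert mean recurrence intervals into impact probabilities,
# assuming impacts follow a memoryless Poisson process.
import math

mean_interval_years = {
    ">5 km": 20_000_000,  # ~every 20 million years
    ">1 km": 500_000,     # ~every 500,000 years
    ">10 m": 10,          # ~every 10 years
}

def impact_probability(mean_interval: float, horizon_years: float) -> float:
    """P(at least one impact within the horizon) = 1 - exp(-horizon / mean interval)."""
    return 1.0 - math.exp(-horizon_years / mean_interval)

for size, interval in mean_interval_years.items():
    print(f"{size}: {impact_probability(interval, 100):.4%} chance in the next century")
```

This is also why the ‘overdue’ intuition fails: under a memoryless process, the chance of an impact in the next century is the same whether the last one was yesterday or ten million years ago.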

Given the likelihood of an asteroid impact (from here on I use the word asteroid instead of asteroid and/or comet for the sake of brevity), some argue that further improving detection and deflection technology is critical. Matheny (2007) estimates that, even if asteroid extinction events are improbable, asteroid detection/deflection research and development could save a human life-year for $2.50 (US), once the loss of future human generations from such an event is taken into account. Asteroid impact mitigation is not thought to address the most pressing existential threats (artificial intelligence and global pandemics are usually considered more urgent), and yet it already seems to have a better return on investment than the best now-centric human charities (though not non-human charities; I am largely ignoring non-humans here for simplicity and the sake of argument).
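The figure comes out so low because of the structure of the estimate: a tiny annual probability multiplied by an enormous number of future life-years at stake. The sketch below shows that structure only; every number in it is a hypothetical placeholder, not one of Matheny’s inputs.

```python
# Structure of a cost-per-life-year estimate for asteroid mitigation.
# All inputs are hypothetical placeholders, NOT figures from Matheny (2007).
programme_cost_usd = 20e9              # assumed total cost of detection/deflection R&D
annual_extinction_probability = 1e-8   # assumed annual chance of an extinction-class impact
risk_reduction = 0.5                   # assumed fraction of that risk the programme removes
programme_lifetime_years = 100         # assumed period over which the programme is effective
life_years_lost_if_extinct = 1e16      # assumed future life-years lost in an extinction

expected_life_years_saved = (annual_extinction_probability
                             * programme_lifetime_years
                             * risk_reduction
                             * life_years_lost_if_extinct)

print(f"~${programme_cost_usd / expected_life_years_saved:.2f} per life-year saved")
```

The takeaway is that the result is driven almost entirely by the size of the future-life-years term, not by the precise programme cost.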

The purpose of this article is to explore a depressing cautionary note in the field of asteroid impact mitigation. As we improve our ability to detect and (especially) deflect asteroids with an Earth-intersecting orbit away from Earth, we also improve our ability to deflect asteroids without an Earth-intersecting orbit into Earth. This idea was first explored by Steven Ostro and Carl Sagan, and I will summarise their argument below.

Asteroid deflection as a DURC

Dual-use research of concern (DURC) refers to research in the life sciences that, while intended for public benefit, could also be repurposed to cause public harm. One prominent example is disease and contagion research (it can improve disease control, but can also be used to spread disease more effectively, either accidentally or maliciously). I will argue here that the DURC concept can and should be applied to any technology with this kind of dual-use potential.

Ostro and Sagan (1998) proposed that asteroid impacts could act as a double-edged explanation for the Fermi paradox (why don’t we see any evidence of extraterrestrial civilisations?). The argument goes as follows: species that don’t develop asteroid deflection technology eventually go extinct due to some large impact, while those that do eventually go extinct because they accidentally or maliciously deflect a large asteroid into their planet. This has since been termed the ‘deflection dilemma’.

The question arises: does the likelihood of a large impact increase as asteroid deflection technology is developed, rather than decrease? The most pressing existential and catastrophic threats today seem to be those created by technology (artificial intelligence, nuclear weapons, global pandemics, anthropogenic global warming) rather than natural events (asteroid impacts, supervolcanoes, gamma ray bursts). Humanity has survived for millions of years (depending on how you define humanity), yet the last 70 years have seen the advent of nuclear weapons and other technologies that could meaningfully cause a catastrophe at any time. It therefore seems possible that the bigger risk is the one created by the technology, not the natural risk it mitigates.

Ostro and Sagan (1994) argue that the development of asteroid deflection technology was, at the time of writing (and presumably still is today), premature, given the track record of global politics.

Who would maliciously deflect an asteroid?

Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or lunar orbit for research or mining purposes (see this now scrapped proposal to bring a small asteroid into lunar orbit), there are two categories of actors that might maliciously deflect such a body: state actors and terrorist groups.

A state actor might be incentivised to authorise an asteroid strike on an enemy or potential enemy in situations where they wouldn’t necessarily authorise a nuclear strike or conventional invasion. For example, consider an asteroid of around 20 m in diameter. Near-Earth asteroids of around this size are often only detected several hours or days before passing between Earth and the Moon. If a state actor could secretly identify such an asteroid before the global community does, it could feasibly send a mission to alter the asteroid’s orbit to intersect with Earth in a way that would not be detected until much too late. Assuming the state actor did its job well enough, it would be very difficult for anyone to lay blame on them, or even to guess that the impact was caused by malicious intent.

An asteroid of this size would be expected to have enough energy to cause an explosion 30 times the strength of the nuclear bomb dropped over Hiroshima in WWII.
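As a rough check on that figure, here is a back-of-envelope sketch. The density and entry speed are assumed typical values for a stony near-Earth asteroid, not figures taken from the sources above:

```python
# Back-of-envelope kinetic energy of a ~20 m asteroid.
# Density and entry speed are assumed typical values, not sourced figures.
import math

diameter_m = 20.0
density_kg_m3 = 3000.0   # assumed stony composition
speed_m_s = 17_000.0     # assumed typical atmospheric entry speed

volume_m3 = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
mass_kg = density_kg_m3 * volume_m3
kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2

kilotons_tnt = kinetic_energy_j / 4.184e12   # 1 kiloton of TNT = 4.184e12 J
hiroshima_kt = 15.0                          # Hiroshima yield, roughly 15 kt
print(f"~{kilotons_tnt:.0f} kt of TNT, ~{kilotons_tnt / hiroshima_kt:.0f}x Hiroshima")
```

With these assumptions the result comes out at roughly 430 kt, about 29 times the Hiroshima yield, broadly consistent with the figure above and with estimates of the ~20 m Chelyabinsk airburst in 2013.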

We can temper the likelihood of this scenario by noting that it is unlikely for a state actor to covertly discover a new asteroid and track its orbit without any other actor discovering it, considering there are transparent organisations working on tracking them. However, is it possible that a government organisation (e.g. NASA) could be ordered not to share information about a new asteroid?

What to do about this problem

Even if we don’t directly develop asteroid deflection technology, it will become easier over time anyway as other technologies progress (e.g. launching payloads becomes cheaper, propulsion systems become more efficient). Other space weapons, such as anti-satellite weapons (direct ascent kinetic kill projectiles or directed energy weapons), space-stored nuclear weapons, and kinetic bombardment (‘rods from God’) will likewise become easier with general improvements in relevant technology.

The question arises: even if a small group of people were to decide that developing asteroid deflection technology causes more harm than good, what could they meaningfully do about it? The idea that developing asteroid deflection technology is good is so entrenched in popular opinion that arguing for less or no spending in the area seems unlikely to gain traction. This is a similar situation to the one AI safety researchers find themselves in. Advocating for less funding and development of AI seems relatively intractable, so they instead work on solutions to make AI safer. Another similar example is pandemic research: it has obvious benefits in building resilience to natural pandemics, but may also enable a malicious or accidental outbreak of an engineered pathogen.

Final thoughts

I have not considered the possibility of altering the orbit of an extinction-class body (~10 km diameter or greater) into an Earth-intersecting orbit. While the damage from this would obviously be much greater, even ignoring considerations about the future generations that would be lost, it would be significantly harder to alter the orbit of such a body. Also, we believe we have discovered all of the bodies of this size in near-Earth orbits (Huebner et al. 2009), so it would be much harder to do this covertly and without risking retaliation (e.g. mutually assured destruction via nuclear weapons). The possibility of altering the orbit of such bodies should still be considered, as they pose an existential risk while smaller bodies do not.

I have also chosen to largely not focus on other types of space weapons (see this book for an overview of space weapons generally) for similar reasons: their potential for dual use is less clear, which in theory makes it harder to justify setting up such technologies in space. It would also be more difficult to make the use of such weapons look like an accident.

Future work

A cost-benefit analysis that examines the pros and cons of developing asteroid deflection technology in a rigorous and numerical way should be a high priority. Such an analysis would compare the expected damage from natural asteroid impacts with the increased risk from developing the technology (and possibly examine the opportunity cost of what could otherwise be done with the R&D funding). An example of such an analysis exists in the space of global pandemic research, which would be a good starting point. I believe it is unclear at this time whether the benefits outweigh the risks, or vice versa (though at this time I lean towards the risks outweighing the benefits, an unfortunate conclusion for a PhD candidate researching asteroid exploration and deflection to come to).
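As a starting point, the skeleton of such a comparison might look like the sketch below. Every input is a placeholder to be replaced by proper estimation work; only the structure is meaningful:

```python
# Skeleton of the proposed cost-benefit comparison.
# All inputs are placeholder values; only the structure is meaningful.
def expected_annual_deaths(annual_probability: float, deaths_if_event: float) -> float:
    return annual_probability * deaths_if_event

# Benefit: natural impact risk removed by having deflection capability.
natural_risk_removed = expected_annual_deaths(
    annual_probability=1e-6,  # placeholder: chance per year of a damaging natural impact
    deaths_if_event=1e6,      # placeholder: deaths from such an impact
) * 0.5                       # placeholder: fraction of that risk the capability removes

# Cost: added risk of accidental or malicious deflection enabled by the same capability.
misuse_risk_added = expected_annual_deaths(
    annual_probability=1e-7,  # placeholder: chance per year of a misdirected asteroid
    deaths_if_event=1e7,      # placeholder: deaths from an engineered impact
)

print(f"Net expected lives saved per year: {natural_risk_removed - misuse_risk_added:+.2f}")
```

A full analysis would also need to account for tail risks to future generations and the opportunity cost of the R&D funding, both of which this skeleton omits.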

The technical feasibility of deflecting an asteroid into a specific target (e.g. a city) should also be examined, though such analysis comes with drawbacks (see the information hazard disclaimer below).

We should also consider policy and international cooperation measures that could be put in place today to reduce the likelihood of accidental or malicious asteroid deflection occurring.

Information hazard disclaimer

An information hazard is “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” Much of the research into the risk side of DURCs could be considered an information hazard. For example, a paper that demonstrates how easy it might be to engineer and release an advanced pathogen, written with the intent of raising concern, could make it easier for someone to do just that. It even seems plausible that publishing such a paper could cause more harm than good. Similar research into asteroids as a DURC would have the same issue (indeed, this post itself could be an information hazard).

* An ‘existential threat’ typically refers to an event that could kill either all human life, or all life in general. A ‘catastrophic threat’ refers to an event that would cause substantial damage and suffering, but wouldn’t be expected to kill all humans; humanity would be expected to eventually rebuild.