Information security careers for GCR reduction

Update 2019-12-14: There is now a Facebook group for discussion of infosec careers in EA (including for GCR reduction); join here.

This post was written by Claire Zabel and Luke Muehlhauser, based on their experiences as Open Philanthropy Project staff members working on global catastrophic risk reduction, though this post isn’t intended to represent an official position of Open Phil.

Summary

In this post, we summarize why we think information security (preventing unauthorized users, such as hackers, from accessing or altering information) may be an impactful career path for some people who are focused on reducing global catastrophic risks (GCRs).

If you’d like to hear about job opportunities in information security and global catastrophic risk, you can fill out this form created by 80,000 Hours, and their staff will get in touch with you if something might be a good fit.

In brief, we think:

  • Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity.

  • More generally, security expertise may be useful for those attempting to reduce GCRs, because such work sometimes involves engaging with information that could do harm if misused.

  • We have thus far found it difficult to hire security professionals who aren’t motivated by GCR reduction to work with us and some of our GCR-focused grantees, due to the high demand for security experts and the unconventional nature of our situation and that of some of our grantees.

  • More broadly, we expect there to continue to be a deficit of GCR-focused security expertise in AI and biosecurity, and that this deficit will result in several GCR-specific challenges and concerns being under-addressed by default.

  • It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates who fit their needs (and would hire them now, if they found them).

  • It’s plausible that some people focused on high-impact careers (as many effective altruists are) would be well-suited to helping meet this need by gaining infosec expertise and experience and then moving into work at the relevant organizations.

  • If people who try this don’t get a direct-work job but do gain the relevant skills, they could still end up in a highly lucrative career in which their skillset would be in high demand.

We explain below.

Risks from Advanced AI

As AI capabilities improve, leading AI projects will likely be targets of increasingly sophisticated and well-resourced cyberattacks (by states and other actors) which seek to steal AI-related intellectual property. If these attacks are not mitigated by teams of highly skilled and experienced security professionals, they seem likely to (1) increase the odds that TAI / AGI is first deployed by malicious or incautious actors (who acquired world-leading AI technology by theft), and (2) exacerbate and destabilize potential AI technology races, which could lead to dangerously hasty deployment of TAI / AGI, leaving insufficient time for alignment research, robustness checks, etc.[1]

As far as we know, this is a common view among those who have studied questions of TAI / AGI alignment and strategy for several years, though there remains much disagreement about the details, and about the relative magnitudes of different risks.

Given this, we think a member of such a security team could do a lot of good if they are better than their replacement and/or understand the full nature of the AI safety and security challenge better than their replacement (e.g. because they have spent many years thinking about AI from a GCR-reduction angle). Furthermore, being a member of such a team may be a good opportunity to have a more general positive influence on a leading AI project, for example by providing additional demand and capacity for addressing accident risks in addition to misuse risks.

Somewhat separately, there may be substantial use for security expertise in a research context (rather than an implementation context). For example:

  • Some researchers think that security expertise and/or a “security mindset” of the sort often possessed by security professionals (perhaps in part as a result of professional training and experience) is important for AI alignment research in a fairly general sense.[2]

  • Some researchers think that one of the most plausible pre-AGI paths by which AI might have “transformative”-scale impact is via the automation of cyber offense and cyber defense (and perhaps one more than the other), and GCR-focused researchers with security expertise could be especially useful for investigating this possibility and related strategic questions.

  • Safe and beneficial development and deployment of TAI / AGI may require significant trust and cooperation between multiple AI projects and states. Some researchers think that such cooperative arrangements may benefit from (potentially novel) cryptographic solutions for demonstrating to others (and verifying for oneself) important properties of leading AI projects (e.g. how compute is being used). Potentially relevant techniques include zero-knowledge proofs, secure multi-party computation, differential privacy methods, or smart contracts.[3] (E.g. see the explorations in Martic et al. 2018.) A toy illustration of one of these techniques appears below.
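
To give a flavor of one of the techniques named above, here is a minimal, purely illustrative sketch of the Laplace mechanism from differential privacy, applied to a hypothetical scenario in which an AI project discloses an aggregate compute-usage figure without revealing exact per-run numbers. The scenario, function names, and numbers are illustrative assumptions, not a description of any real scheme; actual mechanisms for verifying properties of AI projects would be far more involved.

```python
import numpy as np


def private_total(values, epsilon, max_contribution):
    """Release the sum of `values` with epsilon-differential privacy.

    Each (assumed non-negative) value is clipped to `max_contribution`,
    which bounds the sum's sensitivity; Laplace noise with
    scale = sensitivity / epsilon then protects any single record.
    """
    clipped = [min(float(v), max_contribution) for v in values]
    noise = np.random.laplace(loc=0.0, scale=max_contribution / epsilon)
    return sum(clipped) + noise


# Hypothetical per-training-run GPU-hour counts (made-up numbers).
gpu_hours = [1200.0, 800.0, 450.0, 3000.0]
print(round(private_total(gpu_hours, epsilon=0.5, max_contribution=5000.0)))
```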

Biosecurity and biorisk

Efforts to reduce biorisks may involve working with information on particular potential risks and strategies for reducing them. In general, information[4] generated for the purpose of predicting the actions of, or thwarting, a bad actor may be of interest to that actor. This information could cause harm if potential bioterrorists or states aiming to advance or initiate bioweapons programs obtain it. Concerns about these kinds of information hazards hamper our and our grantees’ ability to study important aspects of biorisk.[5]

For example, someone studying countermeasure research and development for different types of pathogens might uncover and take note of vulnerabilities in existing systems for the purposes of patching those vulnerabilities, but could inadvertently inform a bad actor about weaknesses in the current system.

Our impression is that many people in the national security community who focus on biosecurity believe that some state bioweapon programs are currently operating,[6] and we worry that these programs may expand as advances in synthetic biology facilitate the development of more sophisticated and/or inexpensive bioweapons (making these programs more appealing from the perspective of a state). We also think state actors are the ones most likely to execute sophisticated cyberattacks.

Because of the above, we expect security work in this space to be very important but potentially very challenging.

Our experience

In February 2018, Open Phil began a preliminary search for a full-time information security expert to help our grantees with the above issues. We hoped to find someone who could work on assessing the feasibility of different security measures and their plausible effect size as deterrents, assisting grantees in implementing security measures, and helping build up the field of infosec experts trying to reduce GCRs. So far, our search has been unsuccessful.

Why do we think our preliminary search has been challenging, and why do we expect this difficulty to continue and to apply to our grantees as well?

  • We’ve consistently heard, from relatively senior security professionals and candidates for our role, that it’s a “seller’s market”, and thus generally challenging and expensive (in funds and time) to attract top talent.

  • Specifically, our impression is that talented security experts often have many attractive job options to choose from, often involving managing large teams to handle the security needs of very large-scale, intellectually engaging projects, with pay in the six-to-seven-figure range.

  • Our situation and needs (and those of some of our grantees) are unconventional, and working with us likely won’t confer as much prestige or career capital in the field as other options we’d expect a talented potential hire to have (e.g. taking a job at a large tech company).

  • Our needs are also varied, and may not cleanly map to a well-recognized job profile (e.g. Security Analyst or Chief Information Security Officer), making the option less attractive to risk-averse candidates.

  • Our context in the field is limited, which makes attracting and evaluating candidates more challenging for us. (An additional benefit of more GCR-focused people entering the space is that we’d likely end up with trusted advisors who understand our situation and constraints, and can help us assess the talent and fit of others.)

  • We’re particularly cautious about hiring someone we think is likely to end up with access to sensitive information and knowledge of the vulnerabilities of relevant systems.

  • And, as a funder, Open Phil runs the special risk of inadvertently pressuring grantees to interact with someone we hire, even if they have misgivings. This makes us want to be more cautious than if we were hiring someone who would work only with us on sensitive projects.

Potential fit for GCR-focused people

In brief, security experts may be able to address the concerns listed above by:

  • Developing threat models to identify, e.g., probable attackers and their capabilities, potential attack vectors, and which assets are most vulnerable/desirable and in need of protection. (A toy sketch of such a model appears after this list.)

  • Evaluating and prioritizing systems, policies, and practices to defend against potential threats.

  • Assessing feasible levels of risk reduction to inform choices about lines of research to pursue for a given level of acceptable risk.

  • Implementing, maintaining, and auditing those systems, policies, and practices.
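
To make the first two bullets a bit more concrete, here is a deliberately toy sketch of what a first-pass threat model and prioritization might look like. The attackers, assets, vectors, and scores below are hypothetical placeholders for illustration, not recommendations or a description of any real organization’s threat model.

```python
from dataclasses import dataclass


@dataclass
class Threat:
    attacker: str    # who might attack, e.g. a well-resourced state actor
    asset: str       # what they would target
    vector: str      # how they might get in
    likelihood: int  # rough 1-5 judgment of how likely the attack is
    impact: int      # rough 1-5 judgment of how bad success would be

    @property
    def risk_score(self) -> int:
        """Simple likelihood x impact score used to rank mitigation work."""
        return self.likelihood * self.impact


threats = [
    Threat("state actor", "unpublished AI model weights", "spearphishing of staff", 4, 5),
    Threat("insider", "sensitive biorisk analyses", "over-broad document access", 2, 4),
    Threat("criminal group", "staff credentials", "credential stuffing", 3, 2),
]

# Address the highest-risk threats first.
for t in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:2d}  {t.attacker} -> {t.asset} via {t.vector}")
```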

Additionally, we think GCR-focused people who enter the field for the purpose of direct work might be especially helpful, compared to potential hires with similar levels of experience and innate talent, but without preexisting interest in GCR reduction. For example:

  • For both AI and bio, they might focus relatively more on strategies for resisting state actors.

  • On AI, they might focus relatively more on issues of special relevance to TAI / AGI alignment and strategy.

  • On biorisks, they might focus relatively more on working with academics and think tanks.

  • They might be more familiar with, and skilled at deploying, epistemic tools like making predictions, calibration training, explicit cost-effectiveness analyses, adjustments for the unilateralist’s curse, and scope-sensitive approaches to risk reduction. These tools might be useful on the object level as well as for interacting with some other staff at the relevant organizations. (A toy example of explicit expected-value reasoning appears after this list.)
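
As a toy illustration of the kind of explicit cost-effectiveness reasoning mentioned in the last bullet, the sketch below compares two hypothetical uses of a year of security work. Every number is an arbitrary placeholder chosen only to show the structure of the reasoning, not a real estimate.

```python
def expected_net_value(p_success, value_if_success, cost):
    """Expected net value of an intervention, in arbitrary units."""
    return p_success * value_if_success - cost


# Two hypothetical ways to spend a year of security work (all units and
# probabilities are made up; only the explicit structure matters here).
harden_one_lab = expected_net_value(p_success=0.3, value_if_success=10.0, cost=1.0)
general_research = expected_net_value(p_success=0.1, value_if_success=40.0, cost=1.0)

print(f"Harden one lab: {harden_one_lab:.1f}")
print(f"General security research: {general_research:.1f}")
```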

We expect security work on GCR reduction to be more attractive to GCR-focused people with security expertise than it would be to otherwise-similar security experts, and the downsides to weigh less heavily. We also expect the “seller’s market” dynamic for security professionals to be advantageous for people who are influenced by this post to pursue this path effectively; even if they don’t find a role doing direct work on GCR reduction, they could find themselves in a lucrative profession doing intellectually engaging work.

We’re unsure how many roles requiring significant security expertise and experience will eventually be available in the GCR reduction space, but we think:

  • There’s probably currently demand for ~3-15 such people (mostly in AI-related roles),

  • It’s more likely than not that in 10 years, there will be demand for >25 security experts in GCR-reduction-focused roles, and

  • It’s at least “plausible” that in 10 years there will be demand for >75 security experts in GCR-reduction-focused roles, if TAI/AGI projects grow and cyberattacks against them intensify sharply and increase in sophistication.

Tentative takeaways

We think it’s worth further exploring security as a potential career path for GCR-focused people, and if that investigation bears out the basic reasoning above, we hope people who think they might be a fit for this work will seriously consider moving into the space. That said, we expect the training to be very challenging, and we’re unsure what it would involve or how many people would succeed (of those who try), so given our uncertainties we’re especially wary of making strong recommendations. We’ve discussed this reasoning with staff at 80,000 Hours, who are currently considering research into how best to enter this career path.

These roles seem most promising to consider for someone who already has a technical background, could train in information security relatively quickly, and might be interested in working in the field even if they don’t end up working directly in GCR reduction. Additional desiderata include a security mindset, discretion, and comfort doing confidential work for extended periods of time.

Our current best guess is that people who are interested should consider seeking security training on a top team in industry, for example by working on security at Google or another major tech company, or perhaps in a relevant role in government (such as at the NSA or GCHQ). Some large security companies and government entities offer graduate training for people with a technical background. However, note that people we’ve discussed this with have had differing views on this topic.

However, please bear in mind that we haven’t done much investigation into the details of how best to pursue this path. If you’re considering making a switch, we’d suggest doing your own research into how best to do it and your likely degree of fit. We’d also only suggest making the switch if you’d be comfortable with the risk of not landing a job directly relevant to GCR reduction within the next couple of years.

If you’re interested in pursuing this career path, or already have experience in information security, you can fill out this form (managed by 80,000 Hours, and accessible to some staff at 80,000 Hours and Open Philanthropy), and 80,000 Hours may be able to provide additional advice or introductions at some point in the future.

Acknowledgments

Many thanks to staff at 80,000 Hours, CSET, FHI, MIRI, OpenAI, and Open Phil, as well as Ethan Alley, James Eaton-Lee, Jeffrey Ladish, Kevin Esvelt, and Paul Crowley, for their feedback on this post.


  1. For example, even if an AI project has enough of a lead over its competitors to not be worried about being “scooped” (over some time frame, with respect to some set of capabilities), its leadership will probably be more willing to invest in extensive safety and validation checks if they are also confident the technology won’t be stolen while those checks are conducted. ↩︎

  2. See e.g. AI Risk and the Security Mindset, Security and AI alignment, AI safety mindset, and two dialogues by Eliezer Yudkowsky. ↩︎

  3. This paragraph is especially inspired by some thinking on this topic by Miles Brundage. ↩︎

  4. We’re here referring to deskwork, as opposed to bench research on biological agents, which seems to us to be substantially more risky overall and requires a different set of expertise (expertise in biosafety) to do safely, in addition to information security expertise. ↩︎

  5. Information hazards aren’t a big concern for natural biorisks, but our work so far suggests that anthropogenic outbreaks, especially those generated by state actors, constitute much of the risk of a globally catastrophic biological event. ↩︎

  6. See e.g. the Arms Control Association’s Chemical and Biological Weapons Status at a Glance and the September 18, 2018 Press Briefing on the National Biodefense Strategy (ctrl+f “convention” to find the relevant comments quickly) for public comments on this claim. But we think our assertion here is not controversial in the national security community working on biosecurity, and conversations with people in that community were also important for persuading us that state BW programs are probably ongoing. ↩︎