Information security careers for GCR reduction
Update 2019-12-14: There is now a Facebook group for discussion of infosec careers in EA (including for GCR reduction); join here
This post was written by Claire Zabel and Luke Muehlhauser, based on their experiences as Open Philanthropy Project staff members working on global catastrophic risk reduction, though this post isn’t intended to represent an official position of Open Phil.
Summary
In this post, we summarize why we think information security (preventing unauthorized users, such as hackers, from accessing or altering information) may be an impactful career path for some people who are focused on reducing global catastrophic risks (GCRs).
If you’d like to hear about job opportunities in information security and global catastrophic risk, you can fill out this form created by 80,000 Hours, and their staff will get in touch with you if something might be a good fit.
In brief, we think:
Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity.
More generally, security expertise may be useful for those attempting to reduce GCRs, because such work sometimes involves engaging with information that could do harm if misused.
We have thus far found it difficult to hire security professionals who aren’t motivated by GCR reduction to work with us and some of our GCR-focused grantees, due to the high demand for security experts and the unconventional nature of our situation and that of some of our grantees.
More broadly, we expect there to continue to be a deficit of GCR-focused security expertise in AI and biosecurity, and that this deficit will result in several GCR-specific challenges and concerns being under-addressed by default.
It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they found them).
It’s plausible that some people focused on high-impact careers (as many effective altruists are) would be well-suited to helping meet this need by gaining infosec expertise and experience and then moving into work at the relevant organizations.
If people who try this don’t get a direct work job but gain the relevant skills, they could still end up in a highly lucrative career in which their skillset would be in high demand.
We explain below.
Risks from Advanced AI
As AI capabilities improve, leading AI projects will likely be targets of increasingly sophisticated and well-resourced cyberattacks (by states and other actors) which seek to steal AI-related intellectual property. If these attacks are not mitigated by teams of highly skilled and experienced security professionals, then such attacks seem likely to (1) increase the odds that TAI / AGI is first deployed by malicious or incautious actors (who acquired world-leading AI technology by theft), and also seem likely to (2) exacerbate and destabilize potential AI technology races which could lead to dangerously hasty deployment of TAI / AGI, leaving insufficient time for alignment research, robustness checks, etc.[1]
As far as we know, this is a common view among those who have studied questions of TAI / AGI alignment and strategy for several years, though there remains much disagreement about the details, and about the relative magnitudes of different risks.
Given this, we think a member of such a security team could do a lot of good, if they are better than their replacement and/or they understand the full nature of the AI safety and security challenge better than their replacement (e.g. because they have spent many years thinking about AI from a GCR-reduction angle). Furthermore, being a member of such a team may be a good opportunity to have a more general positive influence on a leading AI project, for example by providing additional demand and capacity for addressing accident risks in addition to misuse risks.
Somewhat separately, there may be substantial use for security expertise in a research context (rather than implementation context). For example:
Some researchers think that security expertise and/or a “security mindset” of the sort often possessed by security professionals (perhaps in part as a result of professional training and experience) is important for AI alignment research in a fairly general sense.[2]
Some researchers think that one of the most plausible pre-AGI paths by which AI might have “transformative”-scale impact is via the automation of cyber offense and cyber defense (and perhaps one more than the other), and GCR-focused researchers with security expertise could be especially useful for investigating this possibility and related strategic questions.
Safe and beneficial development and deployment of TAI / AGI may require significant trust and cooperation between multiple AI projects and states. Some researchers think that such cooperative arrangements may benefit from (potentially novel) cryptographic solutions for demonstrating to others (and verifying for oneself) important properties of leading AI projects (e.g. how compute is being used). Potentially relevant techniques include zero knowledge proofs, secure multi-party computation, differential privacy methods, or smart contracts.[3] (E.g. see the explorations in Martic et al. 2018.)
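To give a flavor of the simplest end of that toolbox, here is a hedged sketch: one basic building block is a cryptographic commitment, where a project publishes a hash of some record (say, a compute-usage summary) now and reveals the record to a verifier later, demonstrating that the claim wasn’t altered after the fact. The Python toy below illustrates the idea only; the function names and the usage_log structure are hypothetical, and the real proposals referenced above would involve far more sophisticated machinery (e.g. zero knowledge proofs or secure multi-party computation).

```python
import hashlib
import json
import secrets

def commit(record):
    """Commit to a record: return (commitment, nonce).

    The commitment binds the committer to the record's exact contents,
    while the random nonce keeps the record hidden until it is revealed.
    """
    nonce = secrets.token_bytes(32)
    payload = nonce + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest(), nonce

def verify(commitment, record, nonce):
    """Check that a revealed record matches the earlier commitment."""
    payload = nonce + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == commitment

# Hypothetical example: an AI project commits to a compute-usage summary
# now, and reveals it to an external verifier at an agreed later date.
usage_log = {"quarter": "2019-Q2", "training_petaflop_days": 1200}
c, n = commit(usage_log)         # published immediately
assert verify(c, usage_log, n)   # checked by the verifier after reveal
```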
Biosecurity and biorisk
Efforts to reduce biorisks may involve working with information on particular potential risks and strategies for reducing them. In general, information[4] generated for the purpose of predicting the actions of or thwarting a bad actor may be of interest to that actor. This information could cause harm if potential bioterrorists or states aiming to advance or initiate bioweapons programs obtain it. Concerns about these kinds of information hazards hamper our and our grantees’ ability to study important aspects of biorisk.[5]
For example, someone studying countermeasure research and development for different types of pathogens might uncover and take note of vulnerabilities in existing systems for the purposes of patching those vulnerabilities, but could inadvertently inform a bad actor about weaknesses in the current system.
Our impression is that many people in the national security community who focus on biosecurity believe that some state bioweapon programs are currently operating,[6] and we worry that these programs may expand as advances in synthetic biology facilitate the development of more sophisticated and/or inexpensive bioweapons (making these programs more appealing from the perspective of a state). We also think state actors are the ones most likely to execute sophisticated cyberattacks.
Because of the above, we expect security work in this space to be very important but potentially very challenging.
Our experience
Open Phil began a preliminary search for a full-time information security expert to help our grantees with the above issues in February 2018. We hoped to find someone who could work on assessing the feasibility of different security measures and their plausible effect size as deterrents, assisting grantees in implementing security measures, and helping build up the field of infosec experts trying to reduce GCRs. So far, our search has been unsuccessful.
Why do we think our preliminary search has been challenging, and why do we expect this to continue and to apply to our grantees as well?
We’ve consistently heard, from relatively senior security professionals and candidates for our role, that it’s a “seller’s market”, and thus generally challenging and expensive (in funds and time) to attract top talent.
Specifically, our impression is that talented security experts often have many attractive job options to choose from, often involving managing large teams to handle security needs of very large-scale, intellectually engaging projects, and pay in the range of six to seven figures.
Our situation and needs (and those of some of our grantees) are unconventional, and likely won’t confer as much prestige or career capital in the field, compared to other options we’d expect a talented potential hire to have (e.g. taking a job at a large tech company).
Our needs are also varied, and may not cleanly map to a well-recognized job profile (e.g. Security Analyst or Chief Information Security Officer), making the option less attractive to risk-averse candidates.
Our context in the field is limited, which makes attracting and evaluating candidates more challenging for us. (An additional benefit of more GCR-focused people entering the space is that we’d likely end up with trusted advisors who understand our situation and constraints, and can help us assess the talent and fit of others.)
We’re particularly cautious about hiring someone we think is likely to end up with access to sensitive information and knowledge of the vulnerabilities of relevant systems.
And, as a funder, Open Phil runs the special risk of inadvertently pressuring grantees to interact with someone we hire, even if they have misgivings. This makes us want to be more cautious than if we were hiring someone only we would work with on sensitive projects.
Potential fit for GCR-focused people
In brief, security experts may be able to address the concerns listed above by:
Developing threat models to identify, e.g., probable attackers and their capabilities, potential attack vectors, and which assets are most vulnerable/desirable and in need of protection (a toy sketch of what such a model might capture appears after this list).
Evaluating and prioritizing systems, policies, and practices to defend against potential threats.
Assessing feasible levels of risk reduction to inform choices about lines of research to pursue for a given level of acceptable risk.
Implementing, maintaining, and auditing those systems, policies, and practices.
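To make the first item above concrete, the sketch below is a minimal, purely illustrative representation of a single threat-model entry. It is not from the post or from any particular organization’s practice; all field names and example values are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ThreatModelEntry:
    """One line of a simple threat model: who might attack what, and how."""
    attacker: str               # e.g. a well-resourced state actor
    capabilities: List[str]     # what the attacker can plausibly do
    attack_vectors: List[str]   # how they might get in
    assets_at_risk: List[str]   # what they would want to steal or alter
    priority: int = 3           # 1 = address first, 5 = address last

# Hypothetical example entry for an AI lab worried about IP theft
example = ThreatModelEntry(
    attacker="state-sponsored APT",
    capabilities=["spear phishing", "zero-day exploits", "insider recruitment"],
    attack_vectors=["employee laptops", "cloud credentials", "build pipeline"],
    assets_at_risk=["model weights", "training code", "research roadmap"],
    priority=1,
)
```

In practice, a team would attach estimated likelihoods and candidate mitigations to entries like this and use them to drive the prioritization, implementation, and auditing work described in the other items.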
Additionally, we think GCR-focused people who enter the field for the purpose of direct work might be especially helpful, compared to potential hires with similar levels of experience and innate talent, but without preexisting interest in GCR reduction. For example:
For both AI and bio, they might focus relatively more on strategies for resisting state actors.
On AI, they might focus relatively more on issues of special relevance to TAI / AGI alignment and strategy.
On biorisks, they might focus relatively more on working with academics and think tanks.
They might be more familiar with and skilled at deploying epistemic tools like making predictions, calibration training, explicit cost-effectiveness analyses, adjustments for the unilateralist’s curse, and scope-sensitive approaches to risk reduction, which might be useful on the object level as well as for interacting with some other staff at the relevant organizations.
We expect security work on GCR reduction to be more attractive to GCR-focused people with security expertise than it would be to otherwise-similar security experts, and the downsides to weigh less heavily. We also expect the “seller’s market” dynamic for security professionals to be advantageous for people who are influenced by this post to pursue this path effectively; even if they don’t find a role doing direct work on GCR reduction, they could find themselves in a lucrative profession doing intellectually engaging work.
We’re unsure how many roles requiring significant security expertise and experience will eventually be available in the GCR reduction space, but we think:
There’s probably currently demand for ~3-15 such people (mostly in AI-related roles),
It’s more likely than not that in 10 years, there will be demand for >25 security experts in GCR-reduction-focused roles, and
It’s at least “plausible” that in 10 years there will be demand for >75 security experts in GCR-reduction-focused roles, if TAI/AGI projects grow and cyberattacks against them intensify sharply and increase in sophistication.
Tentative takeaways
We think it’s worth further exploring security as a potential career path for GCR-focused people, and if that investigation bears out the basic reasoning above, we hope people who think they might be a fit for this work seriously consider moving into the space. That said, we expect the training to be very challenging, and we’re unsure what it would involve or how many people would succeed (of those who try), so given our uncertainties we’re especially wary of making strong recommendations. We’ve discussed this reasoning with staff at 80,000 Hours, who are currently considering research into entering this career path.
These roles seem most promising to consider for someone who already has a technical background, could train in information security relatively quickly, and might be interested in working in the field even if they don’t end up working directly in GCR reduction. Additional desiderata include a security mindset, discretion, and comfort doing confidential work for extended periods of time.
Our current best guess is that people who are interested should consider seeking security training in a top team in industry, such as by working on security at Google or another major tech company, or maybe in relevant roles in government (such as in the NSA or GCHQ). Some large security companies and government entities offer graduate training for people with a technical background. However, note that people we’ve discussed this with have had differing views on this topic.
However, please bear in mind that we haven’t done much investigation into the details of how best to pursue this path. If you’re considering making a switch, we’d suggest doing your own research into how best to do it and your likely degree of fit. We’d also only suggest making the switch if you’d be comfortable with the risk of not landing a job directly relevant to GCR reduction within the next couple of years.
[edit: the form is no longer open] If you’re interested in pursuing this career path, or already have experience in information security, you can fill out this form (managed by 80,000 Hours, and accessible to some staff at 80,000 Hours and Open Philanthropy), and 80,000 Hours may be able to provide additional advice or introductions at some point in the future.
Acknowledgments
Many thanks to staff at 80,000 Hours, CSET, FHI, MIRI, OpenAI, and Open Phil, as well as Ethan Alley, James Eaton-Lee, Jeffrey Ladish, Kevin Esvelt, and Paul Crowley, for their feedback on this post.
[1] For example, even if an AI project has enough of a lead over its competitors to not be worried about being “scooped” (over some time frame, with respect to some set of capabilities), its leadership will probably be more willing to invest in extensive safety and validation checks if they are also confident the technology won’t be stolen while those checks are conducted.
[2] See e.g. AI Risk and the Security Mindset, Security and AI alignment, AI safety mindset, and two dialogues by Eliezer Yudkowsky.
[3] This paragraph is especially inspired by some thinking on this topic by Miles Brundage.
[4] We’re here referring to deskwork, as opposed to bench research on biological agents, which seems to us to be substantially more risky overall and requires a different set of expertise (expertise in biosafety) to do safely, in addition to information security expertise.
[5] Information hazards aren’t a big concern for natural biorisks, but our work so far suggests that anthropogenic outbreaks, especially those generated by state actors, constitute much of the risk of a globally catastrophic biological event.
[6] See e.g. the Arms Control Association’s Chemical and Biological Weapons Status at a Glance and the September 18, 2018 Press Briefing on the National Biodefense Strategy (ctrl+f “convention” to find the relevant comments quickly) for public comments on this claim. But we think our assertion here is not controversial in the national security community working on biosecurity, and conversations with people in that community were also important for persuading us that state BW programs are probably ongoing.
This post caused me to apply to a six-month internal rotation program at Google as a security engineer. I start next Tuesday.
Awesome, thanks for letting us know!
Hey Taymon, I’m curious about how that career transition went :) Where did you end up, if you don’t mind sharing?
This is a big area of uncertainty for me. I agree that Google & other top companies would be quite valuable, but I’m much less convinced that government work will be as good. At high levels of the NSA, CIA, military intelligence, etc., I expect it to be, but for someone getting early experience, it’s less obvious. Government positions are probably going to be less flexible / more constrained in the types of problems to work on, and to have lower-quality mentorship opportunities at the lower levels. Startups can be good if the startup values security (Reserve was great for me because I got to actually be in charge of security for the whole company & learn how to get people to use good practices), but most startups do not value security, so I wouldn’t recommend working for a startup unless it showed strong signs of valuing security.
My guess is that the important factors are roughly:
Good technical mentorship—While I expect this to be better than average at the big tech companies, it isn’t guaranteed.
Experience responding to real threats (i.e., a company that has enough attack surface and active threats to get a good sense of what real attacks look like)
Red team experience, as there is no substitute for actually learning how to attack a system
Working with non-security & non-technical people to implement security controls. I think most of the opportunities described in this post will require this kind of experience. Some technical security roles in big companies do not require this, since there is enough specialization that vulnerability remediation can happen via other companies.
I think working at a top security company could be a way to gain a lot of otherwise hard-to-get experience. Trail of Bits, NCC Group, and FireEye are a few that come to mind.
This all sounds right to me, though I think some people have different views, and I’m hardly an expert. Speaking for myself at least, the things you point to are roughly why I wanted the “maybe” in front of “relevant roles in government.” Though one added benefit of doing security in government is that, at least if you get a strong security clearance, you might learn helpful classified things about e.g. repelling state-originating APTs.
An additional point is that “relevant roles in government” should probably include contracting work as well. So it’s possible to go work for Raytheon, get a security clearance, and do cybersecurity work for the government (and that pays significantly better!).
Thanks Claire and Luke for writing this!
I have hired security consultants a couple of times, and found that it was challenging, but within the normal limits of how challenging hiring always is. If you want someone to tell you the best practices for encrypting AWS servers, or even how to protect some unusual configuration of AWS services, my guess is that you can probably find someone (although maybe you will be paying them $200+/hour).
My assumption is that the challenge you are pointing to is more about finding people who can e.g. come up with novel cryptographic methods or translate game theoretic international relations results into security protocols, which seems different from (and substantially harder than) the work that most “information security” people do.
Is that accurate? The way you described this as a “seller’s market” etc. makes me unsure if you think it’s challenging to find even “normal”/junior info sec staff.
The key roles we have in mind are a bit closer to what is sometimes called “security officer,” i.e. someone who can think through (novel, GCR-focused) threat models, plausibly involving targeted state-based attacks, develop partly-custom system and software solutions that are a match to those threat models, think through and gather user feedback about tradeoffs between convenience and security of those solutions, develop and perhaps deliver appropriate training for those users, etc. Some of this might include things like “protect some unusual configuration of AWS services,” but I imagine that might also be something that the security officer is able to outsource. We’ve tried working with a few security consultants, and it hasn’t met our needs so far.
Projects like “develop novel cryptographic methods” might also be useful in some cases — see my bullet points on research (rather than implementation) applications of security expertise in the context of AI — but they aren’t the modal use-case we’re thinking of.
But also, we haven’t studied this potential career path to the level of depth that (e.g.) 80,000 Hours typically does when developing a career profile, so we have more uncertainty about many of the details here even than is typically represented in an 80,000 Hours career profile.
Great post. Thought I might share a few related books I’ve found interesting (in rough order of usefulness, according to my memory). I’m looking for more, so please share!
1. The Perfect Weapon by Sanger. An account of the history of cyberweapons and cyber espionage by one of the NYT reporters who broke a number of stories on covert cyber programs like Stuxnet. I found it relatively “spin-free” compared to other content in the area, and probably at least 1.5x as useful as the next most useful book here.
2. The Sword and the Shield by Andrew and Mitrokhin. A detailed history of the KGB based on one of the largest intelligence leaks in history (Mitrokhin worked in the KGB archives for years). A lot of the details were in the weeds, it is very USSR-centric, and it doesn’t reach the modern day, but I found it useful for getting a sense of how information security worked in the pre-digital age.
3. Legacy of Ashes by Weiner. A history of the CIA with a definite slant towards trying to demonstrate the agency is incompetent (I think examples may be somewhat cherry-picked to this end, based on 2) and other accounts). I still found it helpful for getting a general sense of how the intelligence community does info security.
4. Click Here to Kill Everybody by Schneier. Similar to 1) but with a narrative driven more by the author’s various theses on security and seemingly geared toward a more popular audience. Examples are fairly redundant with 1).
(A couple of caveats: I wish I knew of better examples than the above, and I read these rapidly and some time ago. I expect that re-reading them deeply would change my portrayal of them and/or my ranking.)
Just finished Spy Schools by Golden. I would rank it between 1) and 2). Describes the history of espionage in academic and research circles. Doesn’t emphasize cyber, but is much more up to date than 2)-4) and given how relevant academia is, I found the examples more interesting.
I’ve created a survey about barriers to entering information security careers for GCR reduction, with a focus on whether funding might be able to help make entering the space easier. If you’re considering this career path or know people that are, and especially if you foresee money being an obstacle, I’d appreciate you taking the survey/forwarding it to relevant people.
The survey is here: https://docs.google.com/forms/d/e/1FAIpQLScEwPFNCB5aFsv8ghIFFTbZS0X_JMnuquE3DItp8XjbkeE6HQ/viewform?usp=sf_link. Open Philanthropy and 80,000 Hours staff members will be able to see the results. I expect it to take around 5-25 minutes to take the survey, depending on how many answers are skipped.
I’ll leave the survey open until EOD March 2nd.
Happy to see this post. Definitely feels like security issues have received insufficient attention.
Agree. Great work everyone who contributed.
I’ve seen some people advise against this career path, and I remember a comment by Luke elsewhere that he’s aware of some people having that view. Given this, I’m curious if there are any specific arguments against pursuing a career in information security that you’ve come across?
(It’s not clear to me that there must be any; e.g., perhaps all such advice was based on opaque intuitions, or was given for reasons not specific to information security, such as “this other career seems even better”.)
IIRC the main concern in the earlier conversations was about how many high-impact roles of this type there might really be in the next couple decades. Probably the number is smaller than (e.g.) the number of similarly high-impact “AI policy” roles, but (as our post says) we think the number of high-impact roles of this type will be substantial. And given how few GCR-focused people there are in general, and how few of them are likely a personal fit for this kind of career path anyway, it might well be that even if many of the people who are a good fit for this path pursue it, that would still not be enough to meet expected need in the next couple decades.
Could you elaborate on why you “expect the training [for becoming an information security professional] to be very challenging”?
Based on the OP, I could see the answer being any combination of the following, and I’m curious if you have more specific views.
a) The training and work is technically challenging.
b) The training and work has idiosyncratic aspects that may be psychologically challenging, e.g. the requirement to handle confidential information over extended periods of time.
c) The training and work requires an unusually broad combination of talents, e.g. both technical aptitude and the ability to learn to manage large teams.
d) You don’t know of any specific reasons why the training would be challenging, but infer that it must be for structural reasons such as few people pursuing that career despite lucrative pay.
I think we meant a bit of (b) and (c) but especially (a).
That’s helpful, thanks!
Do you have a sense of whether the required talent is relatively generic quantitative/technical talent that would e.g. predict success in fields like computer science, physics, or engineering, or something more specific? And also what the bar is?
Currently I’m not sure if what you’re saying is closer to “if you struggled with maths in high school, this career is probably not for you” or “you need to be at a +4 std level of ability in these specific things” (my guess is something closer to the former).
No worries if that was beyond the depth of your investigation.
Yeah, something closer to the former.
I find it rather unfortunate that a job-application-related form for security experts is not only hosted at Google, but also requires a Google account even to see.
The reason is that security expertise and awareness in many cases (some would say in all) involve caring about privacy. Because Google is a company whose business model revolves around taking your personal data and making money from it, I would expect that a lot of good security-minded people will stop before submitting their information to Google, reducing the chances of finding suitable candidates.
For those who, like me, won’t do that, I post the form questions below (edited due to broken copy-paste). I’ll leave where to submit them as an exercise for the reader.
London / Oxford / Cambridge
San Francisco Bay Area
Washington, D.C.
Boston
Other
Thanks for writing this.
What do you think is the main difference between the roles you’re describing and a Chief Information Security Officer role?
Are there any industry roles that anyone thinks would be particularly good or bad preparation?
I work at a large company and there are at least 10 different security-related teams, which from the outside seem to be doing fairly specialized work.
On the difference between the role we’ve tried to hire for at Open Phil specifically and a typical Security Analyst or Security Officer role, a few things come to mind, though we also think we don’t yet have a great sense of the range of security roles throughout the field. One possible difference is that many security roles focus on security systems for a single organization, whereas we’ve primarily looked for someone who could help both Open Phil and some of our grantees, each of whom have potentially quite different needs. Another possible difference is that our GCR focus in AI and biosecurity leads us to some non-standard threat models, and it has been difficult thus far for us to find experienced security experts who readily adapt standard security thinking to a somewhat different set of threat models.
Re: industry roles that would be particularly good or bad preparation. My guess is that for the GCR-mitigating roles we discuss above (i.e. not just potential future roles at Open Phil), the roles offering better preparation will tend to (a) expose one to many different types of challenges, and different aspects of those challenges, rather than being very narrowly scoped, (b) involve threat modeling of, and defense from, very capable and well-resourced attackers, and (c) require some development of novel solutions (not necessarily new crypto research; could also just be new configurations of interacting hardware/software systems and user behavior policies and training), among other things.
One potential area of biorisk + infosec work would be in improving the biotech industry’s ability to secure synthesis & lab automation technology from use in creating dangerous pathogens / organisms.
Such misuse could happen via circumventing existing controls (e.g. ordering a virus that is on a banned-sequence list) or by hijacking synthesis equipment itself, so protecting this type of infrastructure may be super important. I could see this being a more policy-oriented role, but one that would require infosec skills.
I expect this work to be valuable if someone possessed both the political acumen to convince the relevant policy-makers / companies that it was worthwhile and the technical / organizational skill to put solid controls in place. I don’t expect this kind of work to be done by default unless something bad happens [i.e. a company is hacked and a dangerous organism is produced]. So having someone driving preventative measures before any disaster happens could be valuable.
This is a really good post! I have some bold, unsubstantiated claims to make that I’m curious on people’s thoughts on. Source: I’ve done some small amount of security-related work / coursework, hung around a lot of infosec-ey type people in college, and tried to hire a security officer once.
I’ve noticed some hard-to-articulate but consistent-seeming differences in personality / mindset in people I know who work in security. I think it’s plausible that it’s much harder to become “good” at infosec through pursuing an infosec career path than to become “good” at machine learning by pursuing a machine learning career path. I think this may be especially true the broader you go; e.g., you might be able to become “good” at securing web browsers, but will have trouble transferring general infosec insights to broader problems like biosecurity.
As a result, I think it might be worth EA effort getting people who are already fairly far in the infosec field to be more concerned about GCRs. (Though I think getting people to try infosec careers is also worth it.)
Related to this, many people I know in infosec think EA concerns about GCRs are wrong for a variety of reasons, even though a lot of them have x-risk-style thoughts about how e.g. surveillance could lead to a totalitarian state with a lot of lock-in. I think this might be an interesting viewpoint difference to look into.
Is it easy to say more about (1) which personality/mindset traits might predict infosec fit, and (2) infosec experts’ objections to typical GCR concerns of EAs?
Here’s a compilation of “how to get started in infosec” guides.
I am not clicking on a link with the URL malicious.link/start :-P
Seems like malicious.link is perfectly legit.
lol yeah it’s an infosec guy’s blog. He’s trolling a bit with the domain name.
Okay, I clicked on it and it seems all fine to me. (To anyone still wary, I Ben Pace promise my impression clicking through is that it’s legit, and actually quite detailed.)
This post was awarded an EA Forum Prize; see the prize announcement for more details.
My notes on what I liked about the post, from the announcement:
Good points. Presumably infosec to prevent nuclear weapons from being hacked would also be valuable direct work?
I do know of a project here that is pretty promising, related to improving secure communication between nuclear weapons states. If you know people with significant expertise who might be interested, PM me.
Presumably, though I know very little about that and don’t know how much value would be added there by someone focused on worst case scenarios (over their replacement).