The problem might come down to security being nuanced, complex, hard to measure, and needing to be tied to the mission to be effective, so it often requires a lot of judgement. In my experience it’s easy for contractors to apply the same cookie-cutter security they’ve always applied, and miss the point.
Two real examples that may be illustrative:
A company with altruistic goals wanted to reduce the risk of a compromise that could prevent it from achieving its mission, so it hired contractors to support cybersecurity. The contractors recommended working on security policy and set to work on it. One benefit of policy is demonstrating security compliance, so that other businesses are comfortable buying your services. The policies were designed along these lines, even though sales wasn’t the true motivation for security, and the result was out of touch with the organisation’s culture. For example, staff were told that they had to follow the policy for the good of the company, including “don’t be the reason [company] loses a sale”.
The company’s motivation and culture were explained clearly to the contractors. But it’s unusual for an organisation to care about its mission more than money, and common for companies to pretend to, so I can understand why the contractors had a hard time understanding.
Another example of the disconnect is that many companies and security professionals explicitly do not attempt to defend against nation-state attacks, and ignore external harms. I talked to a successful cybersecurity professional (a CISO at a large tech company) about the security difficulties faced by AI startups and the damage that a leak of powerful technologies could do to the world. One of their recommendations was for AI labs to get cyber insurance so they would be financially protected against a compromise. I argued that this doesn’t protect against a foreign state brainwashing its citizens with a large language model, and they agreed, but their initial reaction was that the AI lab can’t get sued over that anyway. In fairness, I don’t believe they were callous, just not used to thinking about risks beyond company success and survival.
Different contractors may be better, and there may be some out there who ‘get’ it, but finding them is an added difficulty when it’s already hard to find and vet information security expertise.
I think it can work to hire contractors for specific technical tasks that require a high amount of expertise and not as much mission judgement, e.g. deploying a security product.
I don’t believe the issue is limited to information security; I remember Tara discussing the difficulty of outsourcing financial accounting.
Thanks! This reply is very helpful.
If the bottleneck is essentially about people with relevant expertise not ‘getting it’, then I tentatively suspect that the ideal model for this path for relevant orgs would look like a consultancy: e.g. advising on how to manage contractors and helping to onboard them, rather than trying to ~do the work itself.
If that’s right, then it suggests that we need relatively few people actually developing this skillset.
(Similarly to how mental health is instrumentally very important for doing good, and it’s great that there are people thinking specifically about mental health in the context of maximising positive impact, but I still wouldn’t recommend ‘psychiatrist/counsellor’ for (m)any people who hadn’t already built up a bunch of relevant expertise.)
I think I’d start by solving the problem for 1-2 EA orgs, in the spirit of “do things that don’t scale”, and once that works (which will probably be hard in several unexpected ways), I’d try to scale to a consultancy that helps 10 orgs at once.
This is based only on my unverified guess about what product the orgs would say “hell yes” to, and my intuition (unverified in this situation) that trying to solve the problem in a scalable way before solving it for 1-2 “individuals” usually doesn’t work.
(I can elaborate on my intuitions, but if someone reads this and disagrees, I encourage you to ignore what I wrote.)
Regardless of building a solution (a consultancy?) that orgs will say yes to, I also think there’s something healthy about having a single person in the org (the head of security?) who is personally responsible for security going well, with the “power” to make decisions and the information and knowledge to either make them or vet other people’s opinions. This often isn’t the situation with consultancies, which are not in fact responsible in the way I mean.
I can also imagine a trusted consultancy that very specifically helps orgs hire competent people to be “head of security”.
[rough thoughts, not my expertise]