Finally get acceptable information security by throwing money at the problem.
Spend $100M/year to hire, say, 10 world-class security experts and get them everything they need to build the right infrastructure for us, and for e.g. Anthropic.
Strong second—we should build up secure open computing from bare metal (secure, open, verifiable CPUs, memory, etc.) to the OS, to compilers, to a secure applications layer.
Is this something we could purchase for a few hundred million in a few years?
I discussed this with a couple of people about two years ago, and thought it was likely that a company like Google could design and produce a full-stack secure system as a moderately large internal project. And some groups are already doing parts of this (for example, a provably secure OS microkernel) for far less than what we’d be able to spend.
As a Fermi estimate on the high end: if we hire 10 top hardware design people at $500k/year each, throw in the same number of OS designers and of compiler designers at the same cost, and a team of 50 great people to do the rest of the development and testing at $300k/year, that comes to about $30m/year, so $100m gives us roughly 3 years to do this—and it’s an open-source project, so we’d get universities, etc. working on this as well. (I.e., we could not mass-produce the hardware at these prices, but that’s commercialization, not design, and it should be funded by sales.)
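As a sanity check on that arithmetic, here is a minimal sketch (the headcounts and salaries are the hypothetical figures from this comment, not a researched cost model):

```python
# Back-of-the-envelope check of the staffing estimate above.
# All figures are the hypothetical ones from the comment.
senior_designers = 10 * 3   # hardware, OS, and compiler design teams
senior_salary = 500_000     # $/year each
dev_team = 50               # remaining development and testing staff
dev_salary = 300_000        # $/year each

annual_cost = senior_designers * senior_salary + dev_team * dev_salary
print(f"Annual payroll: ${annual_cost:,}")                        # $30,000,000
print(f"Runway on $100m: {100_000_000 / annual_cost:.1f} years")  # ~3.3 years
```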
(not an expert) My impression is that a perfectly secure OS doesn’t buy you much if you run insecure applications on an insecure network, etc.
Also, if you think about classified work, the productivity tradeoff is massive: you can’t use your personal computer while working on the project, you can’t use any of your favorite software while working on the project, you can’t use an internet-connected computer while working on the project, you can’t have your cell phone in your pocket while talking about the project, you can’t talk to people about the project over normal phone lines and emails… And then of course viruses get into air-gapped classified networks within hours anyway. :-P
Not that we can’t or shouldn’t buy better security; I’m just slightly skeptical of focusing specifically on building a new low-level foundation rather than doing all the normal stuff really well: network traffic monitoring, vetting applications and workflows, anti-spearphishing training, etc. Well, I guess you’ll say, “we should do both”. Sure. I guess I just assume that the other things would rapidly become the weakest link.
In terms of low-level security, my old company has a big line of business designing security into the chips themselves; they spun out Dover Microsystems to sell that particular technology to commercial (as opposed to military) customers. Just FYI; that’s one thing I happen to be familiar with, though I guess it’s not that relevant.
Agreed that a secure low level without application security doesn’t get you there, which is why I said we need a full stack—and even if it weren’t part of this, redeveloping network infrastructure to be done well and securely seems like a very useful investment.
But doing all the normal stuff well on top of systems that still have insecure chips, BIOS, and kernel just means that the exploits move to lower levels—even if there are fewer of them, the difference between 90% secure and 100% secure is far more important than the move from 50% to 90%. So we need the full stack.
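To see why the last few percent matter so much, consider a toy model (purely illustrative, and an assumption of mine rather than anything from this thread: each of n independent attack attempts fails with probability s, the “percent secure” figure):

```python
# Toy model, not a real threat model: a system that is "s secure" survives
# each independent attack attempt with probability s, so it survives n
# attempts with probability s**n.
def p_compromised(s: float, attempts: int) -> float:
    return 1 - s ** attempts

for s in (0.50, 0.90, 0.99):
    print(f"s = {s:.2f}: P(compromised within 100 attempts) = "
          f"{p_compromised(s, 100):.3f}")
```

Under this crude model, 50% secure and 90% secure both mean near-certain compromise against a persistent attacker; even 99% still leaves roughly a 63% chance of compromise over 100 attempts. Only getting very close to 100% changes the picture.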
I see enormous value in it and think it should be considered seriously.
On the other hand, the huge amount of value in it is also a reason I’m skeptical that it’s obviously achievable: there are already individual giant firms that would realize multi-million-dollar annual savings internally (not to mention the many billions the first firm to market something like that would immediately earn) from having a convenient, simple, secure stack ‘for everything’, yet none seems to have anything close to it (though I guess many may have something like it in some sub-systems/niches).
So I’m just wondering whether we might be underestimating the cost of development/use—despite my gut feeling strongly agreeing that it seems like such a tractable problem.
I think the budget to do this is easily tens of millions a year for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms willing to invest $25m/year for 4–5 years in a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not—especially if they worry (plausibly) that governments will actively attempt to interfere with parts of it.
Agreed on the “easily tens of millions a year”, which, however, also underlines part of what I meant: it is really tricky to know how much we can expect from exactly what effort.
I half agree with all your points, but I see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let’s consider the idea seriously, but let’s also not forget that we’re obviously not the first ones to think of this. In addition to all the other uncertainties, keep in mind that no one seems to have made much progress in this domain, despite the possibly enormous value even private firms could have captured if they had.
Epistemic status: Confused person with zero expertise in this area
Who is “us” in this scenario? I assume it’s meant to be “organizations with access to infohazardous bio/AI data”?
If so, what makes you think of the current infosec of these orgs as “unacceptable”? If you think they’d disagree with this characterization, do you have a sense for why?
If not, what do you see as some consequences of weak infosec that could plausibly total $100m in damages for EA orgs if they came to pass, given that EA is a network of many organizations, each with pretty limited funding and access to other valuable data?
(Even if something happened along the lines of “GiveWell leaks every donor’s credit card number”, I wonder what the actual damage would look like, given how often this sort of thing seems to happen to large organizations that don’t go bankrupt as a result. And it’s hard to imagine that most charities on GiveWell’s scale would actually go positive-EV by investing millions of dollars in infosec.)
This is my impression based on: (a) talking to a bunch of people and hearing things like “Yeah, our security is unacceptably weak”, “I don’t think we are in danger yet; we probably aren’t on anyone’s radar”, and “Yeah, we are taking it very seriously; we are looking to hire someone. It’s just really hard to find a good security person.” These are basically the ONLY three things I hear when I raise security concerns, and they are collectively NOT reassuring. I haven’t talked to every org and every person, so maybe my experience is misleading. Also, (b) on priors, it seems that people in general don’t take security seriously until there’s actually a breach. And (c) I’ve talked to some people who are also worried about this, and they told me there basically isn’t any professional security person in the EA community willing to work full-time on this.
I will go further than that. Everyone I know in infosec, including those who work for either the US or the Israeli government, seems to strongly agree with the following claim:
“No amount of feasible security spending will protect your network against a determined attempt by an advanced national government (at the very least, the US, Russia, China, and Israel) to get access. If you need that level of infosec, you can’t put anything on a computer.”
If AI safety is a critical enabler of national security, and/or the security of AI systems is important for their alignment, that means we’re in deep trouble.
Makes sense. Just to clarify — the phrasing here makes me think these are organizations with potentially dangerous technical knowledge, rather than e.g. CEA. Is that right?
Yes.
https://evervault.com/ are launching in October and are generally working on problems in this space.