Epistemic status: Confused person with zero expertise in this area
Who is "us" in this scenario? I assume it's meant to be "organizations with access to infohazardous bio/AI data"?
If so, what makes you think of the current infosec of these orgs as "unacceptable"? If you think they'd disagree with this characterization, do you have a sense for why?
If not, what plausible consequences of weak infosec do you see that could total $100m in damages for EA orgs if they came to pass, given that EA is a network of many organizations, each with pretty limited funding and limited access to other valuable data?
(Even if something happened along the lines of "GiveWell leaks every donor's credit card number", I wonder what the actual damage would look like, given how often this sort of thing seems to happen to large organizations that don't go bankrupt as a result. And it's hard to imagine that most charities on GiveWell's scale would actually go positive-EV by investing millions of dollars in infosec.)
This is my impression based on (a) talking to a bunch of people and hearing things like "Yeah, our security is unacceptably weak", "I don't think we are in danger yet; we probably aren't on anyone's radar", and "Yeah, we are taking it very seriously; we are looking to hire someone. It's just really hard to find a good security person." These are basically the ONLY three things I hear when I raise security concerns, and they are collectively NOT reassuring. I haven't talked to every org and every person, so maybe my experience is misleading. Also, (b) on priors, it seems that people in general don't take security seriously until there's actually a breach. And (c) I've talked to some people who are also worried about this, and they told me there basically isn't any professional security person in the EA community willing to work full time on this.
I will go further than that. Everyone I know in infosec, including those who work for either the US or the Israeli government, seems to strongly agree with the following claim: "No amount of feasible security spending will protect your network against a determined attempt by an advanced national government (at the very least, the US, Russia, China, and Israel) to get access. If you need that level of infosec, you can't put anything on a computer."
If AI safety is a critical enabler for national security, and/or AI system security is important for their alignment, that means we're in deep trouble.
Makes sense. Just to clarify: the phrasing here makes me think these are organizations with potentially dangerous technical knowledge, rather than e.g. CEA. Is that right?
Yes.