Really happy to see this; it looks great.
This is outside the scope of this document, but I’m a bit curious how useful it would have been to have such a list 3-5 years ago, and why it took so long. Previously I heard something like, “biosecurity is filled with info-hazards, so we can’t have many people in it yet.”
Anyway, it makes a lot of sense to me that we have pretty safe intervention options after all, and I’m happy to see lists being created and acted upon.
The authors will have a more-informed answer, but my understanding is that part of the answer is “some ‘disentanglement’ work needed to be done w.r.t. biosecurity for x-risk reduction (as opposed to biosecurity for lower-stakes scenarios).”
I mention this so that I can bemoan the fact that I think we don’t have a similar list of large-scale, clearly-net-positive projects for the purpose of AI x-risk reduction, in part because (I think) the AI situation is more confusing and requires more and harder disentanglement work (some notes on this here and here). The Open Phil “worldview investigations” team (among others) is working on such disentanglement research for AI x-risk reduction. I would like to see more people tackle this strategic clarity bottleneck, ideally in close communication with folks who have experience with relatively deep, thorough investigations of this type (a la Bio Anchors and other Open Phil worldview investigation reports) and with folks who will use greater strategic clarity to take large actions.
I have only been involved in biosecurity for 1.5 years, but the focus on purely defensive projects (sterilization, refuges, some sequencing tech) feels relatively recent. It’s a lot less risky to openly talk about those than about technologies like antivirals or vaccines.
I’m happy to see this shift, as concrete lists like this will likely motivate more people to enter the space.
More than infohazards, the issue was that we were still building capacity and understanding of the area.
But many of these were highlighted in earlier work, including a decade of reports from the Center for Health Security, etc. (Not to mention my paper with Dave Denkenberger.)
Thanks for the kind words! I agree that we didn’t have much good stuff for people to do 4 years ago when I started in bio, but I don’t feel like my model matches yours regarding why.
But I also want to confirm I’ve understood what you are looking for before I ramble.
How much would you agree with this description of what I imagine might lie behind your question of ‘why it took so long’:
“well I looked at this list of projects, and it didn’t seem all that non-obvious to me, and so the default explanation of ‘it just took a long time to work out these projects’ doesn’t seem to answer the question”
(To be clear, I think this would be a very reasonable read of the piece, and I’m not interpreting your question as critical, though it’s obviously fine if it is!)
That sounds like much of it. To be clear, it’s not that the list is obvious, but more that it seems fairly obvious that a similar list was possible. It seemed pretty clear to me a few years ago that there must be some reasonable lists of non-info-hazard countermeasures that we could work on, for general-purpose bio safety. I didn’t have these particular measures in mind, but figured that roughly similar ones would be viable.
Another part of my view is: “Could we have hired a few people to work full-time on coming up with a list roughly this good, a few years earlier?”
I know a few people who were discouraged from working in the field earlier on because there was neither the list, nor the go-ahead to try to make one.
I don’t think any of the infohazards are mentioned here, but you’re right that good lists like this have been a long time coming. I haven’t heard that biosec folks actively didn’t want people in the field, though; I’d be interested in who said that.
FWIW, I know of a case from just last month where an EA biosecurity person I respect indicated that they or various people they knew had substantial concerns about the possibility of other researchers (who are known to be EA-aligned and are respected by various longtermist stakeholders) entering the space, due to infohazard concerns.
(I’m not saying I think these people should’ve been concerned or shouldn’t have been. I’m also not saying these people would have confidently overall opposed these researchers entering the space. I’m just registering a data point.)
I am surprised, and feel like I need more context. “This space” is probably too vague. I’m definitely opposed to even well-aligned people spending time thinking up new biothreats. But that’s very different than working on specific risk mitigation projects.
By “this space”, I meant the longtermist biosecurity/biorisk space. As far as I’m aware, the concern was along the lines of “These new people might not be sufficiently cautious about infohazards, so them thinking more about this area in general could be bad”, rather than it being tailored to specific projects/areas/focuses the new people might have (and in particular, it wasn’t because the people proposed thinking up new biothreats).
(But I acknowledge that this remains vague, and also this is essentially second-hand info, so people probably shouldn’t update strongly in light of it.)
I would agree that getting people who aren’t cautious about things like infohazards is a much more mixed blessing if we’re talking about biorisk generally, and I’d want to hear details about what they were doing and why there were concerns. (I can think of several people whose contributions are net-negative, because most of what they do is at best useless, and they create work for others to respond to.)
But as I said, the pitch here from ASB and Ethan was far more narrow, and mostly avoids those concerns.
It seems reasonable to me to be vigilant about sharing infohazards with new researchers in the field. Still, I wonder whether it might actually be worse to leave new researchers in the dark without teaching them how to recognize and contain those infohazards, especially when some are accessible on the internet. Is this a legitimate concern?