I don’t think any of the info hazards are mentioned here, but you’re right that good lists like this are a long time coming. I haven’t heard that biosec folks actively didn’t want people in the field though—would be interested in who said that.
FWIW, I know of a case from just last month where an EA biosecurity person I respect indicated that they or various people they knew had substantial concerns about the possibility of other researchers (who are known to be EA-aligned and are respected by various longtermist stakeholders) entering the space, due to infohazard concerns.
(I’m not saying I think these people should’ve been concerned or shouldn’t have been. I’m also not saying these people would have confidently overall opposed these researchers entering the space. I’m just registering a data point.)
I am surprised, and feel like I need more context. "This space" is probably too vague. I'm definitely opposed to even well-aligned people spending time thinking up new biothreats. But that's very different from working on specific risk mitigation projects.
By “this space”, I meant the longtermist biosecurity/biorisk space. As far as I’m aware, the concern was along the lines of “These new people might not be sufficiently cautious about infohazards, so them thinking more about this area in general could be bad”, rather than it being tailored to specific projects/areas/focuses the new people might have (and in particular, it wasn’t because the people proposed thinking up new biothreats).
(But I acknowledge that this remains vague, and also this is essentially second-hand info, so people probably shouldn’t update strongly in light of it.)
I would agree that bringing in people who aren't cautious about things like infohazards is a much more mixed blessing if we're talking about biorisk generally, and I'd want to hear details about what they were doing and why there were concerns. (I can think of several people whose contribution is net-negative, because most of what they do is at best useless and it creates work for others to respond to.)
But as I said, the pitch here from ASB and Ethan was far more narrow, and mostly avoids those concerns.
It seems reasonable to me to be vigilant about sharing infohazards with new researchers in the field. Still, I wonder if it might actually be worse to leave new researchers in the dark without teaching them how to recognize and contain those infohazards, especially when some are already accessible on the internet. Is this a legitimate concern?