FWIW, I know of a case from just last month where an EA biosecurity person I respect indicated that they or various people they knew had substantial concerns about the possibility of other researchers (who are known to be EA-aligned and are respected by various longtermist stakeholders) entering the space, due to infohazard concerns.
(I'm not saying I think these people should've been concerned or shouldn't have been. I'm also not saying these people would have confidently overall opposed these researchers entering the space. I'm just registering a data point.)
I am surprised, and feel like I need more context. "This space" is probably too vague. I'm definitely opposed to even well-aligned people spending time thinking up new biothreats. But that's very different than working on specific risk mitigation projects.
By "this space", I meant the longtermist biosecurity/biorisk space. As far as I'm aware, the concern was along the lines of "These new people might not be sufficiently cautious about infohazards, so them thinking more about this area in general could be bad", rather than it being tailored to specific projects/areas/focuses the new people might have (and in particular, it wasn't because the people proposed thinking up new biothreats).
(But I acknowledge that this remains vague, and also this is essentially second-hand info, so people probably shouldn't update strongly in light of it.)
I would agree that getting people who aren't cautious about things like infohazards is a much more mixed blessing if we're talking about biorisk generally, and I'd want to hear details about what they were doing, and why there were concerns. (I can think of several people whose contribution is net-negative, because most of what they do is at best useless, and they create work for others to respond to.)
But as I said, the pitch here from ASB and Ethan was far more narrow, and mostly avoids those concerns.
It seems reasonable to me to be vigilant about sharing infohazards with new researchers in the field. Still, I wonder if it might actually be worse to leave new researchers in the dark, without teaching them how to recognize and contain those infohazards, especially when some are accessible on the internet. Is this a legitimate concern?