Thanks for writing the post. I essentially agree with the steers on which areas are more or less "risky". Another point worth highlighting is that, given these issues tend to be difficult to judge and humans are error-prone, it can be worth running things by someone else. Folks are always welcome to contact me if I can be helpful for this purpose.
But I disagree with the remarks in the post along the lines of "There's lots of valuable discussion that is being missed out on in EA spaces on biosecurity, due to concerns over infohazards". Often (perhaps usually) the main motivation for discretion isn't "infohazards!".
Whilst (as I understand it) the "EA" perspective on AI safety covers distinct issues from mainstream discussion on AI ethics (e.g. autonomous weapons, algorithmic bias), the main distinction between "EA" biosecurity and "mainstream" biosecurity is one of scale. Thus similar topics are shared between both, and many possible interventions/policy improvements have dual benefit: things that help mitigate the risk of smaller outbreaks tend to help mitigate the risk of catastrophic ones.
These topics are generally very mature fields of study. To put it in perspective, with ~5 years in medicine and public health and 3 degrees, I am roughly par for credentials and substantially below par for experience at most expert meetings I attend: I know people who have worked on (say) global health security longer than I have been alive. I'd guess some of this could be put down to unnecessary credentialism and hierarchy, and it doesn't mean there's nothing left to do because all the good ideas have already been thought of, but it does make it likely that the low-hanging fruit has been plucked, and that useful contributions are hard to make without substantial background knowledge.
These are also areas which tend to have powerful stakeholders, entrenched interests, and, in many cases (especially security-adjacent issues), great political sensitivity. Thus even areas which are pretty "safe" from an information hazard perspective (e.g. better governance of dual-use research of concern) can nonetheless be delicate to talk about publicly. Missteps are easy to make (especially without the relevant tacit knowledge), and the consequences can be (as you note in the write-up) to inoculate the idea, but also to alienate powerful interests and potentially discredit the wider EA community.
The latter is something I'm particularly sensitive to. This is partly due to my impression that the "growing pains" in other EA cause areas tended to incur unnecessary risk. It is also because the reactions of folks in the pre-existing community, when contemplating EA involvement, tend not to be unalloyed enthusiasm. They tend to be very impressed with my colleagues who are starting to work in the area, have an appetite for new ideas and "fresh eyes", and are reassured that EAs in this area tend to be cautious and responsible. Yet despite this they tend to remain cautious about the potential to have a lot of inexperienced people bouncing around delicate areas, both in general and because of their own exposure through backing this community in particular, as they are often going somewhat "out on a limb" to support "EA biosecurity" objectives in the first place.
Another feature of this landscape is that the general path to impact of a "good biosecurity idea" is to socialize it in the relevant expert community and build up a coalition of support. (One could argue how efficient this is from the point of view of the universe, but it is the case regardless.) In consequence, my usual advice for people seeking to work in this area is that career capital is particularly valuable, not just for developing knowledge and skills, but also for gaining the network and credibility to engage with the relevant groups.
Thanks for the thoughtful response!

I want to start with the recognition that everything I remember hearing from you in particular around this topic, here and elsewhere, has been extremely reasonable. I also very much liked your paper.
My experience has been that I have had multiple discussions around disease get shut down prematurely in some in-person EA spaces, or else turn into extended discussions of infohazards, even when I'm careful. At some point, it started to feel more like a meme than anything. There are some cases where "infohazards" were brought up as a genuine, relevant concern, but I also think there are a lot of EAs and rationalists who seem to have a better grasp of the infohazard meme than of anything topical in this space. Some of the sentiment you're pointing to is largely a response to that, and it was one of the motivations for writing a post focused on clear heuristics and guidelines. I suspect this sort of thing happening repeatedly comes with its own kind of reputational risk, which could stand to see some critical examination.
I think there are good reasons for the apparent consensus you present that particularly effective EA biorisk work requires extraordinarily credentialed people.* You did a good job of presenting that here. The extent to which political sensitivity and the delicate art of reputation management play into this is something I was partially aware of, but had perhaps under-weighted. I appreciate you spelling it out.
The military seems to have every reason to adopt discretion as a default. There's also a certain tendency of the media and general public to freak out in actively damaging directions around topics like epidemiology, which might feed somewhat into a need for reputation-management-related discretion in those areas as well. The response to an epidemic seems to have a huge, and sometimes negative, impact on how a disease progresses, so a certain level of caution in these fields seems pretty warranted.
I want to quickly note that I tend to be relatively unconvinced that mature and bureaucratic hierarchies are evidence of a field being covered competently. But I would update considerably in your direction if your experience agrees with something like the following:
Is it your impression that whenever you (or talented friends in this area) come up with a reasonably implementable good idea, then after searching around you tend to discover that someone else has already found it and tried it?
And if not, what typically seems to have gone wrong? Is there a step that usually falls apart?
Here are some possible bottlenecks I could think of, and I'm curious if one of them sounds more right to you than the others:
- Is it hard to search for what's already been done, to the point that there are dozens of redundant projects?
- Is there simply too much to do, with each project a rather large undertaking (a million good ideas, each of which would take 10 years to test)?
- Does it seem too challenging for people to find some particular kind of collaborator? A resource inadequacy?
- Is the field riddled with untrustworthy contributions, just waiting for a replication crisis? (That would certainly do a lot to justify the unease and skepticism about newcomers that you described above.)
- Does it mostly look like good ideas tend to die a bureaucratic death?
- Does it seem as if, structurally, it's almost impossible for people to remain motivated by the right things?
- Or is the field just... noisy, for lack of a better word? Hard to measure for real effect or success?
*It does alienate me, personally. I try very hard to stand as a counterexample to "credentialism-required": someone who tries to get mileage out of engaging with conversations and small biorisk-related interventions as a high-time-investment hobby on the side of an analysis career. Officially, all that backs me up here is a biology-related BS degree, a lot of thought, enthusiasm, and a tiny dash of motivating spite. If there wasn't at least a piece of me fighting against some of the strong-interpretation implications of this conclusion, this post would never have been written. But I do recognize some level of validity to the reasoning.
Hello Spiracular,

"Is it your impression that whenever you (or talented friends in this area) come up with a reasonably implementable good idea, then after searching around you tend to discover that someone else has already found it and tried it?"

I think this is somewhat true, although I don't think this (or the suggested bottlenecks listed after it) quite hits the mark. The mix of considerations is something like this:
1) I generally think the existing community covers the area fairly competently (from an EA perspective). I think the main reason for this is that the "wish list" of what you'd want to see for (say) a disease surveillance system from an EA perspective will have a lot of common elements with what those with more conventional priorities would want. Combined with billions of dollars and many able professionals, even areas which are neglected in relative terms still tend to have well-explored margins.
1.1) So there are a fair few cases where I come across something in the literature that anticipates an idea I had, or where colleagues/collaborators report back, "It turns out people are already trying to do all the things I'd want them to do re. X".
1.2) Naturally, given I'm working on this, I don't think there are no more good ideas to have. But it also means I foresee quite a lot of the value lying in rebalancing/pushing the envelope of the existing portfolio, rather than in "EA biosecurity" striking out on its own.
2) A lot turns on "reasonably implementable". Treacherous terrain usually lies between idea and implementation, and propelling the former to the latter generally needs a fair amount of capital (of various types). I think this is the typical story for why many fairly obvious improvements haven't happened.
2.1) For policy contributions, perhaps the main challenge is buy-in. Usually you can't "implement it yourself", and must instead rely on influencing the relevant stakeholders (e.g. science, industry, government(s)) to have an impact. Bandwidth is limited in the best case, and typical cases tend to be fraught with well-worn conflicts arising from differing priorities. Hence the delicacy mentioned above.
2.2) For technical contributions, there are "up-front" challenges common to doing any sort of bio-science research (e.g. wet labs are very expensive). However, pushing one of these up the technology readiness levels to implementation also runs into similar policy challenges (as, again, you can seldom "implement it yourself").
3) This doesn't mean there are no opportunities to contribute. Even if there's a big bottleneck further down the policy funnel, new ideas upstream still have value (although knowing what the bottleneck looks like can help one target them for easier passage, and to avoid backfiring), and in many cases there will be more incremental work which can lay the foundation for further development. There could also be a synergistic relationship, where folks who are more heavily enmeshed in the existing community help translate initiatives/ideas from those less so.
Just wanted to say thanks to both Gregory and Spiracular for their detailed and thoughtful back and forth in this thread. As someone coming from a place somewhere in the middle but having spent less time thinking through these considerations, I found getting to hear your personal perspectives very helpful.
Thanks! For me, this does a bit to clear up why buy-in is perceived as such a key bottleneck.
(And secondarily, supporting the idea that other areas of fairly high ROI are likely to be centered around facilitating collaboration and consolidation of resources among people with a lot of pre-existing experience/expertise/buy-in.)