This response is very late since I only just came across this post, but I was wondering whether the author has any more details on what the backlash against EA they mentioned specifically entailed? I haven't been able to find any information about this on the web, unless it specifically refers to the backlash against Esvelt's arguments regarding DNA synthesis screening, or against those discussing the effects of LLMs. Is the biosecurity community also, for example, pushing back against the arguments EAs are making for plausible biological existential risk?
I can’t speak for the author, and while I’d classify these as examples of suspicion and/or criticism of EA biosecurity rather than a “backlash against EA”, here are some links:
Will splashy philanthropy cause the biosecurity field to focus on the wrong risks?, Filippa Lentzos, 2019 (also linked and discussed on the forum)
Exaggerating the risks post series, Reflective Altruism, 2022-2024
Recent criticism specifically of AI-Bio risks, such as Propaganda or Science: Open Source AI and Bioterrorism Risk
I'll also say I've heard criticism of "securitising health", which is much less about EAs in biosecurity and more about clashing priorities between groups focused on global health and those focused on national security. EA biosecurity folks often end up seen as more aligned with the national security side because they prioritise risks from the deliberate misuse of biology.
Thanks Tessa. I actually came to this post and asked this question because it was quoted in the 'Exaggerating the risks' series, but the post itself didn't give any examples to back up the claim that Thorstad then quoted. I had come across this article by Undark, which includes statements from some experts who are quite critical of Kevin Esvelt's advocacy regarding nucleic acid synthesis. I think the Lentzos article is the kind of example I was wondering about, although I'm still not sure it directly shows that the problem is a failure to justify their position on the details of the source of risk itself. (Specifically, I think the key thing Lentzos is saying is that the risks Open Phil is worrying about are extremely unlikely in the near term. That's true; Open Phil just thinks these risks matter more for longtermist reasons and is therefore 1) more worried about what happens in the medium and long term and 2) still worried about low-probability, high-harm events. So the dispute doesn't seem to me to be necessarily about the details of catastrophic biorisk itself.)