Let’s make nice things with biology. Working on nucleic acid synthesis screening at IBBIS. Also into dual-use risk assessment, synthetic biology, lab automation, event production, donating to global health. From Toronto, lived in Paris and Santiago, currently in the SF Bay. Website: tessa.fyi
Tessa A 🔸
The paper Existential Risk and Cost-Effective Biosecurity makes a distinction between Global Catastrophic Risk and Existential Risk in the context of biological threats:
Quoting the caption from the paper: A spectrum of differing impacts and likelihoods from biothreats. Below each category of risk is the number of human fatalities. We loosely define global catastrophic risk as being 100 million fatalities, and existential risk as being the total extinction of humanity. Alternative definitions can be found in previous reports, as well as within this journal issue.
One thing I find hopeful, under the “Consensus-finding on risks and benefits of research” idea, is that the report Emerging Technologies and Dual-Use Concerns (WHO, 2021) includes two relevant governance priorities:
Safety by design in dual-use research projects: “A comprehensive approach to identifying the potential benefits and risks of research may improve the design and flag potential pitfalls early in the research.”
Continued lack of a global framework for DURC: “Previous WHO consultations have highlighted the lack of a global framework as a critical gap, and regulations, norms and laws to address DURC remain fragmented among stakeholders and countries.”
This was based on an expert elicitation study using the IDEA (Investigate, Discuss, Estimate, Aggregate) framework… I find it hopeful that this process identified these governance issues as priorities!
That said, I find it less hopeful that when “asked to allocate each issue a score from 1 to 100 reflecting its impact and plausibility” the scores for “The Lack of a Global DURC Framework” appear to range from 1 to 99:
Figure 2 from the CSER / WHO report on Emerging Technologies and Dual-Use Concerns.
I had recent cause to return to this post and will note that I am currently working on a short paper about this.
I think people are also unaware of how tiny the undergraduate populations of elite US/UK universities are, especially if you (like me) did not grow up or go to school in those countries.
Quoting a 2015 article from Joseph Heath, which I found shocking at the time:
There are few better ways of illustrating the difference than to look at the top U.S. colleges and compare them to a highly-ranked Canadian university, like the University of Toronto where I work. The first thing you’ll notice is that American schools are miniscule. The top 10 U.S. universities combined (Harvard, Princeton, Yale, etc.) have room for fewer than 60,000 undergraduates total. The University of Toronto, by contrast, alone has more capacity, with over 68,000 undergraduate students.
In other words, Canadian universities are in the business of mass education. We take entire generations of Canadians, tens of thousands of them recent immigrants, and give them access to the middle classes. Fancy American schools are in the business of offering boutique education to a very tiny, coddled minority, giving them access to the upper classes. That’s a really fundamental difference.
Oxford (12,510 undergraduates) and Cambridge (12,720 undergraduates) are less tiny, but still comparatively small, especially since the UK population is about 1.75x Canada’s.
In terms of needing such a system to be lightweight and specific: this also implies needing what is sometimes called “adaptive governance” (i.e. you have to be able to rapidly change your rules when new issues emerge).
For example, there were ambiguities about whether SARS-CoV-2 fell under Australia Group export controls on “SARS-like-coronaviruses” (related journal article)… a more functional system would include triggers for removing export controls (e.g. at a threshold of global transmission, public health needs will likely outweigh biosecurity concerns about pathogen access).
Freakonomics also currently in Global Poverty!
Distribution of cost-effectiveness feels like one of the most important concepts from the EA community. The attitude that, for a given goal that you have, some ways of achieving that goal will be massively more cost-effective than others is an assumption that underlies a lot of cause comparisons, and the value of doing such comparisons at all.
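To make the heavy-tail intuition concrete, here is a minimal sketch in Python, with entirely made-up numbers (not drawn from any real cost-effectiveness data): if cost-effectiveness across interventions is lognormally distributed, the best option out of a hundred can beat the median by a large factor.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical cost-effectiveness ("good done per dollar") for 100 ways
# of pursuing the same goal, drawn from a heavy-tailed lognormal
# distribution. The parameters are arbitrary -- illustrative only.
effectiveness = sorted(random.lognormvariate(0, 2) for _ in range(100))

median = effectiveness[50]
best = effectiveness[-1]

print(f"median: {median:.2f}, best: {best:.2f}, "
      f"best/median: {best / median:.0f}x")
```

The exact numbers depend on the seed and parameters, but with any heavy-tailed distribution the best-vs-median ratio is routinely very large, which is the intuition behind doing cause comparisons at all.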
I want to especially +1 item (3) here: the best actions for a skill-focused group will be very different depending on how skilled its group members are. I’m speaking from my own experience organising a biosecurity-focused group (which fizzled out because the core members skilled up and ended up focused on direct work… not a bad outcome).
Some examples of the purposes of skill-focused groups, at different skill levels:
Newcomer = learn together
Member goals: Figure out if you are interested in an area, or what you are interested in within it.
Core activities: Getting familiar with foundational papers and ideas in the field.
Possible structures: reading groups, giving talks summarizing current work, watching lectures together, collectively brainstorming questions you have, shared research on basic questions.
Advanced Beginner = sharpen ideas
Member goals: Figure out if your ideas and projects in an area are good, be ready to pivot as you learn more.
Core activities: Get feedback on your ideas, find useful resources or potential collaborators.
Possible structures: lightning talks, one person presents and receives feedback on their project, fireside chats or Q&As with experts.
Expert = keep up with the field
Member goals: Make progress on your projects while staying aware of relevant new developments.
Core activities: Find potential synergies with your work, get feedback and critique, find collaborators.
Possible structures: seminar series focused on project updates, research reading groups where summary talks are given by more junior group members.
a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities
What is the definition you’d prefer people to stick to? Something like “being pushed into actions that have a very low probability of producing value, because the reward would be extremely high in the unlikely event they did work out”?
The Drowning Child argument doesn’t seem like an example of Pascal’s Mugging, but Wikipedia gives the example of:
“give me five dollars, or I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3 ↑↑↑↑ 3 people”
and I think recent posts like The AI Messiah are gesturing at something like that (see, even, this video from the comments on that post: Is AI Safety a Pascal’s Mugging?).
I haven’t looked into this in detail (honest epistemic status: saw a screenshot on Twitter) but what do you think of the recent paper Association of Influenza Vaccination With Cardiovascular Risk?
Quoting from it, re: tractable interventions:
The effect sizes reported here for major adverse cardiovascular events and cardiovascular mortality (in patients with and without recent ACS) are comparable with—if not greater than—those seen with guideline-recommended mainstays of cardiovascular therapy, such as aspirin, angiotensin-converting enzyme inhibitors, β-blockers, statins, and dual antiplatelet therapy.
Minor elaboration on your last point: a piece of advice I got from someone who did psychological research on how to solicit criticism was to try to brainstorm what someone’s most likely criticism of you would be, and then offer that up when requesting criticism, as this is a credible indication that you’re open to it. Examples:
“Hey, do you have any critical feedback on the last discussion I ran? I talked a lot about AI stuff, but I know that can be kind of alienating for people who have more interest in political action than technology development… Does that seem right? Is there other stuff I’m missing?”
“Hey, I’m looking for criticism on my leadership of this group. One thing I was worried about is that I make time for 1:1s with new members, but not so much with people that have been in the group for more than one year...”
“Did you think there was anything off about our booth last week? I was noticing we were the only group handing out free books, maybe that looked weird. Did you notice anything else?”
Some recent-ish resources that potential applicants might want to check out:
David Manheim and Gregory Lewis, High-risk human-caused pathogen exposure events from 1975-2016, data note published in August 2021.
As a way to better understand the risk of Global Catastrophic Biological Risks due to human activities, rather than natural sources, this paper reports on a dataset of 71 incidents involving either accidental or purposeful exposure to, or infection by, a highly infectious pathogenic agent.
Filippa Lentzos and Gregory D. Koblentz, Mapping Maximum Biological Containment Labs Globally, policy brief published in May 2021 as part of the Global Biolabs project.
This study provides an authoritative resource that: 1) maps BSL4 labs that are planned, under construction, or in operation around the world, and 2) identifies indicators of good biosafety and biosecurity practices in the countries where the labs are located.
2021 Global Health Security Index, https://www.ghsindex.org/.
If you click through to the PDFs under each individual country profile, they have detailed information on the country’s biosafety and biosecurity laws! (Example: the exact laws aren’t clear from https://www.ghsindex.org/country/ukraine/ but if you click through to the “Country Score Justification Summary” PDF (https://www.ghsindex.org/wp-content/uploads/2021/12/Ukraine.pdf) it has like 100 pages of policy info.)
One now-inactive past project in this space that I would highlight (since I would very much like something similar to exist again) is The Sunshine Project. Quoting its (sadly very short) Wikipedia page:
The Sunshine Project worked by exposing research on biological and chemical weapons. Typically, it accessed documents under the Freedom of Information Act and other open records laws, publishing reports and encouraging action to reduce the risk of biological warfare. It tracked the construction of high containment laboratory facilities and the dual-use activities of the U.S. biodefense program.
Some more on Edward Hammond’s work/methods show up in this press article on The Worrying Murkiness of Institutional Biosafety Committees:
In 2004, an activist named Edward Hammond fired up his fax machine and sent out letters to 390 institutional biosafety committees across the country. His request was simple: Show me your minutes.
...
The committees “are the cornerstone of institutional oversight of recombinant DNA research,” according to the NIH, and at many institutions, their purview includes high-security labs and research on deadly pathogens.
...
When Hammond began requesting minutes in 2004, he said, he intended to dig up information about bioweapons, not to expose cracks in biosafety oversight. But he soon found that many institutions were unwilling to hand over minutes, or were struggling to provide any record of their IBCs at all. For example, he recalled, Utah State was a hub of research into biological weapons agents. “And their biosafety committee had not met in like 10 years, or maybe ever,” Hammond said. “They didn’t have any records of it ever meeting.”
I logically acknowledge that: “In some cases, an extravagant lifestyle can even produce a lot of good, depending on the circumstances… It’s not my preferred moral aesthetic, but the world’s problems don’t care about my aesthetics.”
I know that, but… I care about my aesthetics.
For nearly everyone, I think there exists a level of extravagance that disgusts their moral aesthetics. I’m sure I sit above that level for some, with my international flights and two $80 keyboards. My personal aesthetic disgust triggers somewhere around “how dare you spend $1000 on a watch when people die of dehydration”. Giving a blog $100,000 isn’t quite disgusting, yet, ew?
The post I’ve read that had the least missing mood around speculative philanthropy was probably the So You Want To Run A Microgrants Program retrospective on Astral Codex Ten, which included the following:
If your thesis is “Instead of saving 300 lives, which I could totally do right now, I’m gonna do this other thing, because if I do a good job it’ll save even more than 300 lives”, then man, you had really better do a good job with the other thing.
I like the scenario this post gives for risks of omission: a giant Don’t Look Up asteroid hurtling towards the earth. I wouldn’t be mad if people misspent some money, trying to stop it, because the problem was so urgent. Problems are urgent!
...yet, ew? So many other things look kind of extravagant, and they’re competing against lives. I feel unsure about whether to treat my aesthetically-driven moral impulses as useful information about my motivations vs. obviously-biased intuitions to correct against.
(For example, I started looking into donating a kidney a few years ago and was like… man, I could easily save an equal number of years of life without accruing 70+ micromorts, but that’s not nearly as rad? Still on the fence about this one.)
[crosspost from my twitter]
You might be interested to know that iGEM (disclosure: my employer) just published a blog post about infohazards. We currently offer biorisk workshops for teams; this year we plan to offer a general workshop on risk awareness, a workshop specifically on dual-use, and potentially some others. We don’t have anything on general EA / rationality, though we do share biosecurity job and training opportunities with our alumni network.
On passive technologies, I imagine the links from Biosecurity needs engineers and materials scientists would be informative. The areas highlighted there under “physical protection from pathogens” are:
Improving personal protective equipment (PPE)
Suppressing pathogen spread in the built environment
Improving biosafety in high-containment labs and clinics
Suppressing pathogen spread in vehicles
For spread in vehicles and the built environment, my sense (based on conversations with others, not independent research) is that lots of folks are excited about upper-air UV-C systems to deactivate viruses. I don’t know the best reading on that so here’s a somewhat random March 2022 paper on the subject: Far-UVC (222 nm) efficiently inactivates an airborne pathogen in a room-sized chamber
(For all of these comments, take these resources as a lower-intensity recommendation than other things on this list, since these are selected based on the criteria of “things that seem relevant to this topic” rather than “things I found particularly interesting”.)
On cyberbiosecurity:
I enjoyed Defining “Cyberbiosecurity” and why we should stop using the term, a skeptical 2019 blog post from Alexander Titus, which basically argues that “cyberbiosecurity” is a term that ends up discouraging work because no one knows where to start!
The winners of the 2021 NTI Next Generation for Biosecurity contest wrote Towards Responsible Genomic Surveillance: A Review of Biosecurity and Dual-use Regulation which focuses on data privacy issues related to pandemic genomic surveillance
Dual use of artificial-intelligence-powered drug discovery, a March 2022 paper, argues for controlled API access to ML models that might be used to generate toxins
The recent (April 2022) paper Biosecurity in an age of open science looks at some biosecurity implications of open data sharing, and argues for access controls and APIs based on FAIR principles
Under Solutions to deal with misinformation, Tara Kirk Sell at the Johns Hopkins Center for Health Security has done a bunch of related work (her list of publications includes things like a National Priorities to Combat Misinformation and Disinformation for COVID-19 and Future Public Health Threats: A Call for a National Strategy and Longitudinal Risk Communication: A Research Agenda for Communicating in a Pandemic). She was also interviewed for the 80,000 Hours podcast in May 2020, though I suspect her thinking has evolved since then.
I have strong “social security number” associations with the acronym SSN.
Setting those aside, I feel “scale” and “solvability” are simpler and perhaps less jargon-y words than “impact” and “tractability” (which is probably good), but I hear people use “impact” much more frequently than “scale” in conversation, and it feels broader in definition, so I lean towards “ITN” over “SSN”.
Relatedly, an area where I think arXiv could have a huge impact (in both biosecurity and AI) would be setting standards for easy-to-implement managed access to algorithms and datasets.
This is something called for in Biosecurity in an Age of Open Science.
This sort of idea also appears in New ideas for mitigating biotechnology misuse under responsible access to genetic sequences and in Dual use of artificial-intelligence-powered drug discovery as a proposal for managing risks from algorithmically designed toxins.