Let's make nice things with biology. Working on nucleic acid synthesis screening at IBBIS. Also into dual-use risk assessment, synthetic biology, lab automation, event production, donating to global health. From Toronto, lived in Paris and Santiago, currently in the SF Bay. Website: tessa.fyi
Tessa A
Is That DNA Dangerous
A new biosecurity-relevant newsletter (which Anemone and I put together) is GCBR Organization Updates. Every few months, we'll ask organizations doing impactful work to reduce GCBRs to share their current projects, recent publications, and any opportunities for collaboration.
I was part of a youth delegation to the BWC in 2017, and I think the greatest benefit I got was that it raised my aspirations. I'm not sure I'd previously conceived of myself as the sort of person who could speak at the UN. I also heard an expert bowing out of dinner early because they had to go finish their slides for the next day, and realized there isn't some upper echelon of governance and society where everyone is hypercompetent and on top of things; even at the friggin' United Nations people are making their slides the night before.
I don't know how much of an effect this had on my decision to start a biosecurity meetup the next year and eventually transition to full-time biosecurity work, but I think it played a role. There are other benefits too: Schelling-point NGO networking, collecting lived-experience stories that make your understanding of diplomacy more vivid, and creating a pressure of prior consistency that increases the chance that a delegate will continue to work on biosecurity (YMMV on whether the last item is a benefit).
+1 on "specialist experts are surprisingly accessible to enthusiastic youth", cf. some relevant advice from Alexey Guzey
Thanks for this comment, and thanks to Nadia for writing the post; I'm really happy to see it up on the forum!
Chris and I wrote the guidance for reading groups and early entrants to the field; this was partly because we felt that new folks are most likely to feel stuck/intimidated/forced-into-deference/etc. and because it's where we most often found ourselves repeating the same advice over and over.
I think there are people whose opinions I respect who would disagree with the guidance in a few ways:
We recommend a few kinds of interpersonal interventions, and some people think this is a poor way to manage information hazards, and the community should aim to have much more explicit/regimented policies
We recommend quite a bit of caution about information hazards, which more conservative people might consider an attention hazard in and of itself (drawing attention to the fact that information that would enable harm could be generated)
We recommend quite a bit of caution about information hazards, which less conservative people might consider too encouraging of deference or secrecy (e.g. people who have run into more trouble doing successful advocacy or recruiting/fostering talent, people who have different models of infohazard dynamics, people who are worried that a lack of transparency worsens the community's prioritization)
We don't cover a lot of common scenarios, as Nadia noted in her comment
(Side note: it's always both flattering and confusing to be considered a "senior member" of this community. I suppose it's true, because EA is very young, but I have many collaborators and colleagues who have decade(s) of experience working full-time on biorisk reduction, which I most certainly do not.)
This is more a response to "it is easy to build an intuitive case for biohazards not being very important or an existential risk", rather than your proposals...
My feeling is that it is fairly difficult to make the case that biological hazards present an existential as opposed to catastrophic risk and that this matters for some EA types selecting their career paths, but it doesn't matter as much in the grand scale of advocacy? The set of philosophical assumptions under which "not an existential risk" can be rounded to "not very important" seems common in the EA community, but extremely uncommon outside of it.
My best guess is that any existential biorisk scenarios probably route through civilisational collapse, and that those large-scale risks are most likely a result of deliberate misuse, rather than accidents. This seems importantly different from AI risk (though I do think you might run into trouble with reckless or careless actors in bio as well).
I think a focus on global catastrophic biological risks already puts one's focus in a pretty different (and fairly neglected) place from many people working on reducing pandemic risks, and that the benefit of trying to get into the details of whether a specific threat is existential or catastrophic doesn't really outweigh the costs of potentially generating infohazards.
My guess is that (2) will be fairly hard to achieve, because the sorts of threat models that are sufficiently detailed to be credible to people doing hardcore existential-risk-motivated cause prioritization are of dubious cost-benefit from an infohazard perspective.
Happy to pitch in with a few stories of rejection!
2010: I applied to MIT and Princeton for undergraduate studies and wasn't accepted to either. Not trying harder to get into those schools was a major regret of mine for about 5 years (I barely studied for the SATs, in part because I was the only person I knew who took them… it's uncommon for Canadians to attend university in the States). I later ended up working on teams with people who had gone to fancy US schools, such that I no longer believe this had a clearly negative impact on my trajectory.
2018: Rejected for LTFF funding for the biosecurity conference that eventually became Catalyst. We re-applied in a subsequent round and were funded.
2018: I applied to be a Research Analyst at Open Phil in their big 2018 recruitment round, and got through two rounds of work tests before ultimately being rejected after an interview. The interview really didn't go well; I felt like a total idiot, and didn't get the job. This was maybe the roughest rejection; I felt like I wasted basically all of my non-work time for a month on work tests, at a time when I was feeling pretty bad about how effectively I was spending my time.
2018: Rejected from the SynBioBeta conference fellowship run by Johns Hopkins, which at the time felt like it could have been an entry point into a biosecurity career transition. Definitely had some angst about whether it was even possible to make such a transition.
2019: I was rejected from a really cool engineering role at Culture Biosciences after a phone screen interview. I got so distressed after this ("I'm not technical enough for a real hardware-y engineering job any more! augh!!") that I did some electronics projects that I really didn't have time for, largely out of angst. They later reached out to me again when they had a role closer to my (more software-specialized) skillset, and I completed a full round of interviews and received an offer, though I ultimately decided not to leave my job in order to have more time to focus on my part-time biosecurity projects.
These were all pretty painful for me at the time… and I'm realizing I've since come up with stories where the rejections were okay, or part of a fine trajectory. I guess one message here is "just because you were rejected once doesn't mean you will be if you apply again"?
Maybe there's a huge illusion in EA of "someone else has probably worked out these big assumptions we are making". This goes all the way up to the person at Open Phil thinking "Holden has probably worked these out" but actually no one has.
I just wanted to highlight this in particular; I have heard people at Open Phil say things along the lines of "… but we could be completely wrong about this!" about large strategic questions. A few examples related to my work:
Is it net positive to have a dedicated community of EAs working on reducing GCBRs, or would it be better for people to be more fully integrated into the broader biosecurity field?
If we want to have this community, should we try to increase its size? How quickly?
Is it good to emphasize concerns about dual-use and information hazards when people are getting started in biosecurity, or does that end up either stymieing them or (worse) inspiring them to produce more harmful ideas?
These are big questions, and I have spent dozens (though not hundreds) of hours thinking about them… which has led to me feeling like I have "working hypotheses" in response to each. A working hypothesis is not a robust, confident answer based on well-worked-out assumptions. I could be wrong, but I suspect this is also true in many other areas of community building and cause prioritisation, even "all the way up".
I recall meeting Karolina M. Sulich, the VP of Osmocosm, at EAGxBerlin last year, and thought some of her machine olfaction x biosecurity ideas were really cool! I'd be stoked for more people to look into this.
A few more you might share:
Global Partnership Against the Spread of Weapons and Materials of Mass Destruction mailing list
Biological Security (Weekly Updates) from Parliamentarians for Global Action
This is great! I think that project-based learning is simply a much more effective way to learn about a cause area than going through a reading list (I know you've written about this before). Cold Takes has quite a lot of writing about how just reading stuff is probably not the best way to form a view and robustly retain things.
It's also super generous of you to offer to review people's fit-test projects :)
Another poem about loss that moves me, this one specifically about grieving a dear friend:
It's what others do, not us, die, even the closest
on a vainglorious, glorious morning, as the song goes,
the yellow or golden palms glorious and all the rest
a sparkling splendour, die. They're practising calypsos,
they're putting up and pulling down tents, vendors are slicing
the heads of coconuts around the Savannah, men
are leaning on, then leaping into pirogues, a moon will be rising
tonight in the same place over Morne Coco, then
the full grief will hit me and my heart will toss
like a horse's head or a threshing bamboo grove
that even you could be part of the increasing loss
that is the daily dial of the revolving shade. Love
lies underneath it all though, the more surprising
the death, the deeper the love, the tougher the life.
The pain is over, feathers close your eyelids, Oliver.
What a happy friend and what a fine wife!
Your death is like our friendship beginning over.

(for Oliver Jackman, Derek Walcott)
My favourite cookbook right now is The Korean Vegan. Magical, delicious flavour combinations. The bulgogi blew my mind. The cookbook also sets you up to have a fridge full of sauces and banchan to dress up any weekday rice + protein combination into a delicious meal.
This West-African-inspired peanut soup from Cookie and Kate is what I pull out whenever I want to make something impressively delicious, but also fast and low-effort.
I found I Can Cook Vegan by Isa Chandra Moskowitz to be somewhat hit and miss, but the hits (buffalo cauliflower salad, sloppy shiitakes, chickpea tuna melt, maple-mustard Brussels sprouts) were really solid. I recommend this over her earlier cookbooks; she has really reined in her desire to have 30-ingredient recipes that take over an hour to prepare.
The Moosewood Cookbook is a classic for a reason, but you gotta get a version released either before or after the 1990s low-fat fad. We like oil and salt! We like calories! Put the fat in!!
This was a beautiful remembering, thank you for sharing it. Often how I want to grieve people is just to remember them in detail, saying: they were here, not like anyone else, but specifically this is the way they were; I remember, and I wish they were still in the world. This post felt like that sort of grief.
This is my favourite poem about grief, which I often return to when grieving the people I've lost (most especially my partner Zach):
I am not resigned to the shutting away of loving hearts in the hard ground.
So it is, and so it will be, for so it has been, time out of mind:
Into the darkness they go, the wise and the lovely. Crowned
With lilies and with laurel they go; but I am not resigned.

Lovers and thinkers, into the earth with you.
Be one with the dull, the indiscriminate dust.
A fragment of what you felt, of what you knew,
A formula, a phrase remains,—but the best is lost.

The answers quick and keen, the honest look, the laughter, the love,—
They are gone. They are gone to feed the roses. Elegant and curled
Is the blossom. Fragrant is the blossom. I know. But I do not approve.
More precious was the light in your eyes than all the roses in the world.

Down, down, down into the darkness of the grave
Gently they go, the beautiful, the tender, the kind;
Quietly they go, the intelligent, the witty, the brave.
I know. But I do not approve. And I am not resigned.

(Dirge Without Music, Edna St. Vincent Millay)
Thanks for this post! I agree with your point about being careful on terms, and thought it might be useful to collect a few definitions together in a comment.
DURC (Dual-Use Research of Concern)
DURC is defined differently by different organizations. The WHO defines it as:
research that is intended to provide a clear benefit, but which could easily be misapplied to do harm
while the definition given in the 2012 US government DURC policy is:
life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security
ePPP (enhanced Potential Pandemic Pathogen)
ePPP is a term (in my experience) mostly relevant to the US regulatory context, and was set out in the 2017 HHS P3CO Framework as follows:
A potential pandemic pathogen (PPP) is a pathogen that satisfies both of the following:
It is likely highly transmissible and likely capable of wide and uncontrollable spread in human populations; and
It is likely highly virulent and likely to cause significant morbidity and/or mortality in humans.
An enhanced PPP is defined as a PPP resulting from the enhancement of the transmissibility and/or virulence of a pathogen. Enhanced PPPs do not include naturally occurring pathogens that are circulating in or have been recovered from nature, regardless of their pandemic potential.
One way in which this definition has been criticized (quoting the recent NSABB report on updating the US biosecurity oversight framework) is that "research involving the enhancement of pathogens that do not meet the PPP definition (e.g., those with low or moderate virulence) but is anticipated to result in the creation of a pathogen with the characteristics described by the PPP definition could be overlooked."
GOF (Gain-of-Function)
GOF is not a term that I know to have a clear definition. In the linked Virology under the microscope paper, examples range from making Arabidopsis (a small flowering model plant) more drought-resistant to making H5N1 (avian influenza) transmissible between mammals. I suggest avoiding this term if you can. (The paper acknowledges the term is fuzzily defined, citing The shifting sands of "gain-of-function" research.)
Biosafety, biosecurity, biorisk
The definitions you gave in the footnote seem solid, and similar to the ones I'd offer, though one runs into competing definitions (e.g. the definition provided for biosafety doesn't mention unintentional exposure). I will note that EA tends to treat "biosecurity" as an umbrella term for "reducing biological risk" in a way that doesn't reflect its usage in the biosecurity or public health communities. Also, as far as I can tell, Australia means a completely different thing by "biosecurity" than the rest of the English-speaking world, which will sometimes lead to confusing Google results.
Just echoing the experience of "it's been a pretty humbling experience to read more of the literature"; biosecurity policy has a long history of good ideas and nuanced discussions. On US gain-of-function policy in particular, I found myself particularly humbled by the 2015 article Gain-of-function experiments: time for a real debate, an adversarial collaboration between researchers involved in controversial viral gain-of-function work and biosecurity professionals who had argued such work should face more scrutiny. It's interesting to see where the contours of the debate have changed and how much they haven't changed in the past 7+ years.
Biosecurity Happy Hour – EAG Bay Area 2023
Yeah, my impression from Canada is that master's degrees are not all scams. A totally normal path for an academic is to do a (poorly) paid, research-based master's in one lab, then jump over to another lab for a (maybe slightly shorter than in the USA) PhD.
That said, the most academically impressive researchers I knew at my Canadian school (i.e. already had solid publications and research experience as undergrads) went straight to US-based PhDs, even if they were hoping to return to Canada as academics after getting their doctorate.
One thing that sort of did this for me at EAGxBerlin, which I wonder if we could have some kind of infrastructure for, was hosting "unofficial office hours" where I put my name on a piece of paper and sat in a specific place for two hours, and talked with people who came past. (I was also able to tell people in Swapcard that we could talk during that time as well as or instead of in a 1:1.)
I could imagine unconference-y or "host your own conversation table" infrastructure for this as well (instead of or in addition to "unofficial office hours with X").
I can't speak for the author, and while I'd classify these as examples of suspicion and/or criticism of EA biosecurity rather than a "backlash against EA", here are some links:
Will splashy philanthropy cause the biosecurity field to focus on the wrong risks?, Filippa Lentzos, 2019 (also linked and discussed on the forum)
Exaggerating the risks post series, Reflective Altruism, 2022-2024
Recent criticism specifically of AI-Bio risks, such as Propaganda or Science: Open Source AI and Bioterrorism Risk
I'll also say I've heard criticism of "securitising health", which is much less about EAs in biosecurity and more about clashing concerns between groups that prioritise global health and national security, where EA biosecurity folks often end up seen as more aligned with the national security concerns due to prioritising risks from deliberate misuse of biology.