Let's make nice things with biology. Working on nucleic acid synthesis screening at IBBIS. Also into dual-use risk assessment, synthetic biology, lab automation, event production, donating to global health. From Toronto, lived in Paris and Santiago, currently in the SF Bay. Website: tessa.fyi
Tessa A
While referencing the 7 Generations principle, I would credit it to "the Iroquois confederacy" or "the Haudenosaunee (Iroquois) confederacy" rather than "the Iroquois tribe". There isn't one tribe associated with that name; it's an alliance formed by the Mohawk, Oneida, Onondaga, Cayuga and Seneca (and joined by the Tuscarora in 1722).
(Aside: In Ontario, where I'm from, we tend to use the word "nation" rather than "tribe" to refer to the members of the confederacy, but it's possible this is a US/Canada difference, and the part that bothered me was the inaccuracy of the singular more than the specific word choice.)
Thanks for putting together the summary, I enjoyed reading it!
I really liked this post, and resonate strongly with the sentiment of "Nothing can take donating away from me, not even a bad day".
Although I do direct work on biosecurity, my donations (~15% gross income) go almost entirely to global health and wellbeing, and some of this is because I want to be reassured that I had a positive impact, even if all my various speculative research ideas (and occasional unproductive depressive spirals) amount to nothing.
I would be curious how you feel that intersects with the wording of the GWWC pledge, which includes: "I shall give __ to whichever organisations can most effectively use it to improve the lives of others".
As the sort of pedant who loves a solemn vow, I wonder if my global health and wellbeing donations are technically fulfilling this pledge, based on my judgements of how to improve the lives of others. That said, this only bothers me a little because, you know, this mess of incoherent commitments is out here giving what she can, and I recognize that might not meet a theoretical threshold of "most effective".
Relatedly, an area where I think arXiv could have a huge impact (in both biosecurity and AI) would be setting standards for easy-to-implement managed access to algorithms and datasets.
This is something called for in Biosecurity in an Age of Open Science: "Given the misuse potential of research objects like code, datasets, and protocols, approaches for risk mitigation are needed. Across digital research objects, there appears to be a trend towards increased modularisation, i.e., sharing information in dedicated, purpose built repositories, in contrast to supplementary materials. This modularisation may allow differential access to research products according to the risk that they represent. Curated repositories with greater access control could be used that allow reuse and verification when full public disclosure of a research object is inadvisable. Such repositories are already critical for life sciences that deal with personally identifiable information."
This sort of idea also appears in New ideas for mitigating biotechnology misuse under responsible access to genetic sequences and in Dual use of artificial-intelligence-powered drug discovery as a proposal for managing risks from algorithmically designed toxins.
- Sep 29, 2022, 7:16 PM; 12 points: comment on Biosecurity Dual Use Screening - Project Proposal (seeking vetting & project lead)
- Jul 19, 2022, 2:15 PM; 1 point: comment on arxiv.org - I might work there soon
The paper Existential Risk and Cost-Effective Biosecurity makes a distinction between Global Catastrophic Risk and Existential Risk in the context of biological threats:
Quoting the caption from the paper: "A spectrum of differing impacts and likelihoods from biothreats. Below each category of risk is the number of human fatalities. We loosely define global catastrophic risk as being 100 million fatalities, and existential risk as being the total extinction of humanity. Alternative definitions can be found in previous reports, as well as within this journal issue."
One thing I find hopeful, under the "Consensus-finding on risks and benefits of research" idea, is that the report Emerging Technologies and Dual-Use Concerns (WHO, 2021) includes two relevant governance priorities:
Safety by design in dual-use research projects: "A comprehensive approach to identifying the potential benefits and risks of research may improve the design and flag potential pitfalls early in the research."
Continued lack of a global framework for DURC: "Previous WHO consultations have highlighted the lack of a global framework as a critical gap, and regulations, norms and laws to address DURC remain fragmented among stakeholders and countries."
This was based on an expert elicitation study using the IDEA (Investigate, Discuss, Estimate, Aggregate) framework… I find it hopeful that this process identified these governance issues as priorities!
That said, I find it less hopeful that when "asked to allocate each issue a score from 1 to 100 reflecting its impact and plausibility" the scores for "The Lack of a Global DURC Framework" appear to range from 1 to 99 (Figure 2 from the CSER/WHO report on Emerging Technologies and Dual-Use Concerns).
I had recent cause to return to this post and will note that I am currently working on a short paper about this.
List of Lists of Concrete Biosecurity Project Ideas
I think people are also unaware of how tiny the undergraduate populations of elite US/UK universities are, especially if you (like me) did not grow up or go to school in those countries.
Quoting a 2015 article from Joseph Heath, which I found shocking at the time:
There are few better ways of illustrating the difference than to look at the top U.S. colleges and compare them to a highly-ranked Canadian university, like the University of Toronto where I work. The first thing you'll notice is that American schools are miniscule. The top 10 U.S. universities combined (Harvard, Princeton, Yale, etc.) have room for fewer than 60,000 undergraduates total. The University of Toronto, by contrast, alone has more capacity, with over 68,000 undergraduate students.
In other words, Canadian universities are in the business of mass education. We take entire generations of Canadians, tens of thousands of them recent immigrants, and give them access to the middle classes. Fancy American schools are in the business of offering boutique education to a very tiny, coddled minority, giving them access to the upper classes. That's a really fundamental difference.
Oxford (12,510 undergraduates) and Cambridge (12,720 undergraduates) are less tiny, but still comparatively small, especially since the UK population is about 1.75x Canada's.
In terms of needing such a system to be lightweight and specific: this also implies needing what is sometimes called "adaptive governance" (i.e. you have to be able to rapidly change your rules when new issues emerge).
For example, there were ambiguities about whether SARS-CoV-2 fell under Australia Group export controls on "SARS-like-coronaviruses" (related journal article)… a more functional system would include triggers for removing export controls (e.g. at a threshold of global transmission, public health needs will likely outweigh biosecurity concerns about pathogen access).
Freakonomics is also currently in Global Poverty!
Distribution of cost-effectiveness feels like one of the most important concepts from the EA community. The idea that, for a given goal, some ways of achieving that goal will be massively more cost-effective than others underlies a lot of cause comparisons, and the value of doing such comparisons at all.
I want to especially +1 item (3) here: the best actions for a skill-focused group will be very different depending on how skilled its group members are. This is based on my own experience organising a biosecurity-focused group (which fizzled out because the core members skilled up and ended up focused on direct work… not a bad outcome).
Some examples of the purposes of skill-focused groups, at different skill levels:
Newcomer = learn together
Member goals: Figure out if you are interested in an area, or what you are interested in within it.
Core activities: Getting familiar with foundational papers and ideas in the field.
Possible structures: reading groups, giving talks summarizing current work, watching lectures together, collectively brainstorming questions you have, shared research on basic questions.
Advanced Beginner = sharpen ideas
Member goals: Figure out if your ideas and projects in an area are good, be ready to pivot as you learn more.
Core activities: Get feedback on your ideas, find useful resources or potential collaborators.
Possible structures: lightning talks, one person presents and receives feedback on their project, fireside chats or Q&As with experts.
Expert = keep up with the field
Member goals: Make progress on your projects while staying aware of relevant new developments.
Core activities: Find potential synergies with your work, get feedback and critique, find collaborators.
Possible structures: seminar series focused on project updates, research reading groups where summary talks are given by more junior group members.
a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities
What is the definition you'd prefer people to stick to? Something like "being pushed into actions that have a very low probability of producing value, because the reward would be extremely high in the unlikely event they did work out"?
The Drowning Child argument doesn't seem like an example of Pascal's Mugging, but Wikipedia gives the example of:
"give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3↑↑↑↑3"
and I think recent posts like The AI Messiah are gesturing at something like that (see, even, this video from the comments on that post: Is AI Safety a Pascal's Mugging?).
I haven't looked into this in detail (honest epistemic status: saw a screenshot on Twitter) but what do you think of the recent paper Association of Influenza Vaccination With Cardiovascular Risk?
Quoting from it, re: tractable interventions:
The effect sizes reported here for major adverse cardiovascular events and cardiovascular mortality (in patients with and without recent ACS) are comparable with – if not greater than – those seen with guideline-recommended mainstays of cardiovascular therapy, such as aspirin, angiotensin-converting enzyme inhibitors, β-blockers, statins, and dual antiplatelet therapy.
Minor elaboration on your last point: a piece of advice I got from someone who did psychological research on how to solicit criticism was to try to brainstorm what someone's most likely criticism of you would be, and then offer that up when requesting criticism, as this is a credible indication that you're open to it. Examples:
"Hey, do you have any critical feedback on the last discussion I ran? I talked a lot about AI stuff, but I know that can be kind of alienating for people who have more interest in political action than technology development… Does that seem right? Is there other stuff I'm missing?"
"Hey, I'm looking for criticism on my leadership of this group. One thing I was worried about is that I make time for 1:1s with new members, but not so much with people that have been in the group for more than one year..."
"Did you think there was anything off about our booth last week? I was noticing we were the only group handing out free books, maybe that looked weird. Did you notice anything else?"
Some recent-ish resources that potential applicants might want to check out:
David Manheim and Gregory Lewis, High-risk human-caused pathogen exposure events from 1975-2016, data note published in August 2021.
To better understand Global Catastrophic Biological Risks arising from human activities rather than natural sources, this paper reports on a dataset of 71 incidents involving either accidental or purposeful exposure to, or infection by, a highly infectious pathogenic agent.
Filippa Lentzos and Gregory D. Koblentz, Mapping Maximum Biological Containment Labs Globally, policy brief published in May 2021 as part of the Global Biolabs project.
This study provides an authoritative resource that: 1) maps BSL4 labs that are planned, under construction, or in operation around the world, and 2) identifies indicators of good biosafety and biosecurity practices in the countries where the labs are located.
2021 Global Health Security Index, https://www.ghsindex.org/.
If you click through to the PDFs under each individual country profile, they have detailed information on the country's biosafety and biosecurity laws! (Example: the exact laws aren't clear from https://www.ghsindex.org/country/ukraine/ but if you click through to the "Country Score Justification Summary" PDF (https://www.ghsindex.org/wp-content/uploads/2021/12/Ukraine.pdf) it has like 100 pages of policy info.)
One now-inactive project in this space that I would highlight (since I would very much like something similar to exist again) is The Sunshine Project. Quoting its (sadly very short) Wikipedia page:
The Sunshine Project worked by exposing research on biological and chemical weapons. Typically, it accessed documents under the Freedom of Information Act and other open records laws, publishing reports and encouraging action to reduce the risk of biological warfare. It tracked the construction of high containment laboratory facilities and the dual-use activities of the U.S. biodefense program.
More on Edward Hammond's work and methods shows up in this press article, The Worrying Murkiness of Institutional Biosafety Committees:
In 2004, an activist named Edward Hammond fired up his fax machine and sent out letters to 390 institutional biosafety committees across the country. His request was simple: Show me your minutes.
...
The committees "are the cornerstone of institutional oversight of recombinant DNA research," according to the NIH, and at many institutions, their purview includes high-security labs and research on deadly pathogens.
...
When Hammond began requesting minutes in 2004, he said, he intended to dig up information about bioweapons, not to expose cracks in biosafety oversight. But he soon found that many institutions were unwilling to hand over minutes, or were struggling to provide any record of their IBCs at all. For example, he recalled, Utah State was a hub of research into biological weapons agents. "And their biosafety committee had not met in like 10 years, or maybe ever," Hammond said. "They didn't have any records of it ever meeting."
I logically acknowledge that: "In some cases, an extravagant lifestyle can even produce a lot of good, depending on the circumstances… It's not my preferred moral aesthetic, but the world's problems don't care about my aesthetics."
I know that, but… I care about my aesthetics.
For nearly everyone, I think there exists a level of extravagance that disgusts their moral aesthetics. I'm sure I sit above that level for some, with my international flights and two $80 keyboards. My personal aesthetic disgust triggers somewhere around "how dare you spend $1000 on a watch when people die of dehydration". Giving a blog $100,000 isn't quite disgusting, yet, ew?
The post I've read that had the least missing mood around speculative philanthropy was probably the So You Want To Run A Microgrants Program retrospective on Astral Codex Ten, which included the following:
If your thesis is "Instead of saving 300 lives, which I could totally do right now, I'm gonna do this other thing, because if I do a good job it'll save even more than 300 lives", then man, you had really better do a good job with the other thing.
I like the scenario this post gives for risks of omission: a giant Don't Look Up asteroid hurtling towards the earth. I wouldn't be mad if people misspent some money trying to stop it, because the problem was so urgent. Problems are urgent!
...yet, ew? So many other things look kind of extravagant, and they're competing against lives. I feel unsure about whether to treat my aesthetically-driven moral impulses as useful information about my motivations vs. obviously-biased intuitions to correct against.
(For example, I started looking into donating a kidney a few years ago and was like… man, I could easily save an equal number of years of life without accruing 70+ micromorts, but that's not nearly as rad? Still on the fence about this one.)
[crosspost from my twitter]
You might be interested to know that iGEM (disclosure: my employer) just published a blog post about infohazards. We currently offer biorisk workshops for teams; this year we plan to offer a general workshop on risk awareness, a workshop specifically on dual-use, and potentially some others. We don't have anything on general EA / rationality, though we do share biosecurity job and training opportunities with our alumni network.
This feels very related to the recent post Most Ivy-smart students aren't at Ivy-tier schools, which notes near the beginning: