Let's make nice things with biology. Working on nucleic acid synthesis screening at IBBIS. Also into dual-use risk assessment, synthetic biology, lab automation, event production, donating to global health. From Toronto, lived in Paris and Santiago, currently in the SF Bay. Website: tessa.fyi
Tessa A
At 12:27pm ET, I see the "You unlocked an additional US$10 match by sharing #FallGivingChallenge" message after I click to copy the link, but don't see anything about an additional $100 match.
The Johns Hopkins Master of Public Health is an 11-month program, and there is a Health Security Scholarship for this program offered by the Johns Hopkins Center for Health Security. From the scholarship page:
Currently, health security and global catastrophic biological risks (GBCRs) are not routinely integrated into the academic, research, and policy communities. The educational opportunities provided by the Johns Hopkins Center for Health Security enables individuals to enter the health security field and work to improve our preparedness for and in response to these potentially catastrophic public health emergencies and enhance community resilience.
Another fantastic post, Kuhan! I like this agentic vision of networking, where the goal is "improve EA networks by building useful connections between people, groups, and projects" rather than "improve my own network by connecting to EAs". It's a more exciting (and ambitious!) goal to have :)
One tiny way to implement this is by asking, towards the end of an EA networking conversation:
Is there anything else you can think of that I might do for you?
In my experience, if you or your interlocutor come up with a real request here, you get a nice burst of camaraderie, an emergent sense that you're on the same team.
It's surprisingly difficult for me to tell if the GMU Biodefense Master's is meant to be finished in one year: it's 36 credits, which is equivalent to 3 semesters of full-time undergraduate courseload at GMU, but I don't know if Master's students typically take a lighter courseload.
There is a case to be made for Paul Ehrlich, the late 19th-century chemist recently highlighted by Cold Takes, who developed the staining techniques that allowed us to identify blood types, which enabled blood transfusions, which this World Economic Forum article estimates have saved 1 billion lives.
He shouldn't get credit for all the lives saved by blood transfusions, but it seems like he discovered a lot of important medical technology beyond the staining techniques (e.g. he started developing drugs that target a particular pathogen without affecting normal host cells, a novel enough concept that it got the fancy name "Zauberkugel", which translates to "magic bullet"). Check out Paul Ehrlich (1854-1915) and His Contributions to the Foundation and Birth of Translational Medicine if you want more details.
One source of answers might be the Future of Life Award, which is "given to individuals who, without having received much recognition at the time, have helped make today dramatically better than it may otherwise have been". It has so far been awarded to:
2021: Joseph Farman, Susan Solomon and Stephen Andersen for helping save our ozone layer
2020: Viktor Zhdanov (your answer) and William Foege for critical contributions to the eradication of a virus that killed 30% of those it infected: Smallpox
2019: Matthew Meselson for being a driving force behind the 1972 Biological Weapons Convention, which averted an arms race in bioweapons
2018: Stanislav Petrov for helping to prevent an all-out US-Russian nuclear war with his decision to ignore algorithms and instead follow his gut instinct
2017: Vasili Arkhipov for single-handedly preventing nuclear war during the height of the Cuban Missile Crisis through vetoing a submarine launch
+1 to this advice. I agree with many other commenters that I learned more in activities like extracurriculars, co-op semesters, and volunteer gigs than I did from the extra classes I took. The post that most improved my attitude towards my classes in school was Half-assing it with everything you've got.
Despite that, I don't regret taking ~1 extra class per semester in my undergrad. Reasons why:
My degree (Canadian engineering program) was very inflexible (I had only 2 elective classes before my 3rd year)
I wanted to take advanced courses outside that degree (some interesting 3rd-year biology courses which have been relevant to me later)
My extra courses were always easy, and "coursework has increasing marginal costs" didn't feel that true given my high default courseload; I was already constantly busy with schoolwork, and adding one (relatively easy) course on top of that didn't change my quality of life much (i.e. there was already minimal slack in my schedule)
I'm not great at general academic self-study and happen to like learning from in-person lectures (though volunteer/work projects are even better)
Even in this scenario, there was some high-achiever-ego-bait that I'm glad I didn't go for (e.g. trying to get a minor in biology would have involved taking some courses I didn't care about, while sacrificing others that I did). So even if you read this and think "nah, I like my heavy courseload", you may want to reflect on how well your academic plans connect with your overall goals.
+1 Will the EA Forum also adopt a new quarterly (or monthly, or annual) curation process?
I don't think the prize caused me to write substantially better posts (though I think it increased my standards a smidgeon) but it definitely caused me to read more high-quality posts. It would be nice to continue getting a selective "greatest hits" highlight.
You've identified my two main frustrations with the book: US-centrism and the attitude that there exist no substantial objections to open borders (rather than a more measured argument that the benefits outweigh the harms). There were a few panels towards the end of the book which typify this for me:
I, uh, I don't think "the only thing that stands in the way of opening the border is sheer political apathy". Québécois separatists were ransoming politicians within my parents' lifetime, and Québec nearly separated in 1995. I don't expect most Americans to pay attention to the fragility of Canadian federalism, but it's super frustrating to see someone be so confident that there is no possible argument against their position!
This book contained several interesting economic arguments (e.g. "migration good for the economy = big countries do better", as you pointed out) but enough credibility-straining overconfidence that I haven't been recommending it.
Fair enough; it's unsurprising that a major critique of longtermism is "actually, present people matter more than future people". To me, a more productive framing of this criticism than racist/non-racist is about longtermist indifference to redistribution. I've seen various recent critiques quoting the following paragraph of Nick Beckstead's thesis:
Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
The standard neartermist response is "all other things are definitely not equal, it's much easier to save a life in a poor country than a rich country", while the standard longtermist response is (I think) "this is the wrong comparison to pay attention to, we should focus on protecting humanity's potential". Given this difference, I disagree a little with this bit of the OP:
the motivations for the part of the community which embraces longtermism still includes Peter Singer's embrace of practical ethics and effective altruist ideas like the Giving Pledge
in that some of the foundational values embedded in Peter Singer's writings (e.g. The Life You Can Save) strike me as redistributive commitments. This is very much reflected in the quote from Sanjay included in the OP. As far as I can tell (reading the EA Forum, The Precipice, and various Bostrom papers) longtermist philosophy typically does not emphasize redistribution or fairness as core values, but instead focuses on the overwhelming value of the far future.
(That said, I have seen some fairness-based arguments that future people are a constituency whose interests are underweighted politically, for example in response to the proposed UN Special Envoy for Future Generations.)
I want to note not just the skulls of the eugenic roots of futurism, but also the "creepy skull pyramid" of longtermists suggesting actions that harm current people in order to protect hypothetical future value.
This ranges from suggestions to slow down AI progress, which seem comfortably within the Overton Window but risk slowing economic growth and thus slowing reductions in global poverty, to the extreme actions suggested in some Bostrom pieces. Quoting the Current Affairs piece:
While some longtermists have recently suggested that there should be constraints on which actions we can take for the far future, others like Bostrom have literally argued that preemptive violence and even a global surveillance system should remain options for ensuring the realization of "our potential."
Mind you, I don't think these tensions are unique to longtermism. In biosecurity, even if you're focused entirely on the near-term, there are a lot of trade-offs and tensions between preventing harm and securing benefits.
You might have really robust export controls that never let pathogens be shipped around the world… but that will make it harder for developing countries to build up their biomanufacturing capacity. Under the Biological Weapons Convention you have a lot of diplomats arguing about balancing Article IV ("any national measures necessary to prohibit and prevent the development, production, stockpiling, acquisition or retention of biological weapons") and Article X ("the fullest possible exchange of equipment, materials and information for peaceful purposes"). That said, I think longtermist commitments can increase the relative importance of preventing harm.
I just want to highlight that your second point (resource allocation within the movement away from the global poor and towards longtermism) seems to be a big part of what is concretely criticized in the Current Affairs piece. Quoting:
This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As [Hilary Greaves and Will MacAskill] write, "for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects. Short-run effects act as little more than tie-breakers."
...
Since our resources for reducing existential risk are finite, Bostrom argues that we must not "fritter [them] away" on what he describes as "feel-good projects of suboptimal efficacy." Such projects would include, on this account, not just saving people in the Global South—those most vulnerable, especially women—from the calamities of climate change, but all other non-existential philanthropic causes, too.
This doesn't seem to me like a purely hypothetical harm. If you value existing people much more than potential future people (not an uncommon moral intuition) then this is concretely bad, especially since the EA community is able to move around a lot of philanthropic capital.
I would prefer there to exist reasons to press the button other than destroying value.
I really liked Brigid Slipka's comment that the ritual "appears to emphasize the precise opposite values that we honor in Petrov", including an emphasis on deference to, rather than defiance of, in-group norms.
If a different officer than Petrov had been on that watch, and he had called his superiors and announced there were missiles incoming, what would his motivations have been? I doubt they would have been "burn down the USA lol", but instead trusting the system, following orders or social norms, or thinking the decision should be in the hands of someone higher up the command chain.
It feels disappointingly simplistic that the only reason to press the button is "burn down a website lol".
I would add more chaotic information. The phishing message that brought down the site last year was, far from being a design failure (as described in the postmortem), an excellent example of emphasizing something closer to what Petrov faced. The message that brought down the site was:
You are part of a smaller group of 30 users who has been selected for the second part of this experiment. In order for the website not to go down, at least 5 of these selected users must enter their codes within 30 minutes of receiving this message, and at least 20 of these users must enter their codes within 6 hours of receiving the message. To keep the site up, please enter your codes as soon as possible. You will be asked to complete a short survey afterwards.
This includes:
Misleading information related to a job you've been tasked with
Time pressure
and in order not to bring down the site, you had to pause (despite the time pressure!) and question whether the information was true and worth acting on, given the potentially grave consequences. This feels extremely in the spirit of the thing.
Well done! An initiative that seems related to me is La Iniciativa para la Seguridad Global, an organization "dedicated to the Non-Proliferation of Weapons of Mass Destruction" that was formed in 2020. Some of its biosecurity and nuclear weapons experts might be interested in joining this network.
Thanks for this post! I love a good research agenda. Some other relevant bits of work:
The 2019 Global Priorities Institute Workshop on the Economics of Catastrophe, which included Bridget Williams on "Catastrophic Risks from Biotechnology" as well as a number of interesting-sounding general topics (e.g. Ilan Noy on "What Can Mainstream Economics Ideas Add to the Conversation?")
Lennart Stern (Paris School of Economics) has a paper on Optimal Subsidies for Home Delivery in Times of COVID-19 and is apparently drafting something on "optimal pandemic insurance for global outbreak response funds with endogenous funding"
I see some overlaps with the Legal Priorities Project research agenda for synthetic biology, as it has sections on Pandemic Finance and Access and Benefits-Sharing
Congratulations! According to the Founders Pledge FAQ, anyone who holds equity in a company can participate. They offer a bunch of High-Impact Giving Support. You might be able to book a call with them and get advice about how to efficiently donate equity.
I think you have an acronym collision here between HLMI = "human-level machine intelligence" = "high-level machine intelligence". Your overall conclusion still seems right to me, but this collision made things confusing.
Details
I got confused because the evidence provided in footnote 11 didn't seem (to me) like it implied "that the researchers simply weren't thinking very hard about the questions". Why would "human-level machine intelligence" imply the ability to automate the labour of all humans?
My confusion was resolved by looking up the definition of HLMI in part 4 of Bio Anchors. There, HLMI refers to "high-level machine intelligence". If you go back to Grace et al. 2017, they defined this as:
"High-level machine intelligence" (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.
This seems stronger to me than human-level! Even "AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement" (the definition of PASTA above) could leave some labour out, but this definition does not.
I think your conclusion is still right. There shouldn't have been a discrepancy between the forecasts for HLMI and "full automation" (defined as "when for any occupation, machines could be built to carry out the task better and more cheaply than human workers"). Similarly, the expected date for the automation of AI research, a job done by human workers, should not be after the expected date for HLMI.
Still, I would change the acronym and maybe remove the part of the footnote about individual milestones; the milestones forecasting was a separate survey question from the forecasting of automation of specific human jobs, and it was confusing to skim through Grace et al. 2017 expecting those data points to have come from the same question.
Aw, it's always really nice to hear that people are enjoying the words I fling out onto the internet!
Often both the benefits and risks of a given bit of research are pretty speculative, so evaluation of specific cases depends on one's underlying beliefs about potential gains from openness and potential harms from new life sciences insights. My hope is that there are opportunities to limit the risks of disclosure while still getting the benefits of openness, which is why I want to sketch out some of the selective-disclosure landscape between "full secrecy by default" (paranoid?) and "full openness by default" (reckless?).
If you'd like to read a strong argument against openness in one particular contentious case, I recommend Gregory Koblentz's 2018 paper A Critical Analysis of the Scientific and Commercial Rationales for the De Novo Synthesis of Horsepox Virus. From the paper:
This article evaluates the scientific and commercial rationales for the synthesis of horsepox virus. I find that the claimed benefits of using horsepox virus as a smallpox vaccine rest on a weak scientific foundation and an even weaker business case that this project will lead to a licensed medical countermeasure. The combination of questionable benefits and known risks of this dual use research raises serious questions about the wisdom of undertaking research that could be used to recreate variola virus.
...
The putative benefit to synthesizing horsepox virus for use as a smallpox vaccine rests on four assumptions made by Tonix: that the modern-day smallpox vaccine based on vaccinia virus is directly descended from horsepox virus, that ancestral horsepox virus is a safer candidate for a human vaccine than derived vaccinia virus, that current smallpox vaccines are not safe enough, and that there is a significant demand for a new smallpox vaccine. All four of these scientific and commercial claims need to be true to fully realize the expected benefit of synthesizing horsepox virus. I argue that there are serious doubts that all of these assumptions are valid, raising important questions about the wisdom of synthesizing this virus given the risks posed by pioneering a technique that could be used to recreate variola virus.
Here's a link to my profile, which includes donations to about 20 EA-aligned charities: https://www.every.org/@tessa.alexanian