Thanks for writing out a reaction very similar to my own. As I wrote in a comment on a different topic, “it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time.”
I agree that “all people count equally” is an imprecise way to express that value (and I would probably choose to frame it through the lens of “value” rather than “belief”), but I read this as an imprecise expression of a common value in the movement rather than a deep philosophical commitment to valuing all minds exactly the same.
I really liked the encouraging tone of this― “from one little fish in the sEA to another” was so sweet― and liked the suggestion to instigate small / temporary / obvious projects. Reminds me a bit of the advice in Dive In, which I totally failed to integrate when I first read it but now feels very spot on; I spent ages agonising over whether my project ideas were Effective Enough and lost months (or years) that could have been spent building imperfect things and nurturing competence and understanding.
I logically acknowledge that: “In some cases, an extravagant lifestyle can even produce a lot of good, depending on the circumstances… It’s not my preferred moral aesthetic, but the world’s problems don’t care about my aesthetics.”
I know that, but… I care about my aesthetics.
For nearly everyone, I think there exists a level of extravagance that disgusts their moral aesthetics. I’m sure I sit above that level for some, with my international flights and two $80 keyboards. My personal aesthetic disgust triggers somewhere around “how dare you spend $1000 on a watch when people die of dehydration”. Giving a blog $100,000 isn’t quite disgusting, yet, ew?
The post I’ve read that had the least missing mood around speculative philanthropy was probably the So You Want To Run A Microgrants Program retrospective on Astral Codex Ten, which included the following:
If your thesis is “Instead of saving 300 lives, which I could totally do right now, I’m gonna do this other thing, because if I do a good job it’ll save even more than 300 lives”, then man, you had really better do a good job with the other thing.
I like the scenario this post gives for risks of omission: a giant Don’t Look Up asteroid hurtling towards the earth. I wouldn’t be mad if people misspent some money trying to stop it, because the problem was so urgent. Problems are urgent!
...yet, ew? So many other things look kind of extravagant, and they’re competing against lives. I feel unsure about whether to treat my aesthetically-driven moral impulses as useful information about my motivations vs. obviously-biased intuitions to correct against.
(For example, I started looking into donating a kidney a few years ago and was like… man, I could easily save an equal number of years of life without accruing 70+ micromorts, but that’s not nearly as rad? Still on the fence about this one.)
[crosspost from my twitter]
Zachary Jacobi and I did some research for a post that we were going to call “Second-Order Effects Make Climate Change an Existential Threat” back in April 2019. At this point, it’s unlikely that our notes will be converted into a post, so I’m going to link a document of our rough notes.
The tl;dr of the doc:
Epistemic status: conjecture stated strongly to open debate.
It seems like there is a robust link between heat and crime (at least 1%/ºC). We should be concerned that increased temperatures due to climate change will lead to increases in conflict that represent an existential threat.
We assumed that:
Climate change is real and happening (Claim 0).
Conflict between humans is a major source of existential risk (Claim 1).
Tessa researched whether increased atmospheric CO2 concentrations would make people worse at thinking (Claim 2).
She concluded that there is only mixed evidence that CO2 concentrations affect cognition, and only at very high (i.e. indoor) concentrations.
If you are concerned about the CO2 → poor cognition → impulsivity/conflict link, worry about funding HVAC systems, not climate change.
Zach researched whether heat makes people more violent (Claim 3).
They concluded that “This seems to be solidly borne out by a variety of research and relatively uncontroversial, although there is quibbling about which confounders (alcohol, nicer weather) play a role. On the whole, we’re looking at at least 1%/ºC increase in crime. The exact mechanism remains unknown and everything I’ve read seems to have at least one counter-argument against it.”
The quality of the studies supporting this claim surprised both of us.
We did not get around to researching the intersection of food scarcity, climate change, and conflict.
This has been discussed in another comment thread on this post.
The rough notes represent maybe 4 person-hours of research and discussion; it’s a shallow investigation.
Thanks for this post!
I wanted to link a few previous discussions of this topic on the EA Forum, as I think the discussion there might also be relevant to this issue:
I want to note not just the skulls of the eugenic roots of futurism, but also the “creepy skull pyramid” of longtermists suggesting actions that harm current people in order to protect hypothetical future value.
This ranges from suggestions to slow down AI progress, which seem comfortably within the Overton Window but risk slowing down economic growth and thus slowing reductions in global poverty, to the extreme actions suggested in some Bostrom pieces. Quoting the Current Affairs piece:
While some longtermists have recently suggested that there should be constraints on which actions we can take for the far future, others like Bostrom have literally argued that preemptive violence and even a global surveillance system should remain options for ensuring the realization of “our potential.”
Mind you, I don’t think these tensions are unique to longtermism. In biosecurity, even if you’re focused entirely on the near-term, there are a lot of trade-offs and tensions between preventing harm and securing benefits.
You might have really robust export controls that never let pathogens be shipped around the world… but that will make it harder for developing countries to build up their biomanufacturing capacity. Under the bioweapons convention you have a lot of diplomats arguing about balancing Article IV (“any national measures necessary to prohibit and prevent the development, production, stockpiling, acquisition or retention of biological weapons”) and Article X (“the fullest possible exchange of equipment, materials and information for peaceful purposes”). That said, I think longtermist commitments can increase the relative importance of preventing harm.
Minor elaboration on your last point: a piece of advice I got from someone who did psychological research on how to solicit criticism was to try to brainstorm what someone’s most likely criticism of you would be, and then offer that up when requesting criticism, as this is a credible indication that you’re open to it. Examples:
“Hey, do you have any critical feedback on the last discussion I ran? I talked a lot about AI stuff, but I know that can be kind of alienating for people who have more interest in political action than technology development… Does that seem right? Is there other stuff I’m missing?”
“Hey, I’m looking for criticism on my leadership of this group. One thing I was worried about is that I make time for 1:1s with new members, but not so much with people that have been in the group for more than one year...”
“Did you think there was anything off about our booth last week? I was noticing we were the only group handing out free books, maybe that looked weird. Did you notice anything else?”
I just want to highlight that your second point― resource allocation within the movement away from the global poor and towards longtermism― seems to be a big part of what is concretely criticized in the Current Affairs piece. Quoting:
This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As [Hilary Greaves and Will MacAskill] write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
...
Since our resources for reducing existential risk are finite, Bostrom argues that we must not “fritter [them] away” on what he describes as “feel-good projects of suboptimal efficacy.” Such projects would include, on this account, not just saving people in the Global South—those most vulnerable, especially women—from the calamities of climate change, but all other non-existential philanthropic causes, too.
This doesn’t seem to me like a purely hypothetical harm. If you value existing people much more than potential future people (not an uncommon moral intuition) then this is concretely bad, especially since the EA community is able to move around a lot of philanthropic capital.
Some recent-ish resources that potential applicants might want to check out:
David Manheim and Gregory Lewis, High-risk human-caused pathogen exposure events from 1975-2016, data note published in August 2021.
As a way to better understand the risk of Global Catastrophic Biological Risks due to human activities, rather than natural sources, this paper reports on a dataset of 71 incidents involving either accidental or purposeful exposure to, or infection by, a highly infectious pathogenic agent.
Filippa Lentzos and Gregory D. Koblentz, Mapping Maximum Biological Containment Labs Globally, policy brief published in May 2021 as part of the Global Biolabs project.
This study provides an authoritative resource that: 1) maps BSL4 labs that are planned, under construction, or in operation around the world, and 2) identifies indicators of good biosafety and biosecurity practices in the countries where the labs are located.
2021 Global Health Security Index, https://www.ghsindex.org/.
If you click through to the PDFs under each individual country profile, they have detailed information on the country’s biosafety and biosecurity laws! (Example: the exact laws aren’t clear from https://www.ghsindex.org/country/ukraine/, but if you click through to the “Country Score Justification Summary” PDF (https://www.ghsindex.org/wp-content/uploads/2021/12/Ukraine.pdf) it has like 100 pages of policy info.)
I’m also familiar with this school of thought, but I’m not sure it’s empirically validated?
In the case of Dominic Cummings, I believe you are referring to this post, which describes running successful political campaigns. Those seem like they might be outliers, in that they are extremely time-bound competitions where “do things faster than your opponent” is an obvious win? As Samuel noted, running a startup is also a case where a marginal month of delivery matters, since you likely have <1 year of runway to demonstrate to investors that you should continue being funded. The other examples you cite don’t seem to be of people optimizing for impact.
Lynette Bye put some empirical research into the post How Long Can People Reasonably Work?, but found the literature pretty disappointing. Her top-level conclusions included:

First, as you work more hours, each hour becomes less productive. If I had to guess based on the research, I’d say there are steeply diminishing marginal returns around 40-50 hours per week, and negative returns (meaning less total output for the day per additional hour) somewhere between 50 and 70 hours.
…
I’m fairly skeptical any of this research tells us how much to work (you can see more details below). I place more confidence on the anecdotal reports of productive people. It’s common for them to report three to five hours of deep work on a top priority each day, plus several hours more of lower energy or more “following curiosity”-type work (three more yet-to-be-released interviews also report in this range; one interview reports more). To be clear, I think they’re describing consistent, intense, “write a book chapter” levels of focus for those three to five hours.

The hyperproductive people I know seem to score well on (1) working on important things and (2) being very focused while working, but vary in how many hours of work they do per week (I’d estimate 30-50).
I am not a hyperproductive person, so I’m not sure you should take productivity advice from me, but “try to do at least one thing I think is actually important per week” seems to give me better results than “try to work really hard”, since the latter can lead to hyperfocused work on things that don’t really matter.

Curious if you know of any sources that were missed in Lynette’s post, or this response, though!
I think people are also unaware of how tiny the undergraduate populations of elite US/UK universities are, especially if you (like me) did not grow up or go to school in those countries.
Quoting a 2015 article from Joseph Heath, which I found shocking at the time:
There are few better ways of illustrating the difference than to look at the top U.S. colleges and compare them to a highly-ranked Canadian university, like the University of Toronto where I work. The first thing you’ll notice is that American schools are miniscule. The top 10 U.S. universities combined (Harvard, Princeton, Yale, etc.) have room for fewer than 60,000 undergraduates total. The University of Toronto, by contrast, alone has more capacity, with over 68,000 undergraduate students.
In other words, Canadian universities are in the business of mass education. We take entire generations of Canadians, tens of thousands of them recent immigrants, and give them access to the middle classes. Fancy American schools are in the business of offering boutique education to a very tiny, coddled minority, giving them access to the upper classes. That’s a really fundamental difference.
Oxford (12,510 undergraduates) and Cambridge (12,720 undergraduates) are less tiny, but still comparatively small, especially since the UK population is about 1.75x Canada’s.
This:
it is worth some eyebrow-raising if it turns out that the ingroup defense is something along the lines of “well, by bioethicists, we mean research ethicists, and by research ethicists we mean research bureaucrats, and by research bureaucrats, we mean research bureaucracy.”
has been roughly my impression of the curious EA bioethics hate, which I have tried to push back on when I’ve seen my friends expressing it. I liked the Rob Bensinger piece Thirty-three randomly selected bioethics papers that you linked.
My sense is that there are institutions making dubious, hyperconservative, and omission-biased “ethical” judgments for reasons that have more to do with liability than ethics. I think many USA-based researchers don’t really interact with “bioethics” except when asked to fill out extremely onerous forms for their institution (e.g. “what are the risks of asking people to look at differently-coloured triangles on a computer screen?”, where an insufficiently-detailed response means your project can’t go ahead).
Happy to pitch in with a few stories of rejection!
2010: I applied to MIT and Princeton for undergraduate studies and wasn’t accepted to either. Not trying harder to get into those schools was a major regret of mine for about 5 years (I barely studied for the SATs, in part because I was the only person I knew who took them… it’s uncommon for Canadians to attend university in the States). I later ended up working on teams with people who had gone to fancy US schools, such that I no longer believe this had a clearly negative impact on my trajectory.
2018: Rejected for LTFF funding for the biosecurity conference that eventually became Catalyst. We re-applied in a subsequent round and were funded.
2018: I applied to be a Research Analyst at Open Phil in their big 2018 recruitment round, and got through two rounds of work tests before ultimately being rejected after an interview. The interview really didn’t go well; I felt like a total idiot, and didn’t get the job. This was maybe the roughest rejection; I felt like I wasted basically all of my non-work time for a month on work tests, at a time when I was feeling pretty bad about how effectively I was spending my time.
2018: Rejected from the SynBioBeta conference fellowship run by Johns Hopkins, which at the time felt like it could have been an entry point into a biosecurity career transition. Definitely had some angst about whether it was even possible to make such a transition.
2019: I was rejected from a really cool engineering role at Culture Biosciences after a phone screen interview. I got so distressed after this (“I’m not technical enough for a real hardware-y engineering job any more! augh!!”) that I did some electronics projects that I really didn’t have time for, largely out of angst. They later reached out to me again when they had a role closer to my (more software-specialized) skillset, and I completed a full round of interviews and received an offer, though I ultimately decided not to leave my job in order to have more time to focus on my part-time biosecurity projects.
These were all pretty painful for me at the time… and I’m realizing I’ve since come up with stories where the rejections were okay, or part of a fine trajectory. I guess one message here is “just because you were rejected once doesn’t mean you will be if you apply again”?
Appreciate you sharing why you have a negative impression of the Effective Altruism movement and aren’t interested in joining an EA org; you might be getting downvoted under the “clear, on-topic, and kind” comment guideline, but I’m not sure. In my own experience, there sure are lots of frustrating Silicon Valley memes that are overly dismissive of social factors (or of sexism and racism) out in the world, but they aren’t dominant among people actually doing direct EA-affiliated work. As a few recent examples that demonstrate a sensitivity to the importance of social factors, I enjoyed this 80,000 Hours Podcast with Leah Garcés on strategic and empathetic communication for animal advocacy and this post on surprising things learned from a year of working on policy to eliminate lead exposure in Malawi, Botswana, Madagascar and Zimbabwe.
I want to especially +1 item (3) here― the best actions for a skill-focused group will be very different depending on how skilled its group members are. Here I’ll draw on my own experience organising a biosecurity-focused group (which fizzled out because the core members skilled up and ended up focused on direct work… not a bad outcome).
Some examples of the purposes of skill-focused groups, at different skill levels:
Newcomer = learn together
Member goals: Figure out if you are interested in an area, or what you are interested in within it.
Core Activities: Getting familiar with foundational papers and ideas in the field.
Possible structures: reading groups, giving talks summarizing current work, watching lectures together, collectively brainstorming questions you have, shared research on basic questions.
Advanced Beginner = sharpen ideas
Member goals: Figure out if your ideas and projects in an area are good, be ready to pivot as you learn more.
Core activities: Get feedback on your ideas, find useful resources or potential collaborators.
Possible structures: lightning talks, one person presents and receives feedback on their project, fireside chats or Q&As with experts.
Expert = keep up with the field
Member Goals: Make progress on your projects while staying aware of relevant new developments.
Core activities: Find potential synergies with your work, get feedback and critique, find collaborators.
Possible structures: seminar series focused on project updates, research reading groups where summary talks are given by more junior group members.
Fair enough; it’s unsurprising that a major critique of longtermism is “actually, present people matter more than future people”. To me, a more productive framing of this criticism than racist/non-racist is about longtermist indifference to redistribution. I’ve seen various recent critiques quoting the following paragraph of Nick Beckstead’s thesis:
Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
The standard neartermist response is “all other things are definitely not equal, it’s much easier to save a life in a poor country than a rich country”, while the standard longtermist response is (I think) “this is the wrong comparison to pay attention to, we should focus on protecting humanity’s potential”. Given this difference, I disagree a little with this bit of the OP:
the motivations for the part of the community which embraces longtermism still includes Peter Singer’s embrace of practical ethics and effective altruist ideas like the Giving Pledge
in that some of the foundational values embedded in Peter Singer’s writings (e.g. The Life You Can Save) strike me as redistributive commitments. This is very much reflected in the quote from Sanjay included in the OP. As far as I can tell (reading the EA Forum, The Precipice, and various Bostrom papers) longtermist philosophy typically does not emphasize redistribution or fairness as core values, but instead focuses on the overwhelming value of the far future.
(That said, I have seen some fairness-based arguments that future people are a constituency whose interests are underweighted politically, for example in response to the proposed UN Special Envoy for Future Generations.)
This post is delightful! I really appreciate how much effort you put into offering honest context (including things like funding and per-week hourly commitments). Especially in combination with your discussion of mindset, friendship and motivation, and the detailed best practices (with links out to resource documents! nice!), the work that went into this post (and Stanford EA more broadly) makes sentences like the following:
I think that our growth is replicable, and that you do not need to be a superstar public speaker or highly experienced organizer to run a successful group (I sure wasn’t).
ring true! Congrats on writing such solid motivation fuel.
Thanks for this post! I agree with your point about being careful on terms, and thought it might be useful to collect a few definitions together in a comment.
DURC (Dual-Use Research of Concern)
DURC is defined differently by different organizations. The WHO defines it as:
research that is intended to provide a clear benefit, but which could easily be misapplied to do harm
while the definition given in the 2012 US government DURC policy is:
life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security
ePPP (enhanced Potential Pandemic Pathogen)
ePPP is a term (in my experience) mostly relevant to the US regulatory context, and was set out in the 2017 HHS P3CO Framework as follows:
A potential pandemic pathogen (PPP) is a pathogen that satisfies both of the following:
It is likely highly transmissible and likely capable of wide and uncontrollable spread in human populations; and
It is likely highly virulent and likely to cause significant morbidity and/or mortality in humans.
An enhanced PPP is defined as a PPP resulting from the enhancement of the transmissibility and/or virulence of a pathogen. Enhanced PPPs do not include naturally occurring pathogens that are circulating in or have been recovered from nature, regardless of their pandemic potential.
One way in which this definition has been criticized (quoting the recent NSABB report on updating the US biosecurity oversight framework) is that “research involving the enhancement of pathogens that do not meet the PPP definition (e.g., those with low or moderate virulence) but is anticipated to result in the creation of a pathogen with the characteristics described by the PPP definition could be overlooked.”
GOF (Gain-of-Function)
GOF is not a term that I know to have a clear definition. In the linked Virology under the microscope paper, examples range from making Arabidopsis (a small flowering model plant) more drought-resistant to making H5N1 (avian influenza) transmissible between mammals. I suggest avoiding this term if you can. (The paper acknowledges the term is fuzzily defined, citing The shifting sands of ‘gain-of-function’ research.)
Biosafety, biosecurity, biorisk
The definitions you gave in the footnote seem solid, and similar to the ones I’d offer, though one runs into competing definitions (e.g. the definition provided for biosafety doesn’t mention unintentional exposure). I will note that EA tends to treat “biosecurity” as an umbrella term for “reducing biological risk” in a way that doesn’t reflect its usage in the biosecurity or public health communities. Also, as far as I can tell, Australia means a completely different thing by “biosecurity” than the rest of the English-speaking world, which will sometimes lead to confusing Google results.
Thanks for taking the time to put together this list, this is great! I found that a few of these were on the forum already:
Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness
A post you can upvote about Sendwave: Why and how to start a for-profit company serving emerging markets
New UK aid strategy – prioritising research and crisis response
For the cage-free campaigns summarized in the Vox article you linked, there are ~20 posts under the corporate cage-free campaigns tag; not sure which you think is the best.
Using the tax system and stock market to donate more: a few basic strategies is a more recent post that references https://reducing-suffering.org/should-altruists-leverage-investments/ and might be a good upvote (many of the other links are referenced in various forum posts, too; this is just one highlight)
Crossposted yesterday: [Crosspost] Reducing Risks of Astronomical Suffering: A Neglected Priority
I have crossposted the following, and may crosspost more if I feel like it (and will add them to this list if I do):
Brian Tomasik – Differential Intellectual Progress as a Positive-Sum Project
Paul Christiano – Machine intelligence and capital accumulation
Carl Shulman – What portion of a boost to global GDP goes to the poor?
Carl Shulman — How migration liberalization might eliminate most absolute poverty
Also, to my pleasant shock, if you copy-paste from one website into the EA Forum WYSIWYG editor, it formats tables and images correctly? This makes cross-posting way easier than I’d realized!
I don’t plan to engage deeply with this post, but I wanted to leave a comment pushing back on the unsubtle currents of genetic determinism (“individuals from those families with sociological profiles amenable to movements like effective altruism, progressivism, or broad Western Civilisational values are being selected out of the gene pool”), homophobia (“cultures that accept gay people on average have lower birth rates and are ultimately outnumbered by neighboring homophobic cultures”, in a piece that is all about how low birth rates are a key problem of our time), and ethnonationalism (“based in developed countries that will be badly hit by the results of these skewed demographics”) running through this piece.
I believe that genetics influence individual personality, but am very skeptical of claims of strong genetic determinism, especially on a societal level. Moreover, it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time. The kind of essentialist and elitist rhetoric common among people who concern themselves with demographic collapse seems in direct opposition to that value; if you think a key priority of our time is ensuring the right people have children, especially if your definition of “the right people” focuses on elite and wealthy people in Western countries, I doubt that we have compatible notions of what it means to do the most good.
Many pieces that criticize effective altruism quote this paragraph from Nick Beckstead’s 2013 thesis:
I would like our community to be unequivocal that all other things are not equal, and would distance myself from a community/movement that embraced an idea that lives in rich countries are more important than lives in poor countries. This seems, as I said, in direct opposition to the core values that attracted me to effective altruism.