Let’s make nice things with biology. Working on biosecurity at iGEM. Also into lab automation, event production, donating to global health. From Toronto, lived in Paris, currently in the SF Bay. Website: tessa.fyi
Tessa
Aiming for the minimum of self-care is dangerous
List of Lists of Concrete Biosecurity Project Ideas
How to run a high-energy reading group
A Biosecurity and Biorisk Reading+ List
Thanks for writing out a reaction very similar to my own. As I wrote in a comment on a different topic, “it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time.”
I agree that “all people count equally” is an imprecise way to express that value (and I would probably choose to frame it in the lens of “value” rather than “belief”), but I read this as an imprecise expression of a common value in the movement rather than a deep philosophical commitment to valuing all minds exactly the same.
Retrospective on Catalyst, a 100-person biosecurity summit
I really liked the encouraging tone of this― “from one little fish in the sEA to another” was so sweet― and I like the suggestion to instigate small / temporary / obvious projects. Reminds me a bit of the advice in Dive In, which I totally failed to integrate when I first read it but now feels very spot on; I spent ages agonising over whether my project ideas were Effective Enough and lost months, even years, that could have been spent building imperfect things and nurturing competence and understanding.
Examples of Successful Selective Disclosure in the Life Sciences
I logically acknowledge that: “In some cases, an extravagant lifestyle can even produce a lot of good, depending on the circumstances… It’s not my preferred moral aesthetic, but the world’s problems don’t care about my aesthetics.”
I know that, but… I care about my aesthetics.
For nearly everyone, I think there exists a level of extravagance that disgusts their moral aesthetics. I’m sure I sit above that level for some, with my international flights and two $80 keyboards. My personal aesthetic disgust triggers somewhere around “how dare you spend $1000 on a watch when people die of dehydration”. Giving a blog $100,000 isn’t quite disgusting, yet, ew?
The post I’ve read that had the least missing mood around speculative philanthropy was probably the So You Want To Run A Microgrants Program retrospective on Astral Codex Ten, which included the following:
If your thesis is “Instead of saving 300 lives, which I could totally do right now, I’m gonna do this other thing, because if I do a good job it’ll save even more than 300 lives”, then man, you had really better do a good job with the other thing.
I like the scenario this post gives for risks of omission: a giant Don’t Look Up asteroid hurtling towards the earth. I wouldn’t be mad if people misspent some money, trying to stop it, because the problem was so urgent. Problems are urgent!
...yet, ew? So many other things look kind of extravagant, and they’re competing against lives. I feel unsure about whether to treat my aesthetically-driven moral impulses as useful information about my motivations vs. obviously-biased intuitions to correct against.
(For example, I started looking into donating a kidney a few years ago and was like… man, I could easily save an equal number of years of life without accruing 70+ micromorts, but that’s not nearly as rad? Still on the fence about this one.)
[crosspost from my twitter]
Will splashy philanthropy cause the biosecurity field to focus on the wrong risks?
Scott Alexander – Nobody Is Perfect, Everything Is Commensurable
Carl Shulman — How are brain mass (and neurons) distributed among humans and the major farmed land animals?
Zachary Jacobi and I did some research for a post that we were going to call “Second-Order Effects Make Climate Change an Existential Threat” back in April 2019. At this point, it’s unlikely that our notes will be converted into a post, so I’m going to link a document of our rough notes.
The tl;dr of the doc:
Epistemic status: conjecture stated strongly to open debate.
It seems like there is a robust link between heat and crime (at least 1%/ºC). We should be concerned that increased temperatures due to climate change will lead to increases in conflict that represent an existential threat.
We assumed that:
Climate change is real and happening (Claim 0).
Conflict between humans is a major source of existential risk (Claim 1).
Tessa researched whether increased atmospheric CO2 concentrations would make people worse at thinking (Claim 2).
She concluded that there is only mixed evidence that CO2 concentrations affect cognition, and only at very high (i.e. indoor) concentrations.
If you are concerned about the CO2 → poor cognition → impulsivity/conflict link, worry about funding HVAC systems, not climate change.
Zach researched whether heat makes people more violent (Claim 3).
They concluded that “This seems to be solidly borne out by a variety of research and relatively uncontroversial, although there is quibbling about which confounders (alcohol, nicer weather) play a role. On the whole, we’re looking at at least 1%/ºC increase in crime. The exact mechanism remains unknown and everything I’ve read seems to have at least one counter-argument against it.”
The quality of the studies supporting this claim surprised both of us.
We did not get around to researching the intersection of food scarcity, climate change, and conflict.
This has been discussed in another comment thread on this post.
The rough notes represent maybe 4 person-hours of research and discussion; it’s a shallow investigation.
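To put the ≥1%/ºC figure in perspective, here is a back-of-the-envelope sketch (my own illustration for this comment, not taken from our notes, and it assumes the per-degree effect is roughly constant and compounds with warming):

```python
# Rough sketch: implied increase in crime under different warming scenarios,
# using the ">=1% more crime per degree Celsius" figure from the notes above.
# Assumes the effect is roughly constant per degree and compounds with warming.

def implied_crime_increase(warming_c: float, pct_per_degree: float = 0.01) -> float:
    """Fractional increase in crime for a given amount of warming (in C)."""
    return (1 + pct_per_degree) ** warming_c - 1

for warming in [1.5, 2.0, 4.0]:
    print(f"{warming} C warming -> ~{implied_crime_increase(warming):.1%} more crime")
```

Even under this lower-bound figure, 4ºC of warming implies roughly a 4% increase in crime, which is why we thought the conflict channel was worth a closer look.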
Thanks for this post!
I wanted to link a few previous discussions of this topic on the EA Forum, as I think the discussion there might also be relevant to this issue:
I want to note not just the skulls of the eugenic roots of futurism, but also the “creepy skull pyramid” of longtermists suggesting actions that harm current people in order to protect hypothetical future value.
This goes anywhere from suggestions to slow down AI progress, which seems comfortably within the Overton Window but risks slowing down economic growth and thus slowing reductions in global poverty, to the extreme actions suggested in some Bostrom pieces. Quoting the Current Affairs piece:
While some longtermists have recently suggested that there should be constraints on which actions we can take for the far future, others like Bostrom have literally argued that preemptive violence and even a global surveillance system should remain options for ensuring the realization of “our potential.”
Mind you, I don’t think these tensions are unique to longtermism. In biosecurity, even if you’re focused entirely on the near-term, there are a lot of trade-offs and tensions between preventing harm and securing benefits.
You might have really robust export controls that never let pathogens be shipped around the world… but that will make it harder for developing countries to build up their biomanufacturing capacity. Under the bioweapons convention you have a lot of diplomats arguing about balancing Article IV (“any national measures necessary to prohibit and prevent the development, production, stockpiling, acquisition or retention of biological weapons”) and Article X (“the fullest possible exchange of equipment, materials and information for peaceful purposes”). That said, I think longtermist commitments can increase the relative importance of preventing harm.
Minor elaboration on your last point: a piece of advice I got from someone who did psychological research on how to solicit criticism was to brainstorm what someone’s most likely criticism of you would be, and then offer that up when requesting criticism, as this is a credible indication that you’re open to it. Examples:
“Hey, do you have any critical feedback on the last discussion I ran? I talked a lot about AI stuff, but I know that can be kind of alienating for people who have more interest in political action than technology development… Does that seem right? Is there other stuff I’m missing?”
“Hey, I’m looking for criticism on my leadership of this group. One thing I was worried about is that I make time for 1:1s with new members, but not so much with people that have been in the group for more than one year...”
“Did you think there was anything off about our booth last week? I was noticing we were the only group handing out free books, maybe that looked weird. Did you notice anything else?”
I just want to highlight that your second point― resource allocation within the movement away from the global poor and towards longtermism― seems to be a big part of what is concretely criticized in the Current Affairs piece. Quoting:
This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As [Hilary Greaves and Will MacAskill] write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
...
Since our resources for reducing existential risk are finite, Bostrom argues that we must not “fritter [them] away” on what he describes as “feel-good projects of suboptimal efficacy.” Such projects would include, on this account, not just saving people in the Global South—those most vulnerable, especially women—from the calamities of climate change, but all other non-existential philanthropic causes, too.
This doesn’t seem to me like a purely hypothetical harm. If you value existing people much more than potential future people (not an uncommon moral intuition) then this is concretely bad, especially since the EA community is able to move around a lot of philanthropic capital.
Some recent-ish resources that potential applicants might want to check out:
David Manheim and Gregory Lewis, High-risk human-caused pathogen exposure events from 1975-2016, data note published in August 2021.
As a way to better understand the risk of Global Catastrophic Biological Risks due to human activities, rather than natural sources, this paper reports on a dataset of 71 incidents involving either accidental or purposeful exposure to, or infection by, a highly infectious pathogenic agent.
Filippa Lentzos and Gregory D. Koblentz, Mapping Maximum Biological Containment Labs Globally, policy brief published in May 2021 as part of the Global Biolabs project.
This study provides an authoritative resource that: 1) maps BSL4 labs that are planned, under construction, or in operation around the world, and 2) identifies indicators of good biosafety and biosecurity practices in the countries where the labs are located.
2021 Global Health Security Index, https://www.ghsindex.org/.
If you click through to the PDFs under each individual country profile, they have detailed information on the country’s biosafety and biosecurity laws! (Example: the exact laws aren’t clear from https://www.ghsindex.org/country/ukraine/ but if you click through to the “Country Score Justification Summary” PDF (https://www.ghsindex.org/wp-content/uploads/2021/12/Ukraine.pdf), it has like 100 pages of policy info.)
I’m also familiar with this school of thought, but I’m not sure it’s empirically validated?
In the case of Dominic Cummings, I believe you are referring to this post, which describes running successful political campaigns. Those seem like they might be outliers, in that they are an extremely time-bound competition where “do things faster than your opponent” is an obvious win? As Samuel noted, running a startup is also a case where a marginal month of delivery matters, since you likely have <1 year of runway to demonstrate to investors that you should continue being funded. The other examples you cite don’t seem to be of people optimizing for impact.
Lynette Bye put some empirical research into the post How Long Can People Reasonably Work?, but found the literature pretty disappointing. Her top-level conclusions included:

First, as you work more hours, each hour becomes less productive. If I had to guess based on the research, I’d say there are steeply diminishing marginal returns around 40-50 hours per week, and negative returns (meaning less total output for the day per additional hour) somewhere between 50 and 70 hours.
…
I’m fairly skeptical any of this research tells us how much to work (you can see more details below). I place more confidence on the anecdotal reports of productive people. It’s common for them to report three to five hours of deep work on a top priority each day, plus several hours more of lower energy or more “following curiosity”-type work (three more yet-to-be-released interviews also report in this range; one interview reports more). To be clear, I think they’re describing consistent, intense, “write a book chapter” levels of focus for those three to five hours.

The hyperproductive people I know seem to score well on (1) working on important things and (2) being very focused while working, but vary in how many hours of work they do per week (I’d estimate 30-50).
I am not a hyperproductive person, so I’m not sure you should take productivity advice from me, but “try to do at least one thing I think is actually important per week” seems to give me better results than “try to work really hard”, since the latter can lead to hyperfocused work on things that don’t really matter.

Curious if you know of any sources that were missed in Lynette’s post, or this response, though!
I don’t plan to engage deeply with this post, but I wanted to leave a comment pushing back on the unsubtle currents of genetic determinism (“individuals from those families with sociological profiles amenable to movements like effective altruism, progressivism, or broad Western Civilisational values are being selected out of the gene pool”), homophobia (“cultures that accept gay people on average have lower birth rates and are ultimately outnumbered by neighboring homophobic cultures”, in a piece that is all about how low birth rates are a key problem of our time), and ethnonationalism (“based in developed countries that will be badly hit by the results of these skewed demographics”) running through this piece.
I believe that genetics influence individual personality, but am very skeptical of claims of strong genetic determinism, especially on a societal level. Moreover, it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time. The kind of essentialist and elitist rhetoric common among people who concern themselves with demographic collapse seems in direct opposition to that value; if you think a key priority of our time is ensuring the right people have children, especially if your definition of “the right people” focuses on elite and wealthy people in Western countries, I doubt that we have compatible notions of what it means to do the most good.
Many pieces that criticize effective altruism quote this paragraph from Nick Beckstead’s 2013 thesis:
I would like our community to be unequivocal that all other things are not equal, and would distance myself from a community/movement that embraced an idea that lives in rich countries are more important than lives in poor countries. This seems, as I said, in direct opposition to the core values that attracted me to effective altruism.