Let’s make nice things with biology. Working on biosecurity at iGEM. Also into lab automation, event production, donating to global health. From Toronto, lived in Paris, currently in the SF Bay. Website: tessa.fyi
Tessa
Carl Shulman – How are brain mass (and neurons) distributed among humans and the major farmed land animals?
Brian Tomasik – Differential Intellectual Progress as a Positive-Sum Project
Brian Tomasik – Reasons to Be Nice to Other Value Systems
Carl Shulman – What portion of a boost to global GDP goes to the poor?
Paul Christiano – Machine intelligence and capital accumulation
Carl Shulman – How migration liberalization might eliminate most absolute poverty
Scott Alexander – Nobody Is Perfect, Everything Is Commensurable
Minding Our Way – Conviction without self-deception
Minding Our Way – Deliberate Once
Minding Our Way – Dive In
Scott Alexander – Axiology, Morality, Law
Will splashy philanthropy cause the biosecurity field to focus on the wrong risks?
Yes, I have updated towards the view that a single funder can strongly influence the direction and focus of a research field.
I notice I feel reluctant to give any detailed description of what I learned in those conversations in this entirely public forum; I’d like people to feel as if they can share their opinions with me without those later being broadcast.
My broad, stitched-together impression (which could be as much my interpretation as the opinion of those I spoke to) is that people are excited about the emergence of a major new funder, but leery of the sudden change in what research is most easily able to get funded. In addition to bringing new people into the field, Open Phil's grantmaking has redirected some established researchers to focus on GCBRs, and I think there is a view that GCBRs are a valid concern, but not so singularly important that they should overwhelm other research agendas.
Evidence Action – We’re Shutting Down No Lean Season, Our Seasonal Migration Program: Here’s Why
[Question] What are your top papers of the 2010s?
It seems they are worried they might learn more and decide they were wrong and now want something different… If you truly, deeply care about altruism, you’ll keep picking it in every moment, up until the world changes enough that you don’t.
I don’t object to learning more and realizing that I value different things, but there are a lot of other reasons I might end up with different priorities or values. Some of those are not exactly epistemically virtuous.
As a concrete example, I worry that living in the SF bay area is making me care less about extreme wealth disparities. I witness them so regularly that it’s hard for me to feel the same flare of frustration that I once did. This change has felt like a gradual hedonic adaptation, rather than a thoughtful shifting of my beliefs; the phrase “value drift” fits that experience well.
One solution here is, of course, not to use my emotional responses as a guide for my values (cf. Against Moral Intuitions) but emotions are a very useful decision-making shortcut and I’d prefer not to take on the cognitive overhead of suppressing them.
How my values are expressed seems very important, though. If I feel as if I value the welfare of distant people, but I stop taking actions in line with that (e.g. making donations to global poverty charities), do I still value it to the same extent?
That said, my example wasn’t about external behaviour changes, so you probably weren’t responding with that in mind.
I’ve inarguably experienced drift in the legibility of my values to myself, since I no longer have the same emotional signal for them. I find the term “value drift” a useful shorthand for that, but it sounds like you find it makes things unclear?
Zachary Jacobi and I did some research for a post that we were going to call “Second-Order Effects Make Climate Change an Existential Threat” back in April 2019. At this point, it’s unlikely that our notes will be converted into a post, so I’m going to link a document of our rough notes.
The tl;dr of the doc:
Epistemic status: conjecture stated strongly to open debate.
It seems like there is a robust link between heat and crime (an increase of at least 1% per °C). We should be concerned that increased temperatures due to climate change will lead to increases in conflict that represent an existential threat.
We assumed that:
Climate change is real and happening (Claim 0).
Conflict between humans is a major source of existential risk (Claim 1).
Tessa researched whether increased atmospheric CO2 concentrations would make people worse at thinking (Claim 2).
She concluded that there is only mixed evidence that CO2 concentrations affect cognition, and only at very high concentrations (i.e. levels typically found indoors).
If you are concerned about the CO2 → poor cognition → impulsivity/conflict link, worry about funding HVAC systems, not climate change.
Zach researched whether heat makes people more violent (Claim 3).
They concluded that “This seems to be solidly borne out by a variety of research and relatively uncontroversial, although there is quibbling about which confounders (alcohol, nicer weather) play a role. On the whole, we’re looking at at least 1%/°C increase in crime. The exact mechanism remains unknown and everything I’ve read seems to have at least one counter-argument against it.”
The quality of the studies supporting this claim surprised both of us.
We did not get around to researching the intersection of food scarcity, climate change, and conflict.
This has been discussed in another comment thread on this post.
The rough notes represent maybe 4 person-hours of research and discussion; it’s a shallow investigation.
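To give a sense of what the ≥1%/°C figure implies at scale, here is a toy back-of-the-envelope sketch. The linear scaling and the example warming scenarios are my illustrative assumptions, not claims from our notes; the real relationship may well be non-linear.

```python
# Toy projection: scale the notes' lower-bound estimate (>=1% more crime
# per degree C) linearly across illustrative warming scenarios.
# The linearity assumption and the scenario values are hypothetical.

def projected_crime_increase(warming_c: float, pct_per_degree: float = 1.0) -> float:
    """Percent increase in crime for a given warming (in degrees C),
    assuming a linear pct_per_degree relationship."""
    return warming_c * pct_per_degree

if __name__ == "__main__":
    for scenario in (1.5, 2.0, 4.0):  # illustrative warming scenarios, in degrees C
        print(f"{scenario}°C warming -> ~{projected_crime_increase(scenario):.1f}% more crime (lower bound)")
```

Under these assumptions, even a 2°C scenario implies only a ~2% rise in crime from the heat channel alone, which is why the existential-threat framing leans on second-order effects (conflict escalation, food scarcity) rather than this direct effect.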
That’s great!
I’ve been working on a biosecurity event (Catalyst) that’s happening later this month in SF. It will draw a larger and less purely EA audience (and thus I expect it to have less of a working-group atmosphere), but I’d be happy to connect afterwards and share any takeaways on biorisk event organization.
How much collaboration exists between research analysts (or operations associates, for that matter)?
I decided against working in academic research because I do much better in a team environment (short feedback loops, bouncing ideas off peers, a sense that my work contributes to shared purpose and projects) than I do working independently. I prefer the industry side of basically all of Philip Guo’s industry vs. academia comparisons. Would it still make sense for me to apply for an Open Phil job? I think I have relevant skills, but I’m worried that I wouldn’t be effective in a research environment, even if it is non-academic.