Research Associate at SecureBio, Research Affiliate at Kevin Esvelt’s MIT research group Sculpting Evolution, and physician. Thinking about ways to safeguard the world from biological risks.
slg
Idea: Red-teaming fellowships
Practical advice for how to run EA organisations is really valuable; thanks for writing this up.
EA Analysis of the German Coalition Agreement 2021–2025
Hey, I just wanted to leave a note of thanks for this excellent write-up!
Some other EAs and I are planning an event with a similar format—your advice is super helpful for structuring our planning and avoiding obvious mistakes. In general, these kinds of project management retrospectives provide a lot of value (e.g., EAF’s hiring retrospective).
This is cool, I had no idea you were also working on this.
This could be easier, yes. I know of one person who models the defensive potential of different metagenomic sequencing approaches, but I think there is space for at least 3-5 additional people doing this.
I think he was explicitly addressing your question of whether sexually transmitted diseases are capable of triggering pandemics, not whether they can end civilization.
Discussing the latter in detail would quickly get into infohazards—but I think we should spend some of our efforts (~10%) on defending against non-respiratory viruses. That said, I haven’t thought about this in detail.
I do mean EAs with a longtermist focus. While writing about highly-engaged EAs, I had Benjamin Todd’s EAG talk in mind, in which he pointed out that only around 4% of highly-engaged EAs are working in bio.
And thanks for pointing out I should be more precise. To qualify my statement, I’m 75% confident that this should happen.
Despite how promising and scalable we think some biosecurity interventions are, we don’t necessarily think that biosecurity should grow to be a substantially larger fraction of longtermist effort than it is currently.
Agreed that it shouldn’t grow substantially, but ~doubling the share of highly-engaged EAs working on biosecurity feels reasonable to me.
I have only been involved in biosecurity for 1.5 years, but the focus on purely defensive projects (sterilization, refuges, some sequencing tech) feels relatively recent. It’s a lot less risky to openly talk about those than about technologies like antivirals or vaccines.
I’m happy to see this shift, as concrete lists like this will likely motivate more people to enter the space.
@CarlaZoeC or Luke Kemp, could you create another forum post dedicated solely to your article? This might lead to more focused discussions, separating the debate over community norms from the discussion of the arguments within your piece.
I also wanted to express that I’m sorry this experience has been so stressful. It’s crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful for launching constructive discussions.
I particularly agree with the last point on focussing on purely defensive (not net-defensive) pathogen-agnostic technologies, such as metagenomic sequencing and resilience measures like PPE, air filters and shelters.
If others in the longtermist biosecurity community share this model of biodefense, I think it’d be important to point toward these countermeasures in introductory materials (80k website, reading lists, future podcast episodes).
I do wonder what the downside is here. It’s a fleeting, low-fidelity impression of EA that will probably not stick in most minds. However, if 10-20 people donate money after hearing about it through Patrick, it might already be net positive.
Do you specifically object to the term megaproject, or rather to the idea of launching larger organizations and projects that could potentially absorb a lot of money?
If it’s the latter, the case for megaprojects is that they are bigger bets with which funders could have an impact using larger sums of money, i.e., ~1–2 orders of magnitude bigger than current large longtermist grants. It is generally understood that EA has a funding overhang, which is even more true if you buy into longtermism, given that there are few obvious investment opportunities in longtermism.
I agree that many large-scale projects have cost and time overruns (I enjoyed this EconTalk episode with Bent Flyvbjerg on the reasons for this). But if we believe that a non-negligible number of megaprojects do work out, it seems to be an area we should explore.
Maybe it’d be a good idea to collect a list of past megaprojects that worked out well, without massive cost overruns. Reflecting on this briefly, I think of the Manhattan Project, prestigious universities (Oxbridge, LMU, Harvard), and public transport projects like the TGV.
EA megaprojects continued
Hey Ludwig, happy to collaborate on this. A bunch of other EAs and I analyzed the initial party programs from an EA perspective; this should be easy to adapt to the final agreement and turn into a forum post.
Caveat: I work in Biosecurity.
I agree with the last point. Based on Ben Todd’s presentation at EAG,
- 18% of engaged EAs work on AI alignment, while
- 4% work on biosecurity.

Based on Toby Ord’s estimates in The Precipice, the risk of extinction in the next 100 years from
- unaligned artificial intelligence is ~1 in 10, while
- the risk from engineered pandemics is ~1 in 30.
So, the stock of people working on AI alignment is 4.5x that in biosecurity, while AI is only 3x as important.
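Spelling out the back-of-the-envelope comparison, using only the figures above:

$$\frac{18\%}{4\%} = 4.5 \qquad \text{vs.} \qquad \frac{1/10}{1/30} = 3$$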
There is a lot of nuance missing here, but I’m moderately confident that this imbalance warrants more people moving into biosecurity, especially now that we’re in a moment of high tractability concerning pandemic preparedness.
Is there a historical precedent for social movements buying media? If so, it’d be interesting to know how that influenced the outlet’s public perception/readership.
As of now, it seems like movements “merely” influence media, such as the NYTimes turning more leftward in the last few years or Vox employing more EA-oriented journalists.
Spencer Greenberg also comes to mind; he once noted that his agreeableness is in the 77th percentile. I’d consider him a generator.
As far as I understand, sessions will be fully subsidised by TfG. If you can’t afford them, you can choose to pay $0—unsure if this is standard among EA coaches.
I also think centralisation of psychological services might be valuable, as it makes it easier to match coaches and coachees well and to assess coaching performance.