Research associate at SecureBio, Research Affiliate at Kevin Esvelt’s MIT research group Sculpting Evolution, physician. Thinking about ways to safeguard the world from biological threats.
slg
Hi, happy to read about where you stand and where you want to go with NEAD.
FYI, the link in this sentence seems broken: “currently offering self-hosted alternatives to Slack, Google, and Zoom, one reason for this being our concern with risks from data privacy neglect. ”
Thanks for this write-up. Concerning this point:
Quantitative investigation of tech capabilities required for broad environmental nucleic acid surveillance to be useful
This article provides a good introduction to current challenges within genomic pathogen surveillance: Ten recommendations for supporting open pathogen genomic analysis in public health
Spencer Greenberg also comes to mind; he once noted that his agreeableness is in the 77th percentile. I’d consider him a generator.
Hey Ludwig, happy to collaborate on this. A bunch of other EAs and I analyzed the initial party programs under EA considerations; this should be easily adapted to the final agreement and turned into a forum post.
Do you specifically object to the term megaproject, or rather to the idea of launching larger organizations and projects that could potentially absorb a lot of money?
If it’s the latter, the case for megaprojects is that they are bigger bets, with which funders could have an impact using larger sums of money, i.e., ~1–2 orders of magnitude larger than current large longtermist grants. It is generally understood that EA has a funding overhang, which is even more true if you buy into longtermism, given that there are few obvious investment opportunities in longtermism.
I agree that large-scale projects often have cost and time overruns (I enjoyed this EconTalk episode with Bent Flyvbjerg on the reasons for this). But, if we believe that a non-negligible number of megaprojects do work out, it seems to be an area we should explore.
Maybe it’d be a good idea to collect a list of past megaprojects that worked out well, without massive cost overruns. Reflecting on this briefly, I think of the Manhattan Project, prestigious universities (Oxbridge, LMU, Harvard), and public transport projects like the TGV.
I particularly agree with the last point on focussing on purely defensive (not net-defensive) pathogen-agnostic technologies, such as metagenomic sequencing and resilience measures like PPE, air filters and shelters.
If others share this biodefense model in the longtermist biosecurity community, I think it’d be important to point towards these countermeasures in introductory materials (80k website, reading lists, future podcast episodes).
@CarlaZoeC or Luke Kemp, could you create another forum post solely focused on your article? This might lead to more focused discussions, separating debate on community norms from discussion of the arguments within your piece.
I also wanted to express that I’m sorry this experience has been so stressful. It’s crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful to launch constructive discussions.
I have only been involved in biosecurity for 1.5 years, but the focus on purely defensive projects (sterilization, refuges, some sequencing tech) feels relatively recent. It’s a lot less risky to openly talk about those than about technologies like antivirals or vaccines.
I’m happy to see this shift, as concrete lists like this will likely motivate more people to enter the space.
Despite how promising and scalable we think some biosecurity interventions are, we don’t necessarily think that biosecurity should grow to be a substantially larger fraction of longtermist effort than it is currently.
Agreed that it shouldn’t grow substantially, but ~doubling the share of highly-engaged EAs working on biosecurity feels reasonable to me.
I do mean EAs with a longtermist focus. While writing about highly-engaged EAs, I had Benjamin Todd’s EAG talk in mind, in which he pointed out that only around 4% of highly-engaged EAs are working in bio.
And thanks for pointing out I should be more precise. To qualify my statement, I’m 75% confident that this should happen.
I think he was explicitly addressing your question of whether sexually transmitted diseases are capable of triggering pandemics, not whether they can end civilization.
Discussing the latter in detail would quickly get into infohazards—but I think we should spend some of our efforts (10%) on defending against non-respiratory viruses. But I haven’t thought about this in detail.
This could be easier, yes. I know of one person who models the defensive potential of different metagenomic sequencing approaches, but I think there is space for at least 3-5 additional people doing this.
Hey, I just wanted to leave a note of thanks for this excellent write-up!
Some other EAs and I are planning an event with a similar format—your advice is super helpful for structuring our planning and avoiding obvious mistakes. In general, these kinds of project management retrospectives provide a lot of value (e.g., EAF’s hiring retrospective).
Practical advice for how to run EA organisations is really valuable, thanks for writing this up.
As far as I understand, sessions will be fully subsidised by TfG. If you can’t afford them, you can choose to pay $0; I’m unsure if this is standard among EA coaches.
I also think centralisation of psychological services might be valuable as it makes it easier to match fitting coaches/coachees and assess coaching performance.
Maybe you’re already considering this but here it goes anyway:
I’d advise against the name ‘longtermist hub’. I wouldn’t want longtermism to also become an identity, just as EA is one.
It also has reputational risks—which is why new EA-oriented orgs do not have EA in their name.
Do you know which funder is supporting the EA Hotel type thing?
Noting my excitement that you picked up on the idea and will actually make this happen!
The structure you lay out sounds good.
Regarding the winning team, will there be financial rewards? I’d give it >70% that someone would fund at least a ~$1000 award for the best team.
I was very happy to read this, great to hear that your switch to direct work was successful!
Hi!
I think you mean to say: “every way a higher growth rate would be good is also an equally plausibly reason it would be bad”
Instead you wrote:
“Evidential symmetry here would be something like: every way a higher growth rate would be good is also an equally plausibly reason it would be good eg. increased emissions are equally likely to be good as they are to be bad.)”