Research associate at SecureBio, Research Affiliate at Kevin Esvelt’s MIT research group Sculpting Evolution, physician. Thinking about ways to safeguard the world from biological risks.
slg
I just wanted to note that I appreciated this post and the subsequent discussion, as it quickly allowed me to get a better model of the value of antivirals. Publicly visible discussions around biosecurity interventions are rare, making it hard to understand other people’s models.
I appreciate that there are infohazard considerations here, but I feel it’s too hard for people to scrutinize the views of others because of this.
Appreciated the 5-minute summary; I think more reports of this length should have two summaries: one a TL;DR, the other similar to your 5-minute summary.
Let’s phrase it even more explicitly: You trust EVF to always make the right calls, even in 10 years from now.
The quote above (emphasis mine) reads like a strawman; I don’t think Michael would say that they always make the right call. My personal view is that individuals steering GWWC will mostly make the right decisions and downside risks are small enough not to warrant costly governance interventions.
This point is boring, but I don’t think Twitter gives an accurate picture of what the world thinks about EA. I still think there is a point in sometimes reacting to bad-faith arguments and continuing to i) put out good explanations of EA-ish ideas and ii) write up thoughts on what went wrong. But communicating too fast, before we have, e.g., a better understanding of the FTX situation, seems bad.
Also, as a semi-good analogy for the Wytham question, the World Economic Forum draws massive protests every year but is still widely respected among important circles.
Probably fits at most 50-100 people, though I have low certainty and this might change in the future. I think it’s designed to host smaller events than the above, e.g., cause-area-specific conferences/retreats.
On my end, the FLI link is broken: https://futureoflife.org/category/laws/open-letters-laws/
Agreed that their research is decent, but they are post-graduate institutes and have no undergraduate students.
Careers concerning Global Catastrophic Biological Risks (GCBRs) from a German perspective
Thanks, I saw a similar graph on Twitter! Wondering what kind of measurements would most clearly indicate more in-depth engagement with EA—though traffic to the Forum likely comes close to that.
Thanks, fixed!
[Question] EA Publicity Drive—What are the best signs of increased, in-depth engagement with EA?
I liked it a lot. Given that the author probably wasn’t involved beforehand, he got a detailed picture of EA’s current state.
That makes sense; thanks for expanding on your comment.
I appreciate that many EAs’ focus on high IQ and general mental ability can be hard to deal with. For instance, I found this quite aversive when I first got into EA.
But I’m unsure why your comment has 10 upvotes, given that you provide few arguments for your statements.
Please let me know if anything below is uncharitable or if I misread something!

Focusing on elite universities
[...] why EA’s obsession with elite universities is sickening.
The share of highly talented students is higher at elite universities than elsewhere. Thus, given the limited number of individuals who can do in-person outreach, it makes sense to prioritize elite unis.
From my own experience, Germany has no elite universities. This makes outreach a lot harder, as we have no location to go to where we can be sure to address many highly talented students. Instead, German EAs self-select into EA by finding information online. Thus, if Germany had an elite uni, I would put most of my outreach efforts there.

Returns to high IQ
But I think the returns to lots of high-IQ people in EA are also pretty modest [...]
If you condition on the view that EA is bottlenecked by highly engaged and capable individuals who start new projects or found organizations, selecting for IQ seems like one of the best first steps.
IQ predicts good performance across various tasks and is thus plausibly upstream of having a diversity of skills.
E.g., a 2011 study of 2329 participants in the Study of Mathematically Precocious Youth cohort shows no cut-off at which additional cognitive ability doesn’t matter anymore. Participants were identified as intellectually gifted (top 1% of mental ability) at the age of 13 years and followed up for 25+ years. Even within this top percentile stratum of ability, being in the top quartile predicts substantially better outcomes: Among the top 0.25%, ~34% of cohort participants have a doctorate, and around 12% have filed a patent 25+ years after being identified as gifted at the age of 13. This compares to 4.5% of the US population holding a doctorate degree in 2018; I couldn’t find data on the share of US Americans who have filed a patent, but I wouldn’t be surprised if it’s at least one order of magnitude lower.
More on this cohort can be found here.

Value of different perspectives/skills
[...] it’s much more important to get people with varied perspectives/skills into EA.
Looking at the value of I) varied perspectives and II) skills in turn.
Regarding I), I’d also want to select for people who reason well and scrutinize widely held effective altruist assumptions. But I wouldn’t aim to maximize the variety of perspectives in EA for the sake of having different views alone (as this doesn’t account for the merit of each view).
And again, generating perspectives with lots of merit is likely linked to high IQ.
On II), I agree that having EAs with various skills is important given that EA-oriented work is becoming increasingly diverse (e.g., doing AI Safety Research, building pandemic shelters, drafting legislation that governs x-risks).
What success looks like
I was very happy to read this, great to hear that your switch to direct work was successful!
Noting my excitement that you picked up on the idea and will actually make this happen!
The structure you lay out sounds good.
Regarding the winning team, will there be financial rewards? I’d give it >70% that someone would fund at least a ~$1,000 award for the best team.
Do you know which funder is supporting the EA Hotel-type thing?
Maybe you’re already considering this, but here goes anyway:
I’d advise against the name ‘longtermist hub’. I wouldn’t want longtermism to also become an identity, just as EA is one.
The name also carries reputational risks, which is why new EA-oriented orgs do not have EA in their name.
Appreciated this post! Have you considered crossposting it to LessWrong? Seems like an important audience for this.