Research associate at SecureBio, Research Affiliate at Kevin Esvelt’s MIT research group Sculpting Evolution, physician. Thinking about ways to safeguard the world from biological risks.
slg
Flagging that I’m only about 1⁄3 in.
Regarding this paragraph:
“An epistemically healthy community seems to be created by acquiring maximally-rational, intelligent, and knowledgeable individuals, with social considerations given second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]”
When saying that the science doesn’t bear this out, you go on to cite footnotes from your original article. If you want to make this case, it might be better to either i) point to specific ways in which the current qualities of EA lead to flawed conclusions, or ii) point to research that makes a similar claim.
Maybe you’re already considering this but here it goes anyway:
I’d advise against the name ‘longtermist hub’. I wouldn’t want longtermism to also become an identity, just as EA is one.
It also has reputational risks—which is why new EA-oriented orgs do not have EA in their name.
I have only been involved in biosecurity for 1.5 years, but the focus on purely defensive projects (sterilization, refuges, some sequencing tech) feels relatively recent. It’s a lot less risky to openly talk about those than about technologies like antivirals or vaccines.
I’m happy to see this shift, as concrete lists like this will likely motivate more people to enter the space.
This post reads like it wants to convince its readers that AGI is near/will spell doom, picking and spelling out arguments in a biased way.
Just because many people on the Forum and LW (including myself) believe that AI Safety is very important and isn’t given enough attention by important actors doesn’t mean we should lower our standards for good arguments in favor of more AI Safety.
Some parts of the post that I find lacking:
“We don’t have any obstacle left in mind that we don’t expect to get overcome in more than 6 months after efforts are invested to take it down.”
I don’t think more than 1⁄3 of ML researchers or engineers at DeepMind, OpenAI, or Anthropic would sign this statement.
“No one knows how to predict AI capabilities.”
Many people are trying though (Ajeya Cotra, EpochAI), and I think these efforts aren’t worthless. Maybe a different statement could be: “New AI capabilities appear discontinuously, and we have a hard time predicting such jumps. Given this larger uncertainty, we should worry more about unexpected and potentially dangerous capability increases”.
“RLHF and Fine-Tuning have not worked well so far.”
Setting aside whether RLHF scales (as linked, Jan Leike of OpenAI doesn’t think so) and whether RLHF leads to deception: from my cursory reading and experience, ChatGPT shows substantially better behavior than Bing, which might be due to the latter not using RLHF.
Overall I do agree with the article and think that recent developments have been worrying. Still, if the goal of the article is to get independently-thinking individuals to consider working on AI Safety, I’d prefer less extreme arguments.
This point is boring, but I don’t think Twitter gives an accurate picture of what the world thinks about EA. I still think there is a point in sometimes reacting to bad-faith arguments and continuing to i) put out good explanations of EA-ish ideas and ii) write up thoughts on what went wrong. But communicating too fast, before we have, e.g., an improved understanding of the FTX situation, seems bad.
Also, as a semi-good analogy for the Wytham question, the World Economic Forum draws massive protests every year but is still widely respected among important circles.
Let’s phrase it even more explicitly: You trust EVF to always make the right calls, even in 10 years from now.
The quote above (emphasis mine) reads like a strawman; I don’t think Michael would say that they always make the right call. My personal view is that individuals steering GWWC will mostly make the right decisions and downside risks are small enough not to warrant costly governance interventions.
I appreciate that many EAs’ focus on high IQ and general mental ability can be hard to deal with. For instance, I found this quite aversive when I first got into EA.
But I’m unsure why your comment has 10 upvotes, given that you do not give many arguments for your statements.
Please let me know if anything below is uncharitable or if I misread something!

Focusing on elite universities
[...] why EA’s obsession with elite universities is sickening.
The share of highly talented students at elite universities is higher. Thus, given the limited number of individuals who can do in-person outreach, it makes sense to prioritize elite unis.
From my own experience, Germany has no elite universities. This makes outreach a lot harder, as we have no location to go to where we can be sure to address many highly talented students. Instead, German EAs self-select into EA by finding information online. Thus, if Germany had an elite uni, I would put most of my outreach efforts there.

Returns to high IQ
But I think the returns to lots of high-IQ people in EA are also pretty modest [...]
If you condition on the view that EA is bottlenecked by highly engaged and capable individuals who start new projects or found organizations, selecting for IQ seems like one of the best first steps.
IQ predicts good performance across a wide variety of tasks and is thus plausibly upstream of having a diversity of skills.
E.g., a 2011 study of 2329 participants in the Study of Mathematically Precocious Youth cohort shows no cut-off at which additional cognitive ability doesn’t matter anymore. Participants were identified as intellectually gifted (top 1% of mental ability) at the age of 13 years and followed up for 25+ years. Even within this top percentile stratum of ability, being in the top quartile predicts substantially better outcomes: Among the top 0.25%, ~34% of cohort participants have a doctorate, and around 12% have filed a patent 25+ years after being identified as gifted at the age of 13. This compares to 4.5% of the US population holding a doctorate degree in 2018; I couldn’t find data on the share of US Americans who have filed a patent, but I wouldn’t be surprised if it’s at least one order of magnitude lower.
Value of different perspectives/skills
[...] it’s much more important to get people with varied perspectives/skills into EA.
Looking at the value of I) varied perspectives and II) varied skills in turn:
Regarding I), I’d also want to select people who reason well and scrutinize widely held effective altruist assumptions. But I wouldn’t aim to maximize the variety of perspectives in EA for the sake of having different views alone (as this doesn’t account for the merit of each view).
And again, generating perspectives with lots of merit is likely linked to high IQ.
On II), I agree that having EAs with various skills is important given that EA-oriented work is becoming increasingly diverse (e.g., doing AI Safety Research, building pandemic shelters, drafting legislation that governs x-risks).
I liked it a lot. Given that the author probably wasn’t involved beforehand, he got a detailed picture of EA’s current state.
Thanks for the write-up. Just adding a note on how this distinction has practical implications for designing the databases of hazardous sequences that gene synthesis screening systems rely on.
With gene synthesis screening, companies want to stop bad actors from getting access to the physical DNA or RNA of potential pandemic pathogens. Now, let’s say researchers find the sequence of a novel pathogen that would likely spark a pandemic if released. Most would want this sequence added to synthesis screening databases. But some also want these databases to be public. The information hazards involved in making such sequences publicly available could be large, especially if there is attached discussion of how exactly they are dangerous.
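To make the design question concrete, here is a minimal sketch of how a screening check against a hazard database might work, assuming simple exact k-mer matching; the function names, window length, and example sequences are hypothetical, and real screening systems use far more sophisticated matching (and, for the reasons above, may keep the underlying database non-public).

```python
# Minimal illustrative sketch, not how any real screening system works.
# All names, the window length k, and the example sequences are hypothetical.

def build_hazard_index(hazard_sequences, k=42):
    """Index every length-k window of each hazardous sequence."""
    index = set()
    for seq in hazard_sequences:
        seq = seq.upper()
        for i in range(len(seq) - k + 1):
            index.add(seq[i:i + k])
    return index

def screen_order(order_sequence, hazard_index, k=42):
    """Flag an order if any length-k window matches the hazard index."""
    seq = order_sequence.upper()
    return any(seq[i:i + k] in hazard_index for i in range(len(seq) - k + 1))

# Hypothetical usage: a placeholder "hazardous" sequence and a customer order
# that contains a 42-nucleotide window from it.
hazards = ["ATG" + "ACGT" * 30]
index = build_hazard_index(hazards)
print(screen_order("TTTT" + "ACGT" * 15 + "GGGG", index))  # True -> order gets flagged
```

Even in this toy version, the trade-off is visible: whoever holds `hazards` in plaintext holds the list of sequences of concern, which is exactly why making such a database public raises information-hazard worries.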
Without saying much about the merits of various commenters’ arguments, I wanted to check if this is a rhetorical question:
Is anyone on this forum in a better position than the Secretary-General of the UN to analyze, for example, the impact of Israel’s actions on future, unrelated conflicts?
If so, this is an appeal to authority that isn’t very helpful in advancing this discussion. If it’s an actual question, never mind.
Thanks for writing this up. I just wanted to note that the OWID graph that appears while hovering over a hyperlink is neat! @JP Addison or whoever created that, cool work.
I just wanted to note that I appreciated this post and the subsequent discussion, as it quickly allowed me to get a better model of the value of antivirals. Publicly visible discussions around biosecurity interventions are rare, making it hard to understand other people’s models.
I appreciate that there are infohazard considerations here, but I feel they make it too hard for people to scrutinize the views of others.
Appreciated the 5-minute summary; I think more reports of this length should have two summaries, one a TL;DR and the other similar to your 5-minute summary.
Noting my excitement that you picked up on the idea and will actually make this happen!
The structure you lay out sounds good.
Regarding the winning team, will there be financial rewards? I’d give it >70% that someone would fund at least a ~$1000 award for the best team.
Despite how promising and scalable we think some biosecurity interventions are, we don’t necessarily think that biosecurity should grow to be a substantially larger fraction of longtermist effort than it is currently.
Agreed that it shouldn’t grow substantially, but ~doubling the share of highly-engaged EAs working on biosecurity feels reasonable to me.
Spencer Greenberg also comes to mind; he once noted that his agreeableness is in the 77th percentile. I’d consider him a generator.
I do mean EAs with a longtermist focus. While writing about highly-engaged EAs, I had Benjamin Todd’s EAG talk in mind, in which he pointed out that only around 4% of highly-engaged EAs are working in bio.
And thanks for pointing out I should be more precise. To qualify my statement, I’m 75% confident that this should happen.
That’s a good pointer, thanks! I’ll drop the reference to Diggans and Leproust for now.
Appreciated this post! Have you considered crossposting this to LessWrong? It seems like an important audience for this.
@CarlaZoeC or Luke Kemp, could you create another forum post solely focused on your article? This might lead to more focused discussions, separating debate about community norms from discussion of the arguments within your piece.
I also wanted to express that I’m sorry this experience has been so stressful. It’s crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful to launch constructive discussions.