Three Reflections from 101 EA Global Conversations
I recently attended EAGxOxford, EAGxBoston, and EAG London. In total, I had about 101 one-on-one conversations (±10, depending on how you count casual/informal 1:1s). The vast majority of these were with people interested in longtermist community-building or AI safety.
Here are three of my biggest takeaways:
1) There are many people doing interesting work who aren’t well-connected
I recently moved to Berkeley, one of the major EA hubs for AI safety research and longtermist movement-building. I live with several people who are skilling up in alignment research, and I work out of an office with people who regularly talk about the probability of doom, timelines, takeoff speeds, MIRI’s agenda, Paul and Eliezer dialogues, ELK, agent foundations, interpretability, and, well, you get it.
At the EAGs, I was excited to meet many people (>30) interested in AI safety and longtermist community-building. Several (>10) of them were already dedicating a large portion of their time to AI safety or longtermist community-building (e.g., by spending a summer on AI safety research, leading a local EA chapter, or contracting for EA orgs).
One thing stood out to me, though: Many of the people I spoke to, including those who were already investing >100 hours into EA work, weren’t aware of the people/models/work in Berkeley and other EA hubs.
Here’s a hypothetical example:
Alice: I’ve spent the last several semesters skilling up in AI safety research. I took several ML classes, and I’m going to be spending the summer working under [professor] at [my university] on [ML project that doesn’t really have to do with AI safety].
Me: Wow, that’s great! Have you considered working on the Eliciting Latent Knowledge challenge, or reading Evan Hubinger’s Risks from Learned Optimization sequence, or trying to distill alignment articles?
Alice: Oh, no! I haven’t heard of most of these things… do you think it would be useful for me to do that instead?
Me: Well, I’m not sure what you should do. But I would encourage you to at least consider these options and become familiar with these resources and opportunities. As an example, did you ever consider applying to the Long-Term Future Fund to skill up on your own, and maybe visit Berkeley and some other EA hubs?
The point here is not that Alice should immediately drop what she’s doing. But I found it interesting how many people didn’t even realize what options they had available. Alice, for example, could apply for a grant to skill up in AI safety research in an EA hub. But people like Alice often don’t realize this, or even if they do, they don’t seriously consider it when thinking about their summer plans.
I don’t think people should blindly defer to the people/models in EA hubs. But I do think that exposure to these people/models will generally help people make more informed decisions. Two quick examples:
People skilling up in AI safety research would generally benefit from understanding the major criticisms/concerns with current alignment research agendas. It seems useful to at least be exposed to some of the doomy people and understand why they’re so doomy.
People skilling up in longtermist community-building could benefit from understanding the major criticisms/concerns with general community-building efforts. It seems useful to at least be exposed to the arguments around impact being heavy-tailed, mass outreach compromising the epistemics of the community and making it less attractive to people who take ideas seriously, and concerns around community-builders not knowing enough about the issues they are community-building for.
One of the easiest ways to do this, I claim, is to talk directly to people doing this kind of work. After 1:1s with people who were doing (or seriously considering) longtermist work, I often asked, “Who would be good for this person to talk to?” and then I immediately threw them into some group chats.
More broadly, I’ve updated in the direction of the following claim: There are people doing (or capable of doing) meaningful longtermist work outside of major EA hubs. I’m excited about interventions that try to find these individuals and connect them to people who can support their work, challenge their thinking, and introduce them to new opportunities.
2) Considering wide action spaces is rare and valuable
It’s extremely common for people to think about the opportunities that are in front of them, rather than considering the entire action space of possibilities.
A classic example is when I met Bob, a community-builder at Peter Singer University.
Bob: I’ve been running the PSU group for the last year, and it’s been going pretty well. We have an AI safety group now, and we’re thinking about ways to do more projects and contests. I graduate this year, and I’m planning to do community-building full-time at PSU.
Me: Wow, great work at PSU, Bob! Out of curiosity, have you considered any other ways you could use your community-building aptitudes? Like, what if you take a step back… what do you think are the most important challenges that we’re facing? And how could a community-builder—not necessarily you—how could some imaginary person who just gets dropped in from the sky make the biggest impact?
Bob: [Mentions something pretty cool and ambitious].
Me: Yeah, that seems worth thinking more about. I also wonder if you’ve thought about supporting community-building efforts at MIT, or in India, or running a global alignment competition, or running a research scholars program, or…
Bob: Woah, I haven’t thought about those, but like, why me? I don’t know anything about [India/MIT/competitions/research programs].
Me: Sure… and I’m not saying you would be a good fit for any of this. I barely know you! But I’d be pretty excited for you to at least consider some of these wilder options fairly seriously, for at least 10-60 minutes, before you fully dismiss them. And I think people often underestimate how much they could learn about a particular topic if they really tried.
I think “considering wide action spaces” and “taking weird ideas seriously” are two of the traits that I most commonly see in highly impactful people. To be clear, I think considerations of personal fit are important, and we don’t want everyone trying everything. But I claim that people generally default to dismissing ideas prematurely and failing to seriously consider what it would look like to do something that deviates from the natural, intuitive, default pathways.
If you are a student at PSU, I encourage you to think seriously about projects, internships, research positions, skilling-up quests, and other opportunities that exist outside of PSU. Maybe the best thing for you to do is to stay, but you won’t know unless you consider the wide action space.
3) People should write down their ideas
At least 10 times during the EAGs, someone described something they had thought about in some detail (examples: a project proposal, a grant idea, comparisons between career options they had been considering).
And I asked, “Wow, have you written any of this up?”
And the person (usually) responded, “Oh… uh. No—well, not yet! I might write it up later/I’m planning to write it up/Maybe after the conference I’ll write it up/I’m nervous to write it up/I don’t have enough to actually write up…”
Some benefits of writing that I’ve noticed:
Writing helps me think better. For instance, writing often forces me to be more concrete about my ideas, and it often helps me identify new uncertainties/confusions.
Writing improves the quantity and quality of feedback that I receive. Some EAs are much better at critiquing ideas in writing than in conversation.
When other people share their writing with me, I find it useful to be able to reflect on the ideas before conversing with them.
When other people share their writing with me, I generally take them more seriously. I update in the direction of “ah, they have thought seriously about this, and they might actually want to do this!”
If you’re reading this, I encourage you to take 30-60 minutes to start writing something. Here are some examples of things that I’ve been encouraging my friends (and myself) to write up:
How I’m Currently Thinking about My Career & Path to Impact
What do I think are the World’s Biggest Problems, and what are my Biggest Uncertainties?
How to Support Me When I am Upset
Should I Take this Job/Internship?
Bugs, How I am Working on them, and How my Friends can Help
If you write something down by April 30, feel free to submit it to the Community Builder Writing Contest.
Miscellaneous Reflections
I think EAGxOxford was more valuable for me than EAGxBoston. This was mostly because I am based in the US, so there were many people in the UK who didn’t know me, the others in my network, or the ideas that have been swimming around the US community-building/AI alignment scene. This is a slight update toward going to conferences that get me to meet people outside my “bubble,” and toward visiting non-US EA hubs more generally.
I learned a lot about S-risks. Grateful to Linh Chi Nguyen for explaining multi-AI scenarios, spiteful preferences, “near-miss” scenarios, and astronomical misuse. I went from thinking “yeah s-risk stuff seems important” to “oh wow, there are some very specific and tangible problems in AI safety that are especially important from an s-risk perspective.”
I’ve also been thinking about s-risks in light of the Death with Dignity post and follow-up posts (like this, this, and this). If nothing else, it reminds me that there are outcomes even worse than “everyone dies.” If we fail to produce aligned AGI, maybe we can at least produce AGI that doesn’t torture anyone. (I imagine some people have written about this—please link in the comments if you know more about this!)
A lot of people are interested in alignment contests! Aris Richardson (from UC Berkeley) is currently running an alignment distillation contest for college students. If you want to talk about alignment contests, feel free to reach out to me (or her!).
A lot of people are interested in supporting AI safety researchers! Redwood Research is hiring for some exciting roles, including Head of Community, Operations Manager, Recruiting & MLAB Lead, and IT Analyst. I encourage you to apply, even if you’re not sure about your fit.
If you liked this piece, you might also like this reflection from the EA student summit (somewhat outdated) and this reflection from a few months ago (less outdated).
I’m grateful to Madhu Sriram, Luise Wöhlke, Lara Thurnherr, and Harriet Patterson for feedback on a draft of this post.