Something like a big version of SERI-Mats … (My impression is that SERI-Mats could become this one day, but I’d also be excited to see more programs “compete” with SERI-Mats).
At EAG-SF I asked a MATS organizer if we could get other versions of MATS, e.g. a MATS competitor at MIT. Their response was that only one of the two could survive because there are currently only ~15 people capable of doing this kind of mentorship. Mentors are the bottleneck for scaling up programs like MATS, not field builders.
Targeted Outreach to Experienced Researchers
Isn’t Vael Gates already mostly focused on this? (“My projects tend to be aimed more at outreach and at older populations – AI researchers, academia and industry.”) Curious what the main benefits are of a separate project.
Understanding AI trends and AI safety outreach in China
See this comment: Tianxia focuses on building the longtermist community, while Concordia focuses on all things AI, including recruiting STEM undergrads and AI grad students to start working on AI safety. I think you already know this, so I’m wondering why you think it’s not enough to focus on scaling up these existing orgs.
Something that helps people skill-up in AIS, management, community-building, applied rationality, and other useful stuff.
I don’t see why people need to be good at management and community building if they end up doing AIS technical research. Maybe you’re using “generalists” to mean “people who will start new AIS orgs/projects”?
Help them find therapists, PAs, nutritionists, friends, etc.
Ops teams can take care of some of this. AI Safety Support offers a completely free health coach for people working on AI safety. More importantly, I think an executive assistant who works exclusively for Paul Christiano would save him more time than a larger org that can’t work with him as closely. MacAskill certainly has assistants, and the top alignment researchers should as well. I think your idea is to have an org that executive assistants can outsource some common tasks to?
I don’t see why people need to be good at management and community building if they end up doing AIS technical research. Maybe you’re using “generalists” to mean “people who will start new AIS orgs/projects”?
I think the implicit assumption is that if you’re someone who wants to make the “AI story go well,” a necessary prerequisite is understanding AIS in a fair amount of detail. (See e.g. this elucidation by Owen Cotton-Barratt.) I’m not sure I believe this myself, but at the very least it sounds plausible.
Love seeing posts like this!
Might want to mention CAIS here?