If Redwood and Anthropic are flooded with applications from underqualified applicants, I think this is just because they barely have any hard requirements on their job postings. It’s a lot of fluff like “Have broad knowledge of many topics in computer science, math, and machine learning, and have enthusiasm for quickly picking up new topics.” In contrast, most job postings say something like: you should know these specific topics, have x years of experience, have an MS degree, etc. So I don’t think people should feel very discouraged just because of their low offer rate. EDIT: If only 1% of applicants get accepted, the reason is probably something like “you need significant experience in natural language processing or data engineering or something, even though this wasn’t mentioned in the job posting” rather than “you need to be smarter than 99% of EAs”. (That said, I think it can make sense for Redwood and Anthropic to write vague requirements on their job postings, so that they don’t miss out on great candidates who otherwise wouldn’t have applied.)
So rather than looking at the offer rate for Redwood and Anthropic, I think the more relevant question is: how much harder is it to get an AI safety position compared to other AI positions? Many universities have general AI clubs, which may be fairly popular and presumably help members pursue careers in AI. Quality AI positions can be hard to get, but that doesn’t stop AI clubs from being viable or fruitful. Likewise, I think it would often make sense to have an AI safety club. I don’t think an EA club should go all-in on AI safety and make it sound like all their members should only be trying to get AI safety jobs, though.
EDIT: Of course, just because an AI club is viable doesn’t mean that an AI safety club is. But I’m optimistic that it is, at least at certain universities. At EA at Georgia Tech, which we started just last year, we currently have 35+ people in our AGI Safety Fundamentals Program. Starting this week, we’ll be holding weekly events with the AI alignment speaker series that Harvard EA has organized this semester. We’re considering spinning this off into a separate AI safety club before the next school year, which could then run additional programming like general discussion events or social events. EA Oxford started a dedicated AI safety club this semester with an impressive lineup of guest speakers, and EA MIT’s new AI safety club is also going well.
What’s the distinction between “community building” and “movement building”?
So I don’t think people should feel very discouraged just because of their low offer rate.
To clarify, is your main point here that AI safety orgs could absorb a lot more talent if folks were more qualified? If so, that’s also my understanding. I doubt, however, that general university groups would be the optimal way to build qualified and motivated applicants, because student clubs inherently spend a lot of time on non-development-focused activities. It seems like outsourcing the recruitment pipeline to, say, Cambridge’s AGI Safety Fundamentals program, the GCP Guides program, plus cross-university skill-building workshops would accomplish most of what a successful university AI safety club might accomplish, and then some, with a lot less organizer time.
Given that those programs aren’t fully built out, it might make sense for organizers to spend more of their time helping to build up those programs, rather than devote a ton of time to an AI club at their home university.
What’s the distinction between “community building” and “movement building”?
Good catch—I added “movement building” late last night and it’s way too vague. I meant it to encompass important things that recruiting doesn’t really touch on, like upskilling. I’ll add a note.
Yup, I think it would be helpful if more people seriously advertised the AGI Safety Fundamentals program or recommended the GCP Guides program (once it builds capacity to take on more people), if they don’t have time to run those programs locally or have more valuable things to do. Something else I would add to the pipeline is having students learn more about machine learning through courses, MOOCs, bootcamps, or research opportunities. People are more likely to get engaged by things that are local and in-person, but I think the gap between in-person and outsourced virtual engagement can be narrowed somewhat with the right marketing and some in-person activities, like weekly group lunches/dinners.
I’d be excited to see cross-university skill-building workshops. Do you have more details on what you’re envisioning here? What sorts of workshops do you think would be most useful? But it’s also possible that creating these isn’t in a student’s comparative advantage, especially if they aren’t already that knowledgeable about the skills they want to teach.
But it’s also possible that creating these isn’t in a student’s comparative advantage, especially if they aren’t already that knowledgeable about the skills they want to teach.
Right, this is what I suspect. It’s naturally more efficient to expand a pre-existing program than create a new one from scratch, especially in highly technical fields.
Do you have more details on what you’re envisioning here? What sorts of workshops do you think would be most useful?
I don’t have a great inside view on this, but the sorts of workshops Sydney has been running seem pretty popular (we had a couple of USC fellows attend her “Impact Generator” workshop, and they found it both helpful and motivating). Lightcone in the Bay is doing a ton of that too, and GCP was planning to build out workshops after fine-tuning their Guides program.
Without confidently claiming that this is the case with these organisations, it seems worth flagging that if the sort of hard cutoffs you’re talking about don’t track talent particularly well, it may be worth it for orgs to pay the cost of reviewing more applications rather than risk some of the few talented people self-excluding. It’s noteworthy that I can instantly think of three field leaders in AI safety who either didn’t start or didn’t finish undergrad.
Having said that, Andy Jones of Anthropic did set a pretty clear bar in his recent post pointing out the need for more engineers in safety:
Could write a substantial pull request for a major ML library.
Yeah, this post is particularly clear.