The organizers of such a group are presumably working towards careers in AI safety themselves. What do you think about the opportunity cost of their time?
To bring more people into the field, this strategy seems to delay the progress of the students who are currently furthest along in the AI safety pipeline. Broad awareness of AI risk among potentially useful individuals should absolutely be higher, but it doesn’t seem like the #1 bottleneck compared to developing people from “interested” to “useful contributor”. If somebody is on that cusp themselves, should they focus on personal development or outreach?
Trevor Levin and Ben Todd had an interesting discussion of toy models on this question here: https://forum.effectivealtruism.org/posts/ycCBeG5SfApC3mcPQ/even-more-early-career-eas-should-try-ai-safety-technical?commentId=tLMQtbY3am3mzB3Yk#comments
Good points! This reminds me of the recent Community Builders Spend Too Much Time Community Building post. Here are some thoughts about this issue:
1. Field-building and up-skilling don’t have to be orthogonal. I’m hopeful that a lot of an organizer’s time in such a group would involve doing the same things general members going through the system would be doing, like facilitating interesting reading group discussions or working on interesting AI alignment research projects. As the “too much time” post suggests, maybe just doing the cool learning stuff is a great way to show that we’re serious, get new people interested, and keep our group engaged.
2. Like Trevor Levin says in that reply, I think field-building is more valuable now than it will be as we get closer to AGI, and I think direct work will be more valuable later than it is now. Moreover, I think field-building while you’re a university student is significantly more valuable than field-building after you’ve graduated.
3. I don’t necessarily think the most advanced students will always need to be organizers under this model. I think there’s a growing body of EAs who want to help with AI alignment field-building but don’t necessarily think they’re the best fit for direct work (maybe they’re underconfident, though), and this could be a great opportunity for them to help at little opportunity cost.
4. I’m really hopeful about several new orgs people are starting for field-wide infrastructure that could help offset a lot of the operational costs of this, including orgs that might be able to hire professional ops people to support a local group.
5. That’s not to say I recommend every student who’s really into AI safety delay their personal growth to work on starting a university group. Just that if you have help and think you could have a big impact, it might be worth considering easing off the solo up-skilling pedal to add in some more field-building.
Agreed with #1: for people doing both AI safety research and AI safety community-building, each plausibly makes you more effective at the other; the time spent figuring out how to communicate these concepts might help you build a full map of the field, and being knowledgeable yourself certainly makes you more credible and a more exciting field-builder. (The flip side of “Community Builders Spend Too Much Time Community-Building” is “Community Builders Who Do Other Things Are Especially Valuable,” at least in per-hour terms, though this might not be the case for higher-level EA meta people.) I think Alexander Davies of HAIST has a great sense of this and is quite sensitive to how seriously community builders will be taken given various levels of AI technical familiarity.
I also think #3 is important. Once you have a core group of AI safety-interested students, it’s important to figure out who is better suited to spend more time organizing events and doing outreach and who should just be heads-down skill-building. (It’s important to get a critical mass such that this is even possible; EA MIT finally got enough organizers this spring that one student who really didn’t want to be doing community-building could finally focus on his own upskilling.)
In general, I think modeling it in “quality-adjusted AI Safety research years” (or QuASaRs, name patent-pending) could be useful; if you have some reason to think you’re exceptionally promising yourself, you’re probably unlikely to produce more QuASaRs in expectation by field-building, especially because you should be using your last year of impact as the counterfactual. But if you don’t (yet) — a “mere genius” in the language of my post — it seems pretty likely that you could produce lots of QuASaRs, especially at a top university like Stanford.
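To make that framing concrete, here’s a minimal back-of-envelope sketch of the QuASaR comparison. Every number in it is a made-up placeholder (own research quality, recruits per year, career lengths, the counterfactual discount), not an estimate from this thread; the point is only that the comparison reduces to multiplying a few factors, with your marginal (last) year of research as the cost side.

```python
# Toy comparison of expected QuASaRs from a marginal year of your own
# research vs. a year of field-building. All numbers are hypothetical
# placeholders for illustration, not estimates from the comment above.

def direct_work_quasars(own_quality: float, marginal_years: float) -> float:
    """QuASaRs from your own research: quality-adjustment x research years given up."""
    return own_quality * marginal_years

def field_building_quasars(
    recruits_per_year: float,        # counterfactual new contributors you cause
    avg_recruit_quality: float,      # average quality-adjustment per recruit
    avg_recruit_career_years: float, # research years each recruit goes on to contribute
    counterfactual_discount: float = 0.5,  # haircut for people who'd have joined anyway
) -> float:
    """QuASaRs produced via the people you bring into the field."""
    return (recruits_per_year * avg_recruit_quality
            * avg_recruit_career_years * counterfactual_discount)

if __name__ == "__main__":
    # An "exceptionally promising" researcher: high own quality, so the bar
    # for field-building to beat their marginal research year is high.
    print("exceptional researcher, direct work:",
          direct_work_quasars(own_quality=3.0, marginal_years=1.0))
    # A "mere genius" organizer at a large university: even modest recruiting
    # numbers can exceed one year of their own research output.
    print("mere genius, field-building:        ",
          field_building_quasars(recruits_per_year=2.0,
                                 avg_recruit_quality=1.0,
                                 avg_recruit_career_years=10.0))
```

Under these (entirely illustrative) inputs, field-building dominates for the “mere genius” and loses for the exceptional researcher, which is just the qualitative claim above restated as arithmetic; the interesting work is in arguing for the actual parameter values.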