Agreed with #1, in that for people doing both AI safety research and AI safety community-building, each plausibly makes you more effective at the other: the time spent figuring out how to communicate these concepts can help you build a full map of the field, and being knowledgeable yourself certainly makes you more credible and a more exciting field-builder. (The flip side of “Community Builders Spend Too Much Time Community-Building” is “Community Builders Who Do Other Things Are Especially Valuable,” at least in per-hour terms; this might not be the case for higher-level EA meta people.) I think Alexander Davies of HAIST has a great sense of this and is quite sensitive to how seriously community builders will be taken given various levels of AI technical familiarity.
I also think #3 is important. Once you have a core group of AI safety-interested students, it’s important to figure out who is better suited to spend more time organizing events and doing outreach, and who should just be heads-down skill-building. (It’s important to reach a critical mass such that this is even possible; EA MIT finally got enough organizers this spring that one student who really didn’t want to do community-building could focus on his own upskilling.)
In general, I think modeling this in “quality-adjusted AI safety research years” (or QuASaRs, name patent-pending) could be useful: if you have some reason to think you’re exceptionally promising yourself, you’re unlikely to produce more QuASaRs in expectation by field-building, especially because you should be using your last year of impact as the counterfactual. But if you don’t (yet) — a “mere genius” in the language of my post — it seems pretty likely that you could produce lots of QuASaRs, especially at a top university like Stanford.
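Since the QuASaR framing is really just an expected-value calculation, here’s a minimal back-of-envelope sketch in Python of how the comparison might go. Every number and name here (recruit counts, quality multipliers, the counterfactual discount) is an illustrative assumption, not a figure from the post; the point is only the shape of the comparison: field-building wins when the discounted research-years you recruit exceed the value of your own marginal research year.

```python
# A minimal back-of-envelope QuASaR sketch. Every number and parameter
# below is an illustrative assumption, not a figure from the post.

def quasars_from_direct_work(research_years: float, quality: float) -> float:
    """QuASaRs from doing AI safety research yourself.

    quality: output relative to a median safety researcher
    (1.0 = a median "mere genius", 10.0 = exceptionally promising).
    """
    return research_years * quality


def quasars_from_field_building(
    recruits: float,
    avg_quality: float,
    avg_research_years: float,
    p_would_join_anyway: float,
) -> float:
    """Expected QuASaRs from one year of field-building.

    p_would_join_anyway: chance a recruit enters the field without
    you, i.e. the counterfactual discount on each recruit.
    """
    return recruits * avg_quality * avg_research_years * (1 - p_would_join_anyway)


# A year of field-building displaces roughly your *last* year of
# research, so compare it to one marginal year of your own work.
one_year_field_building = quasars_from_field_building(
    recruits=1, avg_quality=0.8, avg_research_years=15, p_would_join_anyway=0.4
)  # 7.2 QuASaRs in expectation

mere_genius_year = quasars_from_direct_work(1, quality=1.0)    # 1.0
exceptional_year = quasars_from_direct_work(1, quality=10.0)   # 10.0

print(one_year_field_building > mere_genius_year)   # True: field-build
print(one_year_field_building > exceptional_year)   # False: do research
```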