This post surprised me because I remember seeing a few comments on the forum recently expressing sentiments along the lines of “EA is maybe getting too focused on AI” and “too many people are going into AI safety and we might end up short-staffed on other useful skills.” I don’t really remember which posts they were; I think one mentioned a recent podcast from 80,000 Hours.
But I heard that Redwood’s second alignment-focused machine learning bootcamp got over 800 applicants for 40 spots. So I’m wondering: is a surge of technical researchers imminent? And would that make it less valuable for someone like me, with no coding or machine learning experience, to consider this path?
(Maybe obvious point, but) there just aren’t that many people doing longtermist EA work, so basically every problem will look understaffed, relative to the scale of the problem.
There are tons of people vaguely considering working on alignment, and not a lot of people actually working on alignment.
Yes! This is basically the whole post condensed into one sentence.