Doesn’t this depend on what you consider the “top tier areas for making AI go well” (which the post doesn’t seem to define)? If that happens to mean AI safety research institutes focused specifically on preventing “AI doom” via work you consider non-harmful, then naively I’d expect nearly all the people in them to be aligned with the movement focused on that priority: those are relatively small niches, the OP and their organisation and the wider EA movement are actively nudging people into them on the EA assumption that they’re the top tier ones, and anyone looking more broadly at AI as a professional interest will find a whole host of lucrative alternatives where they won’t be scrutinised on their alignment at interview and can go and make cool tools and/or lots of money on options.
If you define it as “areas which have the most influence on how AI is built”, then those are more the people @titotal was talking about, and yeah, they don’t seem particularly aligned with EA, not even the ones that say safety-ish things as a marketing strategy and took money from EA funds.
And if you define “safety” more broadly, there are plenty of other AI research areas focusing on things like cultural bias or job market impact. But you and your organisation and 80,000 Hours probably don’t consider those top tier for effectiveness, and (not coincidentally) I suspect they have very low proportions of EAs. The same goes for defence companies who’ve decided the “safest” approach to AI is to win the arms race. Similarly, it’s no surprise that people who happen to be very concerned about morality, utilitarianism and doing the best they can with their 80k hours of working life, but who get their advice from Brutger, don’t become AI researchers at all, despite the similarities of their moral views.