There’s definitely some selection bias (I know a lot of EAs), but anecdotally, I feel that almost all the people who, in my view, are “top-tier positive contributors” to shaping AGI seem to exemplify EA-type values (though it’s not necessarily their primary affinity group).
Some “make AGI go well influencers” who have commented or posted on the EA Forum and, in my view, are at the very least EA-adjacent include Rohin Shah, Neel Nanda, Buck Shlegeris, Ryan Greenblatt, Evan Hubinger, Oliver Habryka, Beth Barnes, Jaime Sevilla, Adam Gleave, Eliezer Yudkowsky, Davidad, Ajeya Cotra, Holden Karnofsky … Most of these people work on technical safety, but I think the same story is roughly true for AI governance and other “make AGI go well” areas.[1]
I personally wouldn’t describe all of the above people’s work as being in my absolute top tier according to my idiosyncratic worldview (note that many of them are working on at least somewhat conflicting agendas, so they can’t all be in my top tier). It could also be true that “early EA” was a strong attractor for such people but that EA has since lost its ability to attract “future AI thought leaders”.[2]
I also want to make a stronger, but harder to justify, claim that the vast majority of people doing top-tier work in AI safety are ~EAs. For example, many people would consider Redwood Research’s work top tier, and both Buck and Ryan (according to me) exemplify EA values (scope sensitivity, altruism, ambition, etc.). Imo, some combination of scope sensitivity, impartiality, altruism, and willingness to take weird ideas seriously seems extremely useful (and maybe even critical) for doing the most important “make AI go well” work.
I know that some of these people wouldn’t “identify as EA”, but that’s not particularly relevant. The thing I’m trying to point at is a set of values that are common in EA but rare among AI researchers, ambitious people, elites, and the general public.
It also seems good to mention that there are some people who are EAs (according to my definition) having a large negative impact on AI risk.
I would say the main people “shaping AGI” are the people actually building models at frontier AI companies. It doesn’t matter how aligned “AI safety” people are if they don’t have a significant say on how AI gets built.
I would not say that “almost all” of the people at top AI companies exemplify EA-style values. The most influential person in AI is Sam Altman, who has publicly split with EA after EA board members tried to fire him for being a serial liar.
I agree with some parts of your comment, though it’s not particularly relevant to my thesis: that most people with significant responsibility for most of the top-tier work (according to my view on the top-tier areas for making AGI go well) have values that are much more EA-like than would naively be expected.
Doesn’t this depend on what you consider the “top tier areas for making AI go well” (which don’t seem to be defined by the post)? If those happen to be AI safety research institutes focused specifically on preventing “AI doom” via work you consider non-harmful, then naively I’d expect nearly all of the people in them to be aligned with the movement focused on that priority: those are relatively small niches, the OP and their organisation and the wider EA movement are actively nudging people into them based on the EA assumption that they’re the top-tier ones, and anyone looking more broadly at AI as a professional interest will find a whole host of lucrative alternatives where they won’t be scrutinised on their alignment at interview and can go and make cool tools and/or lots of money on options.
If you define it as “areas which have the most influence on how AI is built” then those are more the people @titotal was talking about, and yeah, they don’t seem particularly aligned with EA, not even the ones that say safety-ish things as a marketing strategy and took money from EA funds.
And if you define “safety” more broadly, there are plenty of other AI research areas focusing on stuff like cultural bias or job market impact. But you and your organisation and 80,000 Hours probably don’t consider them top tier for effectiveness, and (not coincidentally) I suspect these have very low proportions of EAs. Same goes for defence companies who’ve decided the “safest” approach to AI is to win the arms race. Similarly, it’s no surprise that people who happen to be very concerned about morality and utilitarianism and doing the best they can with their 80k hours of working life, but who get their advice from Brutger, don’t become AI researchers at all, despite the similarities of their moral views.
I think this may have been a misunderstanding, because I also misunderstood your comment at first. At first you refer simply to the people who play the biggest role in shaping AGI—but then later (and in this comment) you refer to people who contribute most to making AGI go well—a very important distinction!
I’m sure it was a misunderstanding, but fwiw, in the first paragraph, I do say “positive contributors” by which I meant people having a positive impact.