Re: Hacker culture
AI safety becomes the single community that’s the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that’s the place to go if you want to nerd out about the models. It feels like the early days of hacker culture.
I’d like to constructively push back on this: The research and open-source communities outside AI Safety that I’m embedded in are arguably just as hands-on, if not more so, since their attitude towards deployment is usually more … unrestricted. For context, I mess around with generative agents and learning agents.
I broadly agree that the AI Safety community is made up of very smart people working on very challenging and impactful problems. I’m just skeptical that what you’ve described is particularly unique to AI Safety, and think that description would apply to many ML-related spaces. Then again, I could be extremely inexperienced and unaware of the knowledge gap between top AI Safety researchers and everyone else.
Re: Environmentalism
much more similar to the environmentalist movement. It has broader reach, but alienates a lot of the most competent people in the relevant fields. ML researchers who find themselves in AI safety spaces are told they’re “worse than Hitler”
I was a climate activist organising FridaysForFuture (FFF) protests, and I don’t recall this ever being the prevailing perception/attitude. Mainstream activist movements and scientists put up a united front, and they still mutually support each other today. Even if it was superficial, FFF always emphasised “listen to the science”.
From a survey of FFF activists:
Our data show that activists overwhelmingly derive their goals from scientific knowledge and reject the idea that science could be used imprecisely just as an instrument to attain their goals.[1]
I’m also fairly certain the environmentalist movement was a counterfactual net positive, with Will MacAskill himself commenting on the role of climate advocacy in funding solar energy research and accelerating climate commitments in What We Owe the Future. However, I will admit that the anti-nuclear stance was exactly as dumb as you’ve implied, and it embarrasses me how many activists expressed it.
Re: Enemy of my Enemy
Personally, I draw a meaningful distinction between being anti-AI capabilities and pro-AI Safety. Both are strongly and openly concerned about rapid AI progress, but the two groups have very different motivations, proposed solutions, and degrees of epistemic rigour. Being anti-AI does not mean being pro-AI Safety; the former is a much larger umbrella movement of people expressing strong opinions on a disruptive, often misunderstood field.
I’d like to constructively push back on this: The research and open-source communities outside AI Safety that I’m embedded in are arguably just as hands-on, if not more so, since their attitude towards deployment is usually more … unrestricted.
I think we agree: I’m describing a possible future for AI safety, not making the claim that it’s anything like this now.
I was a climate activist organising FridaysForFuture (FFF) protests, and I don’t recall this ever being the prevailing perception/attitude.
Not sure what you mean by this, but in some AI safety spaces ML capabilities researchers are seen as opponents. I think the relevant analogy here would be, e.g., an oil executive who’s interested in learning how to reduce their company’s emissions; I expect they’d get a pretty cold reception.
Re “alienation”, I’m also thinking of stuff like the climate activists who are blocking highways, blocking offices, etc.
I’m also fairly certain the environmentalist movement was a counterfactual net positive, with Will MacAskill himself commenting on the role of climate advocacy in funding solar energy research and accelerating climate commitments in What We Owe the Future. However, I will admit that the anti-nuclear stance was exactly as dumb as you’ve implied, and it embarrasses me how many activists expressed it.
Makes sense! Yeah, I agree that a lot has been done to accelerate research into renewables; I just feel less confident than you about how this balances out compared with nuclear.
Personally, I draw a meaningful distinction between being anti-AI capabilities and pro-AI Safety.
I like this distinction, feels like a useful one. Thanks for the comment!
[1] Frontiers | “Listen to the science!”—The role of scientific knowledge for the Fridays for Future movement (frontiersin.org)