I’m a little confused as to why we consider the leaders of AI companies (Altman, Hassabis, Amodei, etc.) to be “thought leaders” in the field of AI safety in particular. Their job descriptions are to grow the company and increase shareholder value, so their public personas and statements have to reflect that. Surely they are far too compromised for their opinions to be taken very seriously; they couldn’t make strong statements against AI growth and development even if they wanted to, because of their jobs and positions.
The recent post “Sam Altman’s chip ambitions undercut OpenAI’s safety strategy” seems correct and important, while also almost absurdly obvious: the guy is trying to grow his company, and it needs more and better chips. We don’t seriously listen to big tobacco CEOs about the dangers of smoking, oil CEOs about the dangers of climate change, or factory farming CEOs about animal suffering, so why do we take the opinions of AI bosses about safety in even moderate good faith? The past is often the best predictor of the future, and the past here says that CEOs will grow their companies while doing whatever they can to maintain public goodwill and minimise the backlash.
I agree that these CEOs could be considered thought leaders on AI in general and on the future and potential of AI, and their statements about safety and the future are practically important and should be engaged with seriously. But I don’t really see the point of engaging with them as thought leaders in the AI safety discussion; it would make more sense to me to engage with intellectuals and commentators who can fully and transparently share their views without crippling conflicts of interest.
That said, I’m interested to hear arguments in favour of taking their thoughts more seriously.
Who is considering Altman and Hassabis thought leaders in AI safety? I wouldn’t even consider Altman a thought leader in AI—his extraordinary skill seems mostly social and organizational.
There’s maybe an argument for Amodei, as Anthropic is currently the only one of the companies whose commitment to safety over scaling is at least reasonably plausible.
Thanks, that’s a helpful perspective, and I would be happy if it were true that they aren’t considered AI safety thought leaders. I do feel like they are often seen this way in the public sphere, though, and sometimes here on the forum too.
I realize that my question sounded rhetorical, but I’m actually interested in your sources or reasons for your impression. I certainly don’t have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven’t encountered the position you’re concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read something into some answers, but overall I don’t get the impression that the AI CEOs are seen as big safety proponents.
I think “thought leader” sometimes means “has thoughts at the leading edge” and sometimes means “leads the thoughts of the herd on a subject,” and that there is sometimes deliberate ambiguity between the two.