I agree that it would be good to have citations. In case neither Ozzie nor anyone else here finds it a good use of their time to do it: I've been following OpenAI's and Sam Altman's messaging specifically for a while, and Ozzie's summary of their (conflicting) messaging seems roughly accurate to me. It's easy to notice the inconsistencies in Sam Altman's messaging, especially when it comes to safety.
Another commenter (whose name I forget; I think he was from CLTR) put it nicely: it feels like Altman does not have one consistent set of beliefs (like an ethics/safety researcher would) but tends to say different things that are useful for achieving his goals (like many CEOs do), and he seems to do that more than other AI lab executives at Anthropic or DeepMind.
Thanks for sharing your impressions. But even if many observers have this impression, it still seems like it could be quite valuable to track down exactly what was said, because there’s some gap between:
(a) has nuanced models of the world and will strategically select different facets of those to share on different occasions; and
(b) will strategically select what to say on different occasions without internal validity or consistency.
… but either of these could create the impression of inconsistency in observers. (Not to say that (a) is ideal, but I think that (b) is clearly more egregious.)
I imagine we’re basically all in agreement on this.
The only question is who might want to, and would be able to, do much of it. It does seem like it could be a fairly straightforward project, though it feels like it would be real work.
It could be partially crowdsourced: people could add links to interviews to a central location as they come across them, quotes could be pulled from news articles, and others could run AI transcription on interviews that lack transcripts. Subtitles can also be downloaded from YouTube videos; a rough sketch of that is below.
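To make the YouTube part concrete, here is a minimal sketch of how subtitle downloading could work, assuming the yt-dlp Python package (`pip install yt-dlp`); the URL and output path are placeholders I've made up for illustration, not anything from this thread:

```python
# Sketch: fetch English subtitles for a list of YouTube interviews
# without downloading the videos themselves, via the yt-dlp Python API.
from yt_dlp import YoutubeDL

interview_urls = [
    "https://www.youtube.com/watch?v=VIDEO_ID",  # placeholder URL
]

opts = {
    "skip_download": True,       # only fetch subtitle files, not video
    "writesubtitles": True,      # uploader-provided subtitles, if any
    "writeautomaticsub": True,   # fall back to YouTube's auto-captions
    "subtitleslangs": ["en"],
    "subtitlesformat": "vtt",
    "outtmpl": "subtitles/%(title)s.%(ext)s",  # placeholder output path
}

with YoutubeDL(opts) as ydl:
    ydl.download(interview_urls)
```

The resulting .vtt files are plain text with timestamps, so quotes could then be searched and cited with rough time offsets into each interview.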
Yes, that comment was made by Lukas Gloor here, when I asked what people thought Sam Altman’s beliefs are.