The broader question I’m confused about is how much to update on the local/object-level question of whether the labs are doing “kind of reasonable” stuff, vs. what their overall incentives and positions in the ecosystem point them toward doing.
eg your site puts OpenAI and Anthropic as the least-bad options based on their activities, but from an incentives/organizational perspective, their place in the ecosystem is just really bad for safety. Contrast with, e.g., being situated within a large tech company[1] where having an AI scaling lab is just one revenue source among many, or Meta’s alleged “scorched-earth” strategy, where they are trying very hard to commoditize the LLM layer.
eg GDM employees hold Google/Alphabet stock, so most of the variance in their earnings isn’t going to come from AI, at least in the short term.