I was surprised by the mention of Cohere because:
- I’ve only heard of them once before (and just as an example of a lab building large models, rather than as a lab with a major AI safety focus).
- Looking at their website now, I do see mention of safety and responsibility, but it sounds more like “ensure people aren’t harmed by near-term human use of Cohere’s models” than like “contribute to x-risk-reducing AI safety research, and ensure Cohere doesn’t change AI timelines, competitive dynamics, etc. in harmful ways”.
  - E.g., I don’t see anything like OpenAI’s “competitive race” clause.
  - I also don’t see anyone on the team page with something like “AI safety” or “AI governance/policy” in their job title.
- The headline “Ex-Googlers raise $40 million to democratize natural-language AI” sounds, at first glance, more risk-increasing than risk-reducing (though the headline wasn’t written by Cohere and I haven’t read the article).
Would you say that Cohere are in fact much more concerned about, and/or active on, extreme AI catastrophe risks than their site indicates?
(This comment is just quickly written personal views/confusions. Also feel free to reply via DM if that feels more appropriate.)