I was surprised by the mention of Cohere because:
I've only heard of them once before (and just as an example of a lab building large models, rather than as a lab with a major AI safety focus)
Looking at their website now, I do see mention of safety and responsibility, but it sounds more like "ensure that people don't get harmed by near-term human use of Cohere's models" than "contribute to x-risk-reducing AI safety research, and ensure Cohere doesn't change AI timelines, competitive dynamics, etc. in harmful ways".
E.g., I don't see something like OpenAI's "competitive race" clause.
I also don't see anyone on the team page with something like "AI safety" or "AI governance/policy" in their job title.
"Ex-Googlers raise $40 million to democratize natural-language AI" sounds at first glance more risk-increasing than risk-reducing (though the headline wasn't written by Cohere and I haven't read the article)
Would you say that Cohere are in fact much more concerned about and/or active on extreme AI catastrophe risks than their site indicates?
(This comment is just quickly written personal views/confusions. Also feel free to reply via DM if that feels more appropriate.)