According to this article, only DeepMind gave the UK AI Safety Institute (partial?) access to their model before release. This seems like a pro-social thing to do, so it could be worth tracking in some way if possible.
Yep, that’s related to my “Give some third parties access to models to do model evals for dangerous capabilities” criterion. See here and here.
As I discuss here, it seems DeepMind shared only very limited access with UKAISI (access to a system with safety training and safety filters already applied), so don’t give them too much credit.
I suspect Politico is wrong and the labs never committed to giving early access to UKAISI. (I know you didn’t assert that they committed to that.)
Cool idea, thanks for working on it.