What incentives and mechanisms do you think would be most effective at getting industrial and academic labs to provide structured access to their models?
Good question. A few possible strategies:
(1) Make it really easy. Have accessible software tools out there, so labs don’t have to build everything from scratch (see the sketch after this list for the kind of tooling I have in mind).
(2) Sponsor relevant technical research. I’m especially thinking of research falling under “AI security”. E.g. how easy is model-stealing, given different forms of access?
(3) Have certain labs act as early adopters. They experiment with the best setup and set an example for other labs.
(4) More public advocacy in favour of structured access.
(5) Set up a conference track where there’s a specific role for labs sharing large models in a structured way. The expectations of the content of the paper would be different, e.g. they don’t need to have scientifically interesting findings already. The authors explain everything included, e.g. “we have model checkpoints corresponding to XYZ different points in the training run”. Analogous to a paper that introduces a new dataset.
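To make (1) concrete, here is a minimal, hypothetical sketch of the kind of reusable tooling I mean: a thin gateway that exposes a model through a logged, rate-limited query interface rather than shipping the weights. Everything here (the `StructuredAccessGateway` name, the toy `model_fn`, the per-key quota) is an illustrative assumption, not an existing library.

```python
import time
from collections import defaultdict


class StructuredAccessGateway:
    """Hypothetical wrapper a lab could reuse instead of building access tooling from scratch."""

    def __init__(self, model_fn, max_queries_per_hour=100):
        self.model_fn = model_fn              # the lab's model, kept behind the gateway
        self.max_queries = max_queries_per_hour
        self.query_log = []                   # audit trail of (key, timestamp, prompt)
        self.usage = defaultdict(list)        # per-key timestamps for rate limiting

    def query(self, api_key: str, prompt: str) -> str:
        now = time.time()
        # Keep only this key's queries from the last hour, then enforce the quota.
        recent = [t for t in self.usage[api_key] if now - t < 3600]
        if len(recent) >= self.max_queries:
            raise RuntimeError("Rate limit exceeded for this key")
        self.usage[api_key] = recent + [now]
        self.query_log.append((api_key, now, prompt))  # retained so access can be audited later
        return self.model_fn(prompt)


# Toy usage: any callable stands in for the actual model.
gateway = StructuredAccessGateway(model_fn=lambda p: p.upper(), max_queries_per_hour=3)
print(gateway.query("researcher-123", "hello"))
```

The point is only that the logging, rate limiting, and key management are the same for every lab, so a shared open-source implementation removes most of the setup cost.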
(1) seems worth funding to the extent that it's fundable (for example, as an open-source software project).
I’m less optimistic about public advocacy. As ML models have had a greater impact on people’s lives, there has already been a growing public movement calling for more transparency and accountability around these models (which could include structured access), yet this hasn’t proven a very strong incentive for companies to change how they offer their existing products.
(5) I like a lot; it would fit well with structured evaluation programmes like BIG-Bench.
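As a rough illustration of the "dataset-paper-style" artifact in (5), a lab's submission could include a machine-readable manifest of the checkpoints it makes available through structured access, which an evaluation programme (BIG-Bench-style or otherwise) could then iterate over. The schema, field names, and `run_eval_suite` placeholder below are illustrative assumptions, not any existing standard.

```python
import json

# Hypothetical manifest describing what the lab is offering structured access to.
manifest = {
    "model_family": "example-lm",
    "access": "gated API, no weight download",
    "checkpoints": [
        {"id": "step-010000", "training_tokens": 2e9, "notes": "early training"},
        {"id": "step-100000", "training_tokens": 2e10, "notes": "mid training"},
        {"id": "step-300000", "training_tokens": 6e10, "notes": "final checkpoint"},
    ],
}


def run_eval_suite(checkpoint_id, tasks):
    # Placeholder: in practice this would call the lab's structured-access
    # endpoint for the named checkpoint and score each task.
    return {task: None for task in tasks}


# An evaluation programme could sweep every listed checkpoint the same way.
for ckpt in manifest["checkpoints"]:
    results = run_eval_suite(ckpt["id"], tasks=["arithmetic", "truthfulness"])
    print(ckpt["id"], json.dumps(results))
```

The value of a dedicated conference track is that this kind of description, rather than a novel scientific finding, is itself the contribution being reviewed.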