People outside of labs are less likely to have access to the very best models and will have less awareness of where the state of the art is.
Warning shots are somewhat less likely, as highly advanced models may never be deployed externally.
We should expect to know less about where we are in terms of AI progress.
Working at labs to improve safety is perhaps more important than ever, and researchers outside of labs may have little ability to contribute meaningfully.
Whistleblowing and reporting requirements could become more important, since without them governments would have little ability to regulate frontier AI.
Any regulation based solely on deployment (which has been quite common) should be adjusted to take into account that the most dangerous models may be used internally long before they’re deployed.
For what it’s worth, I think the last year was an update against many of these claims. Open source models currently seem closer to the state of the art than they did a year or two ago, and researchers at labs currently seem mostly to be in worse positions to do research than researchers outside labs.
I very much agree that regulations should cover internal deployment, though, and I’ve been discussing risks from internal deployment for years.
Do you have anything you recommend reading on that?
I guess I see a lot of the value of people at labs happening around the time of AGI and in the period leading up to ASI (if we get there). At that point I expect things to be very locked down, such that external researchers don’t really know what’s happening and have a tough time interacting with lab insiders. I thought this recent post from you kind of supported the claim that working inside the labs would be good? I.e., surely 11 people on the inside is better than 10 (and 30 far, far better)?
I do agree OS models help with all this, and I guess it’s true that we kinda know the architecture, and maybe internal models won’t diverge in any fundamental way from what’s available open source. To the extent OS keeps going, warning shots do seem more likely. I guess it’ll be pretty decisive if the PRC lets DeepSeek keep open-sourcing their stuff (I kinda suspect not? But no idea really).
I guess rather than concrete implications, I should indicate these are more ‘updates given more internal deployment’, some of which are pushed back against by surprisingly capable OS models (maybe I’ll add some caveats).