Tentative implications:
- People outside of labs are less likely to have access to the very best models and will be less aware of where the state of the art is.
- Warning shots are somewhat less likely, since highly advanced models may never be deployed externally.
- We should expect to know less about how far AI progress has actually gotten.
- Working at labs is perhaps more important than ever for improving safety, and researchers outside of labs may have little ability to contribute meaningfully.
- Whistleblowing and reporting requirements could become more important, since without them governments would have little ability to regulate frontier AI.
- Any regulation based solely on deployment (which has been quite common) should be adjusted to account for the fact that the most dangerous models may be used internally long before they are deployed.
For what it’s worth, I think the last year was an update against many of these claims. Open-source models currently seem closer to the state of the art than they did one or two years ago. And at the moment, researchers at labs mostly seem to be in a worse position to do research than researchers outside of labs.
That said, I very much agree that regulations should cover internal deployment, and I’ve been discussing risks from internal deployment for years.
I agree with you that people seem to somewhat overrate getting jobs at AI companies.
However, I do think there’s good work to do inside AI companies. Currently, a lot of the quality-adjusted safety research happens inside AI companies. And see here for my rough argument that it’s valuable to have safety-minded people inside AI companies at the point where they develop catastrophically dangerous AI.