Stop calling them labs
Note: This started as a quick take, but it got too long so I made it a full post. It's still kind of a rant; a stronger post would include sources and would have gotten feedback from people more knowledgeable than I. But in the spirit of Draft Amnesty Week, I'm writing this in one sitting and smashing that Submit button.
Many people continue to refer to companies like OpenAI, Anthropic, and Google DeepMind as "frontier AI labs". I think we should drop "labs" entirely when discussing these companies, calling them "AI companies"[1] instead. While these companies may have once been primarily research laboratories, they are no longer so. Continuing to call them labs makes them sound like harmless groups focused on pushing the frontier of human knowledge, when in reality they are profit-seeking corporations focused on building products and capturing value in the marketplace.
Laboratories do not directly publish software products that attract hundreds of millions of users and billions in revenue. Laboratories do not hire armies of lobbyists to control the regulation of their work. Laboratories do not compete for tens of billions in external investments or announce many-billion-dollar capital expenditures in partnership with governments both foreign and domestic.
People call these companies labs due to some combination of marketing and historical accident. To my knowledge no one ever called Facebook, Amazon, Apple, or Netflix "labs", despite each of them employing many researchers and pushing a lot of genuine innovation in many fields of technology.
To be clear, there are labs inside many AI companies, especially the big ones mentioned above. There are groups of researchers doing research at the cutting edge of various fields of knowledge, in AI capabilities, safety, governance, etc. Many individuals (perhaps some readers of this very post!) would be correct in saying they work at a lab inside a frontier AI company. It's just not the case that any of these companies as a whole is best described as a "lab". Some actual AI labs include FAR.AI, Redwood Research, METR, and all academic groups. There might be some for-profit entities that I would call labs, but I'm skeptical by default.
OpenAI, Anthropic, and DeepMind are tech companies, pure and simple. Each has different goals and approaches, and the private goals of their departments and employees vary widely, but I believe strongly that thinking of them as tech companies rather than AI laboratories provides clarity and will improve the quality of thinking and discussion within this community.
- ^
When more specificity is needed, "frontier AI companies," "generative AI companies," "foundational AI companies," or similar could also be used.
I agree with this; 80,000 Hours made this change about a year ago.
I expect that "labs" usefully communicates to most of my interlocutors that I'm talking about the companies developing frontier models and not something like Palantir. There's a lot of hype-based incentive for companies to claim to be "AI companies", which creates confusion. (Indeed, I didn't know before I chose Palantir as an example, but of course they're marketing themselves as an AI company.)
That said, I agree with the consideration in your post. I don't claim to know which consideration is bigger, only that they trade off.
I think this is a useful distinction, thanks for raising it. I support terms like "frontier AI company," "company making frontier AI," and "company making foundation models," all of which help distinguish OpenAI from Palantir. Also, it seems pretty likely that within a few years most companies will be AI companies!? So we'll need new terms. I just don't want that term to be "lab".
Another thing you might be alluding to is that "lab" is less problematic when talking to people within the AI safety community, and more problematic the further out you go. I think that, within a community, the terms of art sort of lose their generic connotations over time, as community members build a dense web of new connotations specific to that meaning. I regret to admit that I'm at the point where the word "lab" without any qualifiers at all makes me think of OpenAI!
But code-switching is hard, and if we use these terms internally, we'll also use them externally. Also, external people read things that were intended more for internal audiences, so the language leaks out.
It's also just jargon-y. I call them "AI companies" because people outside the AGI memeplex don't know what an "AI lab" is, and (as you note) if they infer from someone's use of that term that the frontier developers are something besides "AI companies," they'd be wrong!
I agree that the term "AI company" is technically more accurate. However, I also think the term "AI lab" is still useful terminology, as it distinguishes companies that train large foundation models from companies that work in other parts of the AI space, such as companies that primarily build tools, infrastructure, or applications on top of AI models.
I agree that those companies are worth distinguishing. I just think calling them "labs" is a confusing way to do so. If the purpose were only to distinguish them from other AI companies, you could call them "AI bananas" and it would be just as useful. But "AI bananas" is unhelpful and confusing. I think "AI labs" is the same (to a lesser but still important degree).
Unfortunately there's momentum behind the term "AI lab" in a way that is not true for "AI bananas". Also, it is unambiguously true that a major part of what these companies do is scientific experimentation, as one would expect in a laboratory; this makes the analogy to "AI bananas" imperfect.
I think "labs" has the connotation of mad scientists and somebody creating something that escapes the lab, so it has some "good" connotations for AI safety comms.
Of course, depending on the context and audience.
Interesting point! I'd be OK with people calling them "evil mad scientist labs," but I still think the generic "lab" has more of a positive, harmless connotation than this negative one.
I'd also be more sympathetic to calling them "labs" if (1) we had actual regulations around them or (2) they were government projects. Biosafety and nuclear weapons labs have a healthy reputation for being dangerous and unfriendly, in a way "computer labs" do not. Also, private companies may have biosafety containment labs on premises, and the people working within them are lab workers/scientists, but we call the companies pharmaceutical companies (or "Big Pharma"), not "frontier medicine labs".
Also also if any startup tried to make a nuclear weapons lab they would be shut down immediately and all the founders would be arrested. [citation needed]
Seems testable!
Fwiw, I would have predicted that "labs" would lead to more positive evaluations overall, including higher evaluations of responsibility and safety. But I don't think people's intuitions are very reliable about such cases.
I agree overall, but fwiw I think that for the first few years of OpenAI's and DeepMind's existence, they were mostly pursuing blue-sky research with few obvious nearby commercial applications (e.g. training NNs to play video games). I think a lab was a pretty reasonable term, or at least similarly reasonable to calling, say, Bell Labs a lab.
I completely agree that OpenAI and DeepMind started out as labs and are no longer so.
My point was that I don't think it was marketing or a historical accident, and it's actually quite different from the other companies that you named, which were all just straightforward revenue-generating companies from ~day 1.
Ah! Yes, that's a good point and I misinterpreted. That's part of what I meant by "historical accident", but now I think it was confusing to say "accident" and I should have said something like "historical activities".
I think people like the "labs" language because it makes it easier to work with these companies, plus all the reasons you state, which is why I generally say "AI companies". I do find it hard, however, to make myself understood sometimes in an EA context when I don't use it.
I imagine that one reason they are referred to as "labs" is that, to some extent, they are seen as creating a new kind of organism. They aren't just creating a product; they are poking and prodding something most do not fully understand.
This is a good point, though we will probably need to distinguish between several varieties of "AI companies".
"Lab" (currently) means research is happening there (which is correct for the companies you mentioned).
"AI company" right now mostly says someone is doing something that involves AI. If you're building a ChatGPT wrapper, you're an "AI company".
So while I do agree with your point that these companies are no longer just labs (as you mentioned), we need a way to denote that they are companies where major research is happening, in contrast to most companies, which are just building products with AI.
Yes, they're all tech companies. But OpenAI, Anthropic, and DeepMind are obviously the core of a cluster of points in objectspace, and it seems reasonable to look for some name for that cluster (what exactly the cluster denoted by "labs" includes, and whether these points are part of it, is a separate discussion).
I agree that they're worth calling out somehow; I just think "lab" is a misleading way of doing so given their current activities. I've made some admittedly clunky suggestions in other threads here.
Very, very fair point, Sawyer! There's a lot left to be desired in existing AI risk communications, especially to the public/policymakers, so any refinements are very welcome in my book. Great post!
Good point. Word association is misleading in this case.
They are big AGI companies.
And they are worse than big oil companies and big tobacco companies.