I’m pretty ignorant about AI risk and honestly tech stuff in general, but I’m trying to learn more. I think AI risk is roughly the #2 or #3 most important issue, but my naive reaction to the EA community’s view in particular was (and sort of still is): if it’s so bad, why don’t they stop? When EA people make a pitch for the importance and urgency of AI risk, they point at AlphaGo, GPT-3, and DALL-E, which are huge advances made possible by OpenAI and DeepMind. Yet 80k and EAG (through the job fairs) actively recruit for non-safety roles at OpenAI and DeepMind, and there are lots of EAs who have worked at them; if anything, they’re looked upon more favorably for doing so. When I asked my AI risk EA friends, to whom I basically 99% defer on AI stuff, why we should be so cushy with people trying to do the thing we’re saying might be the worst thing ever, they explained that other, less safety-conscious AI groups are not far behind. Meta, Microsoft, and “AI groups in China” generally are the ones I’ve heard referred to, each at least 3x. (Though I don’t really get the Microsoft example after hearing about their partnership with OpenAI.)
The if-we-don’t-someone-will argument doesn’t sit very well with me, but I get it. Meta has just released a chatbot called BlenderBot, though, which, even though it’s obviously a different type of endeavor from something like GPT-3, very obviously sucks. Honestly, it’s not a category difference from the AIM chatbots I remember growing up. If someone tried to sell me on impending existential AI risk using this chatbot, I would not be on board. I assume that Meta is announcing BlenderBot because it is a positive example of Meta’s progress in AI. Is that a fair assumption? If not, should I update negatively on Meta’s AI capabilities, and by how much? And by how much should it cause me to update negatively on the if-we-don’t-someone-will argument, both vis-a-vis Meta and in general?
Earnest thanks for any replies.
They (Meta) literally did do it. They open-sourced a GPT-3 clone called OPT. Its 175B-parameter version is the most powerful LM whose weights are publicly available. I have no idea why they released a system as bad as BlenderBot, but don’t let their worst projects distort your impression of their best projects. They’re 6 months behind DeepMind, not 6 years.
GPT-3 was released in June 2020. Meta didn’t release OPT until May 2022. They did this after open-source replications by EleutherAI and others, and after more impressive language models had been released by DeepMind (Gopher, Chinchilla) and Google (PaLM). According to Meta’s own evaluation in Figure 4 of the OPT paper, their model still fails to perform as well as GPT-3.
Meta also recently lost many of their top AI scientists [1]. They disbanded FAIR, their dedicated AI research group, and instead put all of their ML and AI researchers on product-focused teams [2].
Meta seems ~2 years behind OpenAI and DeepMind in the AI race. They are prioritizing video games, not AI, as their central focus for the next 5-10 years. Zuckerberg must have longer timelines than many other people, or else he’d be jumping on this economic opportunity. As best I can tell, OpenAI, DeepMind, and Google Brain are head and shoulders above any other non-Chinese competition, and are therefore the ones responsible for the ongoing race to AGI.
[1] https://www.cnbc.com/amp/2022/04/01/metas-ai-lab-loses-some-key-people.html
[2] https://aibusiness.com/document.asp?doc_id=778013
A recent Scott Alexander blog post covers this:
https://astralcodexten.substack.com/p/why-not-slow-ai-progress
Not quite a direct answer to your question, but it is worth noting that not everyone in EA takes that view of AI capabilities work. I, for one, believe that working on AI capabilities, especially at a top lab like OpenAI or DeepMind, is a terrible idea, and that it should be front and center on our “List of unethical careers”. Working in safety positions at those labs is still highly useful and impactful, imo.
A relevant tweet I saw recently: https://twitter.com/scholl_adam/status/1556989092784615424