[Question] By how much should Meta’s BlenderBot being really bad cause me to update on how justifiable it is for OpenAI and DeepMind to be making significant progress on AI capabilities?

I’m pretty ignorant about AI risk and honestly tech stuff in general, but I’m trying to learn more. I think AI risk is like the #2 or #3 most important thing, but my naive reaction to the EA community’s view in particular was (and sort of still is): if it’s so bad, why don’t they stop? When EA people make a pitch for the importance and urgency of AI risk, they point to AlphaGo, GPT-3, and DALL-E, which are huge advances made possible by DeepMind and OpenAI. Yet 80k and EAG (through the job fairs) actively recruit for non-safety roles at OpenAI and DeepMind, lots of EAs have worked at them, and if anything they’re looked upon more favorably for doing so. When I asked my AI risk EA friends, whom I basically 99% defer to on AI stuff, why we should be so cozy with people trying to do the thing we’re saying might be the worst thing ever, they explained that other, less safety-conscious AI groups are not far behind. Meta, Microsoft, and “AI groups in China” generally are the ones I’ve heard referred to, each at least 3x. (Though I don’t really get the Microsoft example after hearing about their partnership with OpenAI.)

The if-we-don’t-someone-will argument doesn’t sit very well with me, but I get it. Meta just released a chatbot called BlenderBot, though, which, even though it’s obviously a different type of endeavor from something like GPT-3, very obviously sucks. Honestly, it’s not a category difference from the AIM chatbots I remember growing up. If someone tried to sell me on impending existential AI risk using this chatbot, I would not be on board. I assume Meta is announcing BlenderBot because it’s a positive example of Meta’s progress on AI, though. Is that a fair assumption? If not, should I negatively update on Meta’s AI capabilities, and by how much? And by how much should it cause me to negatively update on the if-we-don’t-someone-will argument, both vis-à-vis Meta and in general?

Earnest thanks for any replies.