For someone new to looking at AI concerns, can either of you briefly explain why Meta is worse than the others? The biggest difference I’m aware of is that Meta is open source vs the others that are not
Good question. Yeah, Meta AI tends to share their research and model weights while OpenAI, Google DeepMind, and Anthropic seem to be becoming more closed. But more generally, those three labs seem to be concerned about catastrophic risk from AI while Meta does not. Those three labs have alignment plans (more or less), they do alignment research, they are working toward good red-teaming and model evals, they tend to support strong regulation that might be able to prevent dangerous AI from being trained or deployed, their leadership talks about catastrophic risks, and a decent chunk of their staff is concerned about catastrophic risks.
Sorry I don’t have time to provide sources for all these claims.
Not a problem, that’s a good starting point for me to effectively jump into the different reasons and find sources. I appreciate it!