I think your perception is spot on. The labs advancing fastest towards AGI also profess to care about safety and do safety research. Within the alignment field, many people consider many other people's research agendas useless. There are varying levels of consensus on different questions: many people are opposed to racing towards AGI, and research directions like interpretability and eliciting latent knowledge are rarely criticized. But in many cases, making progress on AI safety requires having inside-view opinions about what is important and useful.