Is this community over-emphasizing AI alignment?

To be honest, I don’t even really know what “AI alignment” is—after skimming the Wikipedia page on it, it sounds like a very broad term for a wide range of problems that arise at very different levels of abstraction. But I do know a smidgen about machine learning and a fair amount about math, and it seems like “AI alignment” is getting a ton of attention on this forum, with loads of people here trying to plan their careers around working on it.

Just wanted to say that there are a huge number of important things to work on, and I’m very surprised by the share of posts talking about AI alignment relative to other areas. Obviously AI is already making an impact and will make a huge impact in the future, so it seems like a good area to study, but something tells me there may be a bit of a “bubble” going on here with the share of attention it’s getting.

I could be totally wrong, but I just figured I’d say what occurred to me as an uneducated outsider. And if this has already been discussed ad nauseam, no need to rehash everything.

Echoing my first point about different levels of abstraction, it may be worth considering whether the various things currently going under the heading of AI alignment should be lumped together under one term. Some of them seem like problems where a few courses in machine learning would be enough to start making progress. Others strike me as quixotic to even think about without many years of intensive math/CS learning under your belt.