Personally, I’d like to see more work done to make it easier for people to get into AI alignment without becoming involved in EA or the rationality community. I think there are lots of researchers, particularly in academia, who might work on alignment but who, for one reason or another, are either rubbed the wrong way by EA/rationality or just don’t vibe with it. And I think we’re missing out on a lot of these people’s contributions.
To be clear, I personally think EA and rationality are great, and I hope EA/rationality continue to be on-ramps to alignment; I just don’t want them to be the ~only on-ramps to alignment.
[I realize I didn’t answer your question literally, since there are some people working on this, but I figured you’d appreciate an answer to an adjacent question.]