What projects to reduce existential risk would you be excited to see someone work on (provided they were capable enough) that don’t already exist?
One thing I’d be interested in seeing is more applications from people outside of the Anglosphere and Western Europe, both for intellectual-diversity reasons and for fairly naive ones: lower cost of living means we can fund more projects, technical talent in those countries may be less tapped, and so on. Sometimes people ask me why we haven’t funded many projects by people from developing countries, and (at least in my view) the short answer is that we haven’t received that many relevant applications.
Personally, I’d like to see more work done to make it easier for people to get into AI alignment without becoming involved in EA or the rationality community. I think there are lots of researchers, particularly in academia, who would potentially work on alignment but who, for one reason or another, are either rubbed the wrong way by EA/rationality or just don’t vibe with it. As a result, I think we’re missing out on a lot of these people’s contributions.
To be clear, I personally think EA and rationality are great, and I hope EA/rationality continue to be on-ramps to alignment; I just don’t want them to be the ~only on-ramps to alignment.
[I realize I didn’t answer your question literally, since there are some people working on this, but I figured you’d appreciate an answer to an adjacent question.]