In general my sense is that most potential advisors would be put off by discussion of AGI or x-risk. Talking about safety in more mainstream terms might get a better reception; for example, Unsolved Problems in ML Safety and X-Risk Analysis for AI Research by Hendrycks et al. both present AI safety concerns in a way that might have broader appeal. Another approach would be presenting specific technical challenges that you want to work on, such as ELK, interpretability, or OOD robustness, which can interest people on technical grounds even if they don’t share your motivations.
I don’t mean to totally discourage x-risk discussion; I’m actually writing a thesis proposal right now and trying to figure out how to productively mention my real motivations. I think it’s tough, but hopefully you can find a way to make it work.
This might be helpful context: https://www.lesswrong.com/posts/SqjQFhn5KTarfW8v7/lessons-learned-from-talking-to-greater-than-100-academics