[Question] What are the coolest topics in AI safety, to a hopelessly pure mathematician?
I am a mathematics grad student. I think that working on AI safety research would be a valuable thing for me to do, if the research were something I felt intellectually motivated by. Unfortunately, whether I feel intellectually motivated by a problem has little to do with what is useful or important; it basically just depends on how cool/aesthetic/elegant the math involved is.
I’ve taken a semester of ML and read a handful (~5) of AI safety papers as part of a Zoom reading group, and thus far none of it appeals. It might be that nothing in AI safety research will be adequately appealing, but it might also be that I just haven’t found the right topic yet. So, to that end: what’s the coolest math involved in AI safety research? What problems might I really like reading about or working on?