Why do you think you’d need to “force yourself”? More specifically, have you tested your fit for any sort of AI alignment research?
If not, I would start there! For example: I have no CS background, I’m not STEM-y (I was a Public Policy major), and I told myself I wasn’t the right kind of person to work on technical research. But I felt like AI safety was important enough that I should give it a proper shot, so I spent some time coming up with ELK proposals, starting the AGISF curriculum, and thinking about open questions in the field. I ended up, surprisingly, feeling like I wasn’t the most terrible fit for theoretical research!
If you’re interested in testing your fit, here are some resources:
How to do theoretical research, a personal perspective
(Even) More Early-Career EAs Should Try AI Safety Technical Research
How to pursue a career in technical AI alignment
I could also connect you with other people who want to test their fit, if that would be helpful. In my experience, it’s useful to have like-minded people supporting you!
Finally, +1 to what Kirsten is saying: my approach to career planning is very much “treat it like a science experiment,” which means exploring a lot of different hypotheses about which path is most impactful for you (factoring in personal fit, etc.).
edit: Here are also some scattered thoughts about other factors that you’ve mentioned:
“I also have an interest in starting a for-profit company, which couldn’t happen with AGI alignment (most likely).”
FWIW, the leading AI labs (OpenAI, Anthropic, and I think DeepMind) are all for-profits. How much they contribute to safety efforts might be contested, but they do have alignment teams.
“would the utility positive thing to do be to force myself to get an ML alignment focused PhD and become a researcher?”
What do you mean by “utility positive”—utility positive for whom? You? The world writ large?
“Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?”
I don’t think anyone can answer this besides you. : ) I also think there are at least three questions here (besides the question of what you’re good at / what you like, which imo is best addressed by testing your fit):
How important do you think AI alignment is? How confident are you in your cause prioritization?
How demanding do you want EA to be?
How much should you defer to others?