[Question] Is a career in making AI systems more secure a meaningful way to mitigate the X-risk posed by AGI?

Hello everyone,

I’m a software engineer at a FAANG tech company and have recently become interested in (and worried about) the existential risk posed by advanced artificial intelligence. I’ve looked at the AI alignment research roles suggested by 80,000 Hours, but I don’t want to relocate outside the Seattle area and am not interested in leaving industry to pursue a Ph.D.

I don’t do any machine learning work in my current role, but I’m spending the next year self-studying ML. In the meantime, an opportunity has come up to join a new team researching adversarial machine learning attacks and building security products to counter them. It seems plausible that work making today’s ML systems harder to attack could be directly useful, or at least lay a foundation for securing AGI systems down the line. And even if we succeed in aligning an AGI’s values with our intentions, there would still be risks from such a powerful system being hacked or manipulated in some way.
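For anyone unfamiliar with what I mean by “adversarial machine learning attacks,” here’s a rough sketch of one of the simplest examples, the fast gradient sign method (FGSM): nudge an input in the direction that increases the model’s loss, so a tiny, often imperceptible perturbation flips the prediction. This is purely illustrative — the toy model, inputs, and epsilon below are placeholders I made up, not anything from the team I mentioned:

```python
# Minimal FGSM sketch (illustrative toy example, not production code).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x so the model is more likely to misclassify it."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that *increases* the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy demo: a random linear "classifier" on flattened 28x28 inputs.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # fake image with pixel values in [0, 1]
    y = torch.tensor([3])          # fake label
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

Defending against this kind of attack (and its many stronger variants) is the sort of thing the team would work on, and it’s part of why the hacking/manipulation risk above seems real to me even for well-aligned systems.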

What are your thoughts? Is this a promising way to reduce AI x-risk, or is working directly at an AI research lab the only promising path?

Thanks!