Hi Isaac, I agree with many other replies here. I would just add this:
I think AI alignment research could benefit from a broader range of expertise, beyond the usual ‘AI/CS experts + moral philosophers’ model that seems typical in EA approaches.
Lots of non-AI topics in computer science seem relevant to specific AI risks, such as crypto/blockchain, autonomous agents/robotics, cybersecurity, military/defense applications, computational biology, big data/privacy, social media algorithms, etc. Getting some training in those—especially the topics best aligned with your for-profit business interests—would position you to make more distinctive and valuable contributions to AI safety discussions. In other words, focus on the CS topics relevant to AI safety that are neglected, not just the ones that are important and tractable.
Even further afield, I think a case could be made that studying cognitive science, evolutionary psychology, animal behavior, evolutionary game theory, behavioral economics, political science, etc. could contribute very helpful insights to AI safety—but these fields aren't very well integrated into mainstream AI safety discussions yet.