Thank you for taking the time to share your thoughts in detail; it was extremely helpful for understanding the field better. It also helped me pinpoint some options for next steps. For now, I’m relearning ML/DL and have decided to sign up for the Introduction to ML Safety course.
I had a few follow-up questions, if you don’t mind:
What are those 100 people going to do? Either start their own org, work independently, or try to find some other team.
That seems like a reasonable explanation. My impression was that the field was very talent-constrained, but you make it seem like neither talent nor funding is the bottleneck. What do you think are the current constraints for AI safety work?
My confusion about the funding/private company situation stems from my (likely incorrect) assumption that AI safety solutions are not very monetizable. How would a private company focusing primarily on AI safety make a steady profit?
You mentioned that OpenAI, DeepMind, and Anthropic "have the safety of AI systems as an important part of their founding DNA."
I currently view OpenAI and DeepMind more like AI product companies, with “ethics and safety as considerations but not as the focus.” Does this seem accurate to you? Do engineers (in general) have the option to focus mainly on safety-related projects in these companies?
Separately, I’d also like to wish the DeepMind alignment team the best of luck! I was surprised to find that it has grown much more than I thought in recent years.
I don’t have a great model of the constraints, but my guess is that we’re mostly talent- and mentoring-constrained: we need to make more research progress, and we don’t have enough mentoring to upskill new researchers (programs like SERI MATS are trying to change this, though). We also need to be able to translate that progress into actual systems, so buy-in from the biggest players seems crucial.
I agree that most safety work isn’t monetizable. Some of it is, e.g. work that makes for a nicer chatbot, but it’s questionable whether that actually reduces x-risk. Afaik the companies that focus the most on safety (in terms of employee hours) are Anthropic and Conjecture. I don’t know how they aim to make money; for the time being, most of their funding seems to come from philanthropic investors.
When I say that it’s in a company’s DNA, I mean that the founders value safety for its own sake and not primarily as a money-making scheme. This would explain why they haven’t shut down their safety teams even after those teams failed to provide immediate monetary value.
People, including engineers, can definitely spend all their time on safety at DeepMind (I can’t speak for OpenAI). I obviously can’t comment on my perception of DeepMind leadership’s priorities around safety and ethics beyond what is publicly available. In terms of raw headcount, I think it’s accurate for both companies that most people are not working on safety.
Thanks a lot! Best of luck with your career development!