I don’t have a great model of the constraints, but my guess is that we’re mostly talent- and mentoring-constrained: we need to make more research progress, but we also don’t have enough mentoring capacity to upskill new researchers (programs like SERI MATS are trying to change this). We also need to translate that research into actual systems, so buy-in from the biggest players seems crucial.
I agree that most safety work isn’t monetizable. Some of it is, e.g. work that makes for a nicer chatbot, but it’s questionable whether that actually reduces x-risk. Afaik the companies that focus the most on safety (in terms of employee hours) are Anthropic and Conjecture. I don’t know how they aim to make money; for the time being, most of their funding seems to come from philanthropic investors.
When I say that safety is in a company’s DNA, I mean that the founders value safety for its own sake, not primarily as a money-making scheme. This would explain why they haven’t shut down their safety teams despite those teams failing to provide immediate monetary value.
People, including engineers, can definitely spend all their time on safety at DeepMind (I can’t speak for OpenAI). Beyond what is publicly available, I obviously can’t comment on my perception of DeepMind leadership’s priorities when it comes to safety and ethics. In terms of raw numbers, I think it’s accurate for both companies that most people are not working on safety.
Thanks a lot! Best of luck with your career development!