(Not necessarily a criticism of this post, but) I want to note that some (maybe 20%?) of these roles seem probably-net-negative to me, and I think there are big differences in effectiveness among the rest.
Maybe I’m wrong, but make sure to think carefully about finding a job that has a big positive impact, not just getting a job at a (more or less) EA-aligned organization!
I agree with this. Please don’t work in AI capabilities research, and in particular don’t work at labs directly trying to build AGI (e.g. OpenAI or Deepmind). There are few jobs that cause as much harm, and historically the EA community has already caused great harm here. (There are some arguments that people can make the processes at those organizations safer from the inside, but I’ve only heard negative things from people in non-safety roles who tried to do this, and I don’t currently think you will have much success changing organizations like that from a ground-level engineering role.)
Do you think this is the case for Deepmind / OpenAI’s safety teams as well, or does this only apply to non-safety roles within these organisations?
I don’t think this is true for the safety teams at Deepmind, but I think it was true for some, though not all, of the safety team at OpenAI (I don’t know what the current safety team at OpenAI is like, since most of it left for Anthropic).
Thanks for sharing. It seems like the most informed people in AI safety have strongly changed their views on the impact of OpenAI and Deepmind compared to only a few years ago. Most notably, I was surprised to see ~all of the OpenAI safety team leave for Anthropic. This shift and the reasoning behind it have been fairly opaque to me, although I try to keep up to date. Clearly there are risks in publicly criticizing these important organizations, but I’d be really interested to hear more about this update from anybody who understands it.
Thanks for the comment and for further clarifying OP’s point. This is an important perspective. I have edited the post to refer to your comment.
Would you like to share a link to some discussion of this, for those who would like to read more?
Which roles specifically seem net-negative to you?
Not OP, but I’m guessing it’s at least unclear for the non-safety positions at OpenAI that are listed, though it depends a lot on what a person would do in those positions. (I think they are not necessarily good “by default”, so people working in these positions would have to be more careful/proactive to make them net-positive. I still think they could be great.) The same goes for many similar positions on the sheet, but I’m pointing out OpenAI since a lot of roles there are listed. For some of the roles, I don’t know enough about the org to judge.
Thanks. I don’t have a personal opinion on this, but I’ve adapted the list to show which of the OpenAI positions were listed on the 80k job board and which were not. I would point out that 80k lists OpenAI as an org they recommend.