1. 80k don’t claim to advertise only impactful jobs
They also advertise jobs that help build career capital, and they’re not against posting jobs that cause harm (and it’s often, if not always, unclear which is which). See more in this post.
They sometimes add features like marking “recommended orgs” (which I endorse!), and sometimes remove those features (😿).
2. 80k’s career guide about working at AI labs doesn’t dive into “which lab”
See here. Relevant text: “We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.”
I think [link to comment] the “which lab” question is really important. I’d encourage 80k to either be opinionated about it or at least help people make up their minds somehow, rather than leaving people hanging on “which lab” while also often recommending that people go work at AI labs, noting that such work is often net-negative, and suggesting they reduce the risk by not working “in certain positions unless you feel awesome about the lab”.
[I have longer thoughts on how they could do this, but my main point is that this is (imo) an important hole in their recommendation, and one that might be hidden from many readers.]
3. Counterfactual / With great power comes great responsibility
If 80k weren’t doing all this, should we assume there would be no job board and no guides?
I claim that something like a job board has critical mass: Candidates know the best orgs are there, and orgs know the best candidates are there.
Once there’s a job board with critical mass, it’s not trivial to “compete” with it.
But EAs love opening job boards, and a few new EA job boards pop up every year, so I do think there would be an alternative. The question, then, seems to me to be: how well are 80k using their critical mass?
4. What results did 80k’s work actually cause?
First of all: I don’t actually know, and a response from someone at 80k would be way better than my guess.
Still, here’s my guess, which I think would be better than just responding to the poll:
Lots of engineers who care about AI Safety but don’t have a deep understanding of it (and not much interest in spending months learning) go work at AI labs.
Is this positive, because now there are people in the labs who “care”? Or negative, because the labs have a much easier time hiring people who basically just go and do their jobs? This is a seriously hard question, but I’d guess “negative” (and I think 80k would agree, though I’m not sure).
I wouldn’t be surprised if 80k are directly responsible for a few very important hires.
For example, I think Anthropic’s CISO (head of security) used to run security for Chrome. I’m VERY happy Anthropic hired such a person; I think infosec is really important for AI labs, and I wouldn’t be surprised if 80k had something to do with this hire, or if not this one, then with some other similar role.
I think this is very positive, and maybe more important than all the rest.
[I need to go; my comment seems incomplete, but I hope it’s still somewhat helpful, so I’m posting it. I’m still not sure how to vote!]
And there’s always the other view, which I (unpopularly) hold: that better publicly available AI capabilities are necessary for meaningful safety research, and so AI labs have contributed positively to the field.