1. 80k don't claim to only advertise impactful jobs
They also advertise jobs that help build career impact, and they're not against posting jobs that cause harm (and it's often/always not clear which is which). See more in this post.
They sometimes add features like marking "recommended orgs" (which I endorse!), and sometimes remove those features.
2. 80k's career guide about working at AI labs doesn't dive into "which lab"
See here. Relevant text: "We're really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs."
I think [link to comment] the "which lab" question is really important, and I'd encourage 80k to either be opinionated about it or at least help people make up their minds somehow, rather than leave them hanging on "which lab" while also often recommending that people go work at AI labs, mentioning that this work is often net-negative, and recommending that they reduce the risk by not working "in certain positions unless you feel awesome about the lab".
[I have longer thoughts on how they could do this, but my main point is that it's (imo) an important hole in their recommendation that might be hidden from many readers]
3. Counterfactual / With great power comes great responsibility
If 80k didn't do all this, should we assume there would be no job board and no guides?
I claim that something like a job board has critical mass: Candidates know the best orgs are there, and orgs know the best candidates are there.
Once there's a job board with critical mass, it's not trivial to "compete" with it.
But EAs love opening job boards. A few new EA job boards pop up every year. I do think there would be an alternative. And so the question seems to me to be: how well are 80k using their critical mass?
4. What results did 80k's work actually cause?
First of all: I don't actually know, and a response from someone at 80k would be way better than my guess.
Still, here's my guess, which I think is better than just responding to the poll:
Lots of engineers who care about AI Safety, but don't have a deep understanding of it (and not much interest in spending months learning), go work at AI labs.
Is this positive, because now there are people in the labs who "care", or negative, because the labs have a much easier time hiring people who basically just go and do their job? This is seriously a hard question, but I'd guess "negative" (and I think 80k would agree, but I'm not sure).
I wouldn't be surprised if 80k are directly responsible for a few very important hires.
For example, I think Anthropic's CISO (head of security) used to run security for Chrome. I'm VERY happy Anthropic hired such a person: I think infosec is really important for AI labs, and I wouldn't be surprised if 80k had something to do with this hire, or if not this one, then some other similar role.
I think this is very positive, and maybe more important than all the rest.
[I need to go; my comment seems incomplete, but I hope it's still somewhat helpful, so I'm posting it. I'm still not sure how to vote!]
And there's always the other option that I (unpopularly) believe in: that better publicly available AI capabilities are necessary for meaningful safety research, and thus AI labs have contributed positively to the field.