I haven’t shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful.
Fourteen months ago, I emailed 80k staff with concerns about how they were promoting AGI lab positions on their job board.
The exchange:
I offered specific reasons and action points.
80k staff replied by referring to their website articles about why their position on promoting jobs at OpenAI and Anthropic was broadly justified (plus they removed one job listing).
Then I pointed out what those articles were specifically missing,
Then staff stopped responding (except to say they were “considering prioritising additional content on trade-offs”).
It was not a meaningful discussion.
Five months ago, I posted my concerns publicly. Again, 80k staff removed one job listing (why did they not double-check before?). Again, staff referred to their website articles as justification to keep promoting OpenAI and Anthropic safety and non-safety roles on their job board. Again, I pointed out what’s missing or off about their justifications in those articles, with no response from staff.
It took the firing of the entire OpenAI superalignment team before 80k staff “tightened up [their] listings”. That is, three years after the first wave of safety researchers left OpenAI.
80k is still listing 33 Anthropic jobs, even as Anthropic has clearly been competing to extend “capabilities” for over a year.