I don’t object to dropping OpenAI safety positions from the 80k job board on the grounds that the people who would be highly impactful in those roles don’t need the job board to learn about them, especially when combined with the other factors we’ve been discussing.
In this subthread I’m pushing back on your broader “I think a job board shouldn’t host companies that have taken already-earned compensation hostage”.
I still think the question of “who is the job board aimed at?” is relevant here, and would like to hear your answer.
As I tried to communicate in my previous comment, I’m not convinced there is anyone who “will have their plans changed for the better by seeing OpenAI safety positions on 80k’s board”, and am not arguing for including them on the board.
EDIT: after a bit of offline messaging I realized I misunderstood Elizabeth; I thought the parent comment was pushing me to answer the question posed in the great-grandcomment, but actually it was accepting my request to bring this up a level of generality and not be specific to OpenAI. Sorry!
I think the board should generally list jobs that, under some combination of values and world models the job board runners consider plausible, could be among the highest-impact opportunities for the right person. In cases like OpenAI's safety roles, where anyone who is the "right person" almost certainly already knows about the role, there's not much value in listing it, but also not much harm.
I think this mostly comes down to a disagreement over how sophisticated we think job board participants are, and I’d change my view on this if it turned out that a lot of people reading the board are new-to-EA folks who don’t pay much attention to disclaimers and interpret listing a role as saying “someone who takes this role will have a large positive impact in expectation”.
If there did turn out to be a lot of people in that category, I'd recommend splitting the board into a visible-by-default section for jobs where, conditional on getting the role, you'll have a high positive impact in expectation (I'd biasedly put the NAO's current openings in this category), and a you-need-to-click-show-more section for jobs where you need to think carefully about whether the combination of you and the role is a good one.