The tagline for the job board is: “Handpicked to help you tackle the world’s most pressing problems with your career.” I think that gives the reader the impression that, at least by default, the listed jobs are expected to have a positive impact on the world: that they are better off being done well and faithfully than being left unfilled or filled by incompetent candidates, and so on.
Based on what I take to be Geoffrey’s position here, the best case for listing these positions would be: it could be impactful to fill a position one thinks is net harmful in order to prevent it from being filled by someone else who would cause even more harm. But if that’s the theory of impact, I think one has to be very, very clear with the would-be applicant about what the theory is. I question whether you can do that effectively on a public job board.
For example, if one thinks that working in prisons is a deplorable thing to do, I submit that it would be low integrity to encourage people to work as prison guards by painting that work in a positive light (e.g., handpicked careers to help you tackle the nation’s most pressing social-justice problems).
[The broader question of whether we’re better off with safety-conscious people in these kinds of roles has been discussed in prior posts at some length, so I haven’t attempted to restate that prior conversation.]
A clarification: We would not post roles if we thought they were net harmful and were hoping that somebody would counterfactually do less harm. I think that would be too morally fraught to propose to a stranger.
Relatedly, we would not post a job where we thought that to have a positive impact, you’d have to do the job badly.
We might post roles if we thought the average entrant would make the world worse, but a job board user would make the world better (due to the EA context our applicants typically have!). No cases of this come to mind immediately though. We post our jobs because we consider them promising opportunities to have a positive impact in the world, and expect job board users to do even more good than the average person.
Conor—yes, I understand that you’re making judgment calls about what’s likely to be net harmful versus helpful.
But your judgment calls seem to assume—implicitly or explicitly—that ASI alignment and control are possible, eventually, at least in principle.
Why do you assume that it’s possible, at all, to achieve reliable long-term alignment of ASI agents? I see no serious reason to think that it is possible. And I’ve never seen a single serious thinker make a principled argument that long-term ASI alignment with human values is, in fact, possible.
And if ASI alignment isn’t possible, then all AI ‘safety research’ at AI companies aiming to build ASI is, in fact, just safety-washing. It all increases x-risk by giving a false sense of security and by encouraging capabilities development.
So, IMHO, 80k Hours should re-assess what it’s doing by posting these ads for jobs inside AI companies—which are arguably the most dangerous organizations in human history.
Jason—your reply cuts to the heart of the matter.
Is it ethical to try to do good by taking a job within an evil and reckless industry? To ‘steer it’ in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?
I think not. I think the AI industry is evil and reckless, and EAs would do best to denounce it clearly by warning talented young people not to work inside it.
FWIW my impression of the EA community’s position is that we need to build safe AI, not that we need to stop AI development altogether (although some may hold this view).
Stopping AI development altogether would forgo all the benefits of AI, which could genuinely be extensive and could include helping us with other very pressing problems (global health, animal welfare, etc.).
I do think one can do a tremendous amount of good at OpenAI, and a tremendous amount of harm. I am in favor of roles at AI companies being on the 80,000 Hours job board so that the former is more likely.
JackM—these alleged ‘tremendous’ benefits are all hypothetical and speculative.
The likely x-risks from ASI, by contrast, have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned about them.
This is why I think it’s deeply unethical for 80k Hours to post jobs to work on ASI within AI companies.
I share your concern about x-risk from ASI; that’s why I want safety-aligned people in these roles rather than people who aren’t concerned about the risks.
There are genuine proposals for how to align ASI, so I think it’s possible, even if I’m not sure what the chances are. The most promising proposals involve using advanced AI to assist with oversight, interpretability, and recursive alignment tasks, eventually building a feedback loop in which aligned systems help align more powerful successors.
My view is these roles are going to be filled regardless. Wouldn’t you want someone who is safety-conscious in them?
I don’t agree that the benefits are speculative, by the way. DeepMind researchers have already won the Nobel Prize in Chemistry for their work on protein folding.
EDIT: 80,000 Hours also doesn’t seem to promote all roles, only those which contribute to safety, which seems reasonable to me.