If OpenAI doesn’t hire an EA, they will just hire someone else. I’m not sure if you tackle this point directly (sorry if I missed it), but doesn’t it straightforwardly seem better to have someone safety-conscious in these roles rather than someone who isn’t?
To reiterate, removing these roles from the job board wouldn’t make them any less likely to be filled. They would still definitely be filled, just by someone less safety-conscious in expectation. And I’m not sure the person who would get the role would be “less talented” in expectation, because there are just so many talented ML researchers, so I’m not sure removing roles from the job board would slow down capabilities development much, if at all.
I get the sense that your argument is somewhat grounded in deontology/virtue ethics (i.e. “don’t support a bad organization”) but perhaps not so much in consequentialism?
I attempted to address this in the “Isn’t it better to have alignment researchers working there, than not? Are you sure you’re not running afoul of misguided purity instincts?” FAQ section.
I think the evidence we have from OpenAI is that it isn’t very helpful to “be a safety-conscious person there” (i.e. the combination of people leaving who did not find it tractable to be helpful there, and NDAs making the situation hard to reason about; IMO it’s better to default to assuming bad things rather than good things, given the NDAs).
I think it’s especially not helpful if you’re a low-context person who reads an OpenAI job board posting and isn’t going in with a specific plan for operating in an adversarial environment.
If the job posting literally said “to be clear, OpenAI has a pretty bad track record and seems to be an actively misleading environment; take this job if you are prepared to deal with that,” that’d be a different story. (But that’s also a pretty weird job ad, and OpenAI would be rightly skeptical of people coming from that funnel. I think taking a job at OpenAI that is net helpful to the world requires a very strong moral and epistemic backbone while nonetheless still being able to make good-faith, positive-sum trades with OpenAI leadership. Most of the people I know who maybe had those skills have left the company.)
I expect the object-level impact of a person joining OpenAI to be slightly harmful on net (although realistically close to neutral because of replaceability effects). I expect them to be slightly harmful on net because OpenAI is good at hiring competent people and good at funneling them into harmful capabilities work, so the fact that you got hired is evidence you are slightly better at that work than the next person.
It’s insanely hard to have an outsized impact in this world. Of course it’s hard to change things from inside OpenAI, but that doesn’t mean we shouldn’t try. If we succeed, it could mean everything. You’re probably going to have lower expected value pretty much anywhere else IMO, even if it does seem intractable to change things at OpenAI.
Surely this isn’t the typical EA though?
I think job ads in particular are a filter for “being more typical.”
I expect the people who have a chance of doing a good job to be well connected to previous people who worked at OpenAI, with some experience under their belt navigating organizational social scenes while holding onto their own epistemics. I expect such a person to basically not need to see the job ad.
You’re referring to job boards generally, but we’re talking about the 80K job board, which is no typical job board.
I would expect someone who will do a good job to be someone going in wanting to stop OpenAI from destroying the world. That seems like someone who would read the 80K Hours job board. 80K is all about preserving the future.
They of course also have to be good at navigating organizational social scenes while holding onto their own epistemics, which, in my opinion, are skills commonly found in the EA community!
I think EAs vary wildly. I think most EAs do not have those skills; they are very difficult skills to have. Merely caring about the world is not enough.
I think most EAs do not, by default, prioritize epistemics that highly unless they came in through the rationalist scene, and even then, holding onto your epistemics while navigating social pressure is a very difficult skill that even rationalists who specialize in it tend to fail at. (Getting into details here is tricky because it involves judgment calls about individuals, in social situations that are selected for being murky and controversial, but no, I do not think the median EA, or even the 99th-percentile EA, is going to be competent enough at this for it to be worthwhile for them to join OpenAI. I think ~99.5th percentile is the point where it seems even worth talking about, and I don’t think those people get most of their job leads through the job board.)
The person who gets the role is obviously going to be highly intelligent, probably socially adept, and highly qualified, with experience working in AI, etc. OpenAI wouldn’t hire someone who wasn’t.
The question is whether you want this person also to care about safety. If so, I would think advertising on the EA job board would increase the chance of that.
If you think EAs, or people who look at the 80K Hours job board, are for some reason epistemically worse than others, then you will have to explain why, because I believe the opposite.