I did read that compilation of advice, and responded to that in an email (16 May 2023):
“Dear [a],
People will drop in and look at job profiles without reading your other materials on the website. I’d suggest just writing a do-your-research cautionary line about OpenAI and Anthropic in the job descriptions itself.
Also suggest reviewing whether to trust advice on whether to take jobs that contribute to capability research.
Particularly advice by nerdy researchers paid/funded by corporate tech.
Particularly by computer-minded researchers who might not be aware of the limitations of developing complicated control mechanisms to contain complex machine-environment feedback loops.
Totally up to you of course.
Warm regards,
Remmelt”
This is what the article says:
“All that said, we think it’s crucial to take an enormous amount of care before working at an organisation that might be a huge force for harm. Overall, it’s complicated to assess whether it’s good to work at a leading AI lab — and it’ll vary from person to person, and role to role.”
So you are saying that people are making a decision about working for an AGI lab that might be (or actually is) a huge force for harm. And that whether it’s good (or bad) to work at an AGI lab depends on the person – i.e. people need to figure this out for themselves.
Yet you are openly advertising various jobs at AGI labs on the job board. People are clicking through and applying. Do you know how many read your article beforehand?
~ ~ ~
Even if they did read through the article, both the content and framing of the advice seem misguided. Notice what is emphasised in your considerations.
Here are the first sentences of each consideration section (i.e. what readers are most likely to read, and what you might most want to convey):
“We think that a leading — but careful — AI project could be a huge force for good, and crucial to preventing an AI-related catastrophe.”
Is this your opinion about DeepMind, OpenAI and Anthropic?
“Top AI labs are high-performing, rapidly growing organisations. In general, one of the best ways to gain career capital is to go and work with any high-performing team — you can just learn a huge amount about getting stuff done. They also have excellent reputations more widely. So you get the credential of saying you’ve worked in a leading lab, and you’ll also gain lots of dynamic, impressive connections.”
Is this focussing on gaining prestige and (nepotistic) connections as an instrumental power move, with the hope of improving things later...?
Instead of on actually improving safety?
“We’d guess that, all else equal, we’d prefer that progress on AI capabilities was slower.”
Why is only this part stated as a guess?
I did not read “we’d guess that a leading but careful AI project, all else equal, could be a force for good”.
Or inversely: “we think that continued scaling of AI capabilities could be a huge force for harm.”
Notice how those framings come across very differently.
Wait, reading this section further is blowing my mind.
“But that’s not necessarily the case. There are reasons to think that advancing at least some kinds of AI capabilities could be beneficial. Here are a few”
“This distinction between ‘capabilities’ research and ‘safety’ research is extremely fuzzy, and we have a somewhat poor track record of predicting which areas of research will be beneficial for safety work in the future. This suggests that work that advances some (and perhaps many) kinds of capabilities faster may be useful for reducing risks.”
Did you just argue for working on some capabilities because it might improve safety? This is blowing my mind.
“Moving faster could reduce the risk that AI projects that are less cautious than the existing ones can enter the field.”
Are you saying we should consider moving faster because there are people less cautious than us?
Do you notice how a similarly flavoured argument can be used by, and probably is being used by, staff at three leading AGI labs that are all competing with each other?
Did OpenAI moving fast with ChatGPT prevent Google from starting new AI projects?
“It’s possible that the later we develop transformative AI, the faster (and therefore more dangerously) everything will play out, because other currently-constraining factors (like the amount of compute available in the world) could continue to grow independently of technical progress.”
How would compute grow independently of AI corporations deciding to scale up capability?
The AGI labs were buying up GPUs to the point of shortage. Nvidia was not able to supply them fast enough. How is that not pushing Nvidia and other producers to increase GPU production?
More comments on the hardware overhang argument here.
“Lots of work that makes models more useful — and so could be classified as capabilities (for example, work to align existing large language models) — probably does so without increasing the risk of danger”
What is this claim based on?
“As far as we can tell, there are many roles at leading AI labs where the primary effects of the roles could be to reduce risks.”
As far as I can tell, this is not the case.
For technical research roles, you can go by what I just posted.
For policy, I note that you wrote the following:
“Labs also often don’t have enough staff… to figure out what they should be lobbying governments for (we’d guess that many of the top labs would lobby for things that reduce existential risks).”
I guess that AI corporations use lobbyists to open up markets for profit, and to avoid actually being restricted by regulations (maybe to shift the focus to some hypothetical point in the future, maybe to remove upstart competitors who can’t deal with the extra compliance overhead, but don’t restrict us now!).
On priors, that is what you should expect, because that is what tech corporations do everywhere. We shouldn’t expect on priors that AI corporations are benevolent entities that are not shaped by the forces of competition. That would be naive.
~ ~ ~
After that, there is a new section titled “How can you mitigate the downsides of this option?”
That section reads as thoughtful and reasonable.
How about linking to that section from each AGI lab job listing on the job board, just above the ‘VIEW JOB DETAILS’ button?
For example, you could append and hyperlink ‘Suggestions for mitigating downsides’ to the organisational descriptions of Google DeepMind, OpenAI and Anthropic.
That would help guide potential applicants to AGI lab positions to think through their decision.
“This distinction between ‘capabilities’ research and ‘safety’ research is extremely fuzzy, and we have a somewhat poor track record of predicting which areas of research will be beneficial for safety work in the future. This suggests that work that advances some (and perhaps many) kinds of capabilities faster may be useful for reducing risks.”
This seems like an absurd claim. Are 80k actually making it?
EDIT: the claim is made by Benjamin Hilton, one of 80k’s analysts and the person the OP is replying to.
It is an extreme claim to make in that context, IMO.
I think Benjamin made it in an effort to be nuanced. But the nuance in that article is rather one-sided.
If anything, the nuance should be on the side of identifying any ways you might accidentally support the development of dangerous auto-scaling technologies.
First, do no harm.
Note that we are focussing here on decisions at the individual level.
There are limitations to that.
See my LessWrong comment.