I think I agree that safety researchers should prefer not to take a purely ceremonial role at a big company if they have other good options, but I’m hesitant to conclude that no one should be willing to do it. I don’t think it is remotely obvious that safety research at big companies is ceremonial.
There are a few reasons why some people might opt for a ceremonial role:
It is good for some AI safety researchers to have access to what is going on at top labs, even if they can’t do anything about it. They can at least keep tabs on it and can use that experience later in their careers.
It seems bad to isolate capabilities researchers from safety concerns. I bet capabilities researchers would take safety concerns more seriously if they ate lunch every day with someone who is worried than if they only talked to each other.
If labs do engage in behavior that is flagrantly reckless, employees can act as whistleblowers. Non-employees can’t. Even if they can’t prevent a disaster, they can create a paper trail of internal concerns which could be valuable in the future.
Internal politics might change and it seems better to have people in place already thinking about these things.
If labs do engage in behavior that is flagrantly reckless, employees can act as whistleblowers.
This is the crux for me.
If some employees actually have the guts to whistleblow on current engineering malpractices, I have some hope left that having AI safety researchers at these labs still turns out to be “net good”.
If this doesn’t happen, then they can keep having conversations about x-risks with their colleagues, but I don’t quite see when they will put up resistance to dangerous tech scaling.
If not now, when?
Internal politics might change
We’ve seen in which direction internal politics change under competitive pressures.
Nerdy intellectual researchers can wait that out as long as they like. That would only confirm my concern here.
If some employees actually have the guts to whistleblow on current engineering malpractices…
There are plenty of concrete practices you can whistleblow on that would be effective in turning society against these companies:
The copying of copyrighted works and personally identifying information without permission (pass the evidence on to publishers and they will have a lawsuit feast).
The exploitation and underpayment of data workers and coders from the Global South (inside information on how OpenAI staff hid that they instructed workers in Kenya to collect images of child sexual abuse, anyone?).
The unscoped misdesign of these systems, and the failure to test them for all the uses the AI company promotes.
The extent of AI hardware’s environmental pollution.
Pick what you’re in a position to whistleblow on.
Be very careful to prepare well. You’re exposing a multi-billion-dollar company. First meet in person with an attorney experienced in protecting whistleblowers.
Once you start collecting information, take photographs with your personal phone rather than screenshots or USB copies, which might be tracked by software. Make sure you’re not in the line of sight of an office camera or webcam. And so on.
Preferably, before you start, talk with an experienced whistleblower about how to maintain anonymity. The more at ease you are with that, the more you can bide your time, carefully collecting and storing information.
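One small, concrete piece of that anonymity hygiene: photos taken with a phone embed EXIF metadata (device model, GPS coordinates, timestamps) that can identify you. Here is a minimal sketch of stripping it before passing anything on, assuming the Python Pillow library (the filenames are placeholders; this only illustrates the idea and is no substitute for advice from an experienced whistleblower):

```python
# Minimal sketch: re-save a photo keeping only its pixel data, so that
# EXIF metadata (device model, GPS coordinates, timestamps) is left behind.
# Assumes the Pillow library; the filenames are placeholders.
from PIL import Image

def strip_exif(src_path: str, dst_path: str) -> None:
    """Write a copy of the image at src_path with no EXIF metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # blank canvas, same size/mode
        clean.putdata(list(img.getdata()))     # copy the raw pixels only
        clean.save(dst_path)

strip_exif("photo_of_document.jpg", "cleaned_photo.jpg")
```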
If you need information to get started, email me at remmelt.ellen[a/}protonmail<d0t>com.
~ ~ ~
But don’t wait until you can see some concrete, dependable sign of “extinction risk”. By that time, it’s too late.