You seem frustrated that some EAs are working at leading AI labs, because you see that as accelerating AI timelines when we are not ready for advanced AI.
Here are some cruxes that might explain why working at leading AI labs could be a good thing:
Crux 1: We are uncertain about the outcomes of advanced AI
AI could be used to solve many problems, such as poverty and poor health. It is plausible that delaying the technology would harm the people who stand to benefit from it.
Also, accelerating progress toward space colonization could ultimately give us access to a vast amount of resources that we would otherwise never be able to physically reach because of the expansion of the universe. Under some worldviews (which I don't personally share), this is a large penalty to waiting.
Crux 2: Having people concerned about safety inside leading AI labs is important for ensuring responsible deployment
If EAs systematically avoid working for top AI labs, those roles will be filled by less safety-conscious staff.
Safety-conscious researchers and engineers have done incredible work setting up safety teams at OpenAI and DeepMind.
I expect they will also be helpful for coordinating a responsible deployment of advanced AI in the future.
Crux 3: Having a large lead might help avoid race dynamics
If multiple labs are on the brink of transformative AI, they will be incentivized to cut corners to be the first to cross the finish line. Having fewer leading labs can help them coordinate and delay deployment.
Crux 4: There might not be much useful safety research to be done now
Plausibly, AI safety research will need some experimentation and knowledge of future AI paradigms. So there might just not be much you can do to address AI risk right now.
Overall I think crux 2 is very strong, and I give some credence to cruxes 1 and 3. I don't feel very moved by crux 4: I think it's too early to give up on current safety research, if only because the current DL paradigm might already scale to TAI.
In any case, I am enormously glad to have safety-conscious researchers in DM and OpenAI. I think ostracizing them would be a huge error.
I agree “having people on the inside” seems useful. At the same time, it’s hard for me to imagine what an “aligned” researcher could have done at the Manhattan Project to lower nuclear risk. That’s not meant as a total dismissal; it’s just not very clear to me.
> Safety-conscious researchers and engineers have done incredible work setting up safety teams at OpenAI and DeepMind.
I don’t know much about what the successes here have looked like, but I agree this is a relevant and important case study.
> I think ostracizing them would be a huge error.

My other comments better reflect my current feelings here.