There is absolutely nothing hypocritical about an AI researcher sounding the alarm about the risks of AGI when their own work is either not on the path to AGI or is alignment research. Consider if we had one word, “energy researcher,” that covered all of: a) studying the energy released in chemical reactions, b) developing solar panels, and c) developing methods for fossil fuel extraction. In that situation, it would not be hypocritical for someone from a) or b) to voice concerns about how c) was leading to climate change — even though they would be an “energy researcher” expressing concerns about “energy research.”
Probably the majority of “AI researchers” are in this position. It’s an extremely broad field. Someone can come up with a new probabilistic programming language for Bayesian statistics, or prove some abstruse separation of two classes of MDPs, and wind up publishing at the same conference as the people trying to hook up a giant LLM to real-world actuators.
I thought this was a great point:
Probably the majority of “AI researchers” are in this position. It’s an extremely broad field. Someone can come up with a new probabilistic programming language for Bayesian statistics, or prove some abstruse separation of two classes of MDPs, and wind up publishing at the same conference as the people trying to hook up a giant LLM to real-world actuators.
Thank you! Yeah, I agree that point applies to most AI researchers.