I thought this was a great point.
There is absolutely nothing hypocritical about an AI researcher who works on alignment, or on research that isn’t on the path to AGI, sounding the alarm about the risks of AGI. Consider if we had one word, “energy researcher,” that covered all of: a) studying the energy released in chemical reactions, b) developing solar panels, and c) developing methods for fossil fuel extraction. In that situation, it would not be hypocritical for someone from a) or b) to voice concerns about how c) was driving climate change, even though they would be an “energy researcher” expressing concerns about “energy research.”
Probably the majority of “AI researchers” are in this position. It’s an extremely broad field. Someone can come up with a new probabilistic programming language for Bayesian statistics, or prove some abstruse separation of two classes of MDPs, and wind up publishing at the same conference as the people trying to hook up a giant LLM to real-world actuators.
I think this is an extremely good post laying out why the public discussion on this topic might seem confusing:
https://www.lesswrong.com/posts/BTcEzXYoDrWzkLLrQ/the-public-debate-about-ai-is-confusing-for-the-general
It might be somewhat hard to follow, but this little prediction market is interesting (I wouldn’t take the numbers too seriously):
In December of last year, it seemed plausible to many people online that by now, August 2023, the world would be a very strange, near-apocalyptic place full of inscrutable alien intelligences. Obviously, that didn’t happen. So it could be worth comparing others’ “vibes” here to your own thought process, to check whether you’re overestimating the rate of progress.
Paying for GPT-4, if you have the budget, may also help you calibrate. It’s magical, but you run into embarrassing failures pretty quickly, and most commentators rarely mention those.