Thank you for writing this, I found it very interesting and helpful. I have something between a belief and a hope that the antagonistic dynamics (which I agree are likely driven by the idea that AI safety is merely speculative) will settle down in the short-ish future as more empirical results emerge on the difficulty of training models with the intended goals (e.g. avoiding sycophancy) and get more widely appreciated. I think many people on the critical side still have the idea of AI safety as grounded largely in thought experiments only loosely connected to current technology.