I think in an ideal world, you could confidently say that “there is value in trying to interact with AI researchers outside of the AI Alignment bubble”, not only so you can figure out cruxes and better convince them, but because you might learn that they are right and we are wrong. I don’t know whether you believe that, but it seems not only true but also to follow very strongly from our movement’s epistemic ideals of being open-minded and following evidence and reason where they lead.
If you felt that you would get pushback for suggesting that there’s an outside view on which AGI Alignment cause-area skeptics might be right, I hope you are wrong; but if many other people feel that way, it indicates some kind of epistemic problem in our movement.
Any time someone feels there’s something critical they can’t say, even when speaking in good faith and trying to use evidence and reason to do the most good, that’s a potential epistemic failure mode we need to guard against.
Thanks for the reminder about the movement’s ideal of open-minded epistemics. To clarify, I do spend a lot of time reading posts from people who are concerned about AI Alignment, and talking to multiple “skeptics” made me realize things I had not properly considered before, teaching me where AI Alignment arguments might be wrong or simply overconfident.
(FWIW, I did not feel any pushback on the EAF when suggesting that skeptics might be right, and, to be clear, that was not my intention. The goal was simply to showcase a methodology for facilitating a constructive dialogue between the Machine Learning and AI Alignment communities.)