I agree with the “fresh sheet of paper.” Reading the alignment faking paper and the current alignment challenges has been way more informative than reading Yudkowsky.
I think these circles have granted him too many Bayes points for predicting the alignment problem when, as you said, the technical details of his version of it basically don't apply to deep learning.