I honestly agree with this post. To translate it into my own thinking: we should prefer AI that is superhuman at faithful chain-of-thought (CoT) reasoning over AI that is superhuman at wise forward-pass thinking.
The Peter Singer/Einstein/legible-reasoning cluster corresponds to CoT reasoning, whereas many of the directions toward intuitive, wise, illegible thinking depend on making forward-pass cognition more capable, which is not a great direction for reasons of trust and alignment.