I think EA is at its best when it takes the high epistemic standards of LW and applies them to altruistic goals. I see the divergence growing, and that worries me.
Can you give me an example of EA using bad epistemic standards and an example of EA using good epistemic standards?
I think EA is at its best when it takes the high epistemic standards of LW and applies them to altruistic goals.
I agree with this.
(I don’t know whether the divergence is growing, shrinking, or staying the same.)