I think EA is at its best when it takes the high epistemic standards of LW and applies them to altruistic goals. I see the divergence growing, and that worries me.
I think EA is at its best when it takes the high epistemic standards of LW and applies them to altruistic goals.
I agree with this.
(I don’t know whether the divergence is growing, shrinking, or staying the same.)
Can you give me an example of EA using bad epistemic standards and an example of EA using good epistemic standards?