What do you think of applying Open Phil’s outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early-career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing global catastrophic biological risks (GCBRs)?