Some quick thoughts:
Strong +1 to actually trying and not assuming a priori that you’re not good enough.
If you’re at all interested in empirical AI safety research, it’s valuable to just try to get really good at machine learning research.
An IMO medalist or generic “super-genius” is not necessarily someone who would be a top-tier AI safety researcher, and vice versa.
For trying AI safety technical research, I’d strongly recommend How to pursue a career in technical AI alignment.
Thanks for these points, especially the last one, which I’ve now added to the intro section.