i highly suggest reading the Sequences.
if you suspect you might be capable of it, you could also start reading about alignment and potentially contributing to it. i know of at least one person who's been doing (imo good) alignment research since they were in high school. many people working on AI catastrophic risk (myself included) believe there's not much time left for you to have a career. you may want to look into those arguments.