i highly suggest reading the sequences.
if you suspect you might be capable of it, you could also start reading about alignment and potentially contributing to it. i know of at least one person who's been doing (imo good) alignment research since they were in high school. many working on AI catastrophic risk (myself included) believe there's not much time left for you to have a career. you may want to look into those arguments.