If, after Arete, someone without a background in AI decides that AI safety is the most important issue, then something has likely gone wrong.
I would like to second the objection to this. I feel that most intros to AI safety, such as AGISF, are detached enough from technical AI details that one could do the course without a prior AI background.
(This isn’t an objection to the epistemics of picking up a non-mainstream cause area quickly, but to the claim that an AI background is needed to do so.)
I guess I’m unclear about what sort of background is important. ML isn’t actually that sophisticated, as it turns out. It could have been, but “climb a hill” or “an automaton annotated with probability distributions and rewards” just doesn’t rely on more than a few semesters of math.
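To make the point concrete, here is a minimal sketch of the two ideas gestured at above: "climb a hill" as gradient descent on a one-dimensional function, and "an automaton with probability distributions and rewards" as a tiny Markov decision process. The function, states, and numbers are purely illustrative, not drawn from any particular course.

```python
# "Climb a hill": gradient descent on f(x) = (x - 3)^2.
# The gradient is 2(x - 3); stepping against it walks downhill to the minimum.
x = 0.0
for _ in range(100):
    grad = 2 * (x - 3)  # derivative of (x - 3)^2
    x -= 0.1 * grad     # small step downhill
# x is now very close to the minimizer, 3.0

# "An automaton with probability distributions and rewards": a two-state
# Markov decision process as plain dictionaries (illustrative numbers).
# mdp[state][action] -> list of (probability, next_state, reward)
mdp = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 0.5)],
           "go":   [(1.0, "s0", 0.0)]},
}

def expected_reward(state, action):
    """One-step expected reward of taking `action` in `state`."""
    return sum(p * r for p, _, r in mdp[state][action])
```

Neither piece needs more than introductory calculus and probability, which is the sense in which the core objects of ML are mathematically accessible.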