One said climate change, two said global health, and two said AI safety. Neither of the two who said AI safety had any background in AI. If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong. (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky. By 'mainstream', I mean a cause area on which someone would have a high prior.)
This seems like a strange position to me. Do you think people have to have a background in climate science to decide that climate change is the most important problem, or development economics to decide that global poverty is the moral imperative of our time? Many people will not have a background relevant to any major problem; are they permitted to have any top priority?
I think (apologies if I am misunderstanding you) you try to get around this by suggesting that 'mainstream' causes can have much higher priors and lower evidential burdens. But that just seems like deference to wider society, and the process by which mainstream causes became dominant does not seem very epistemically reliable to me.
If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong
I would like to second this objection. Most intros to AI safety, such as AGISF, are detached enough from technical AI details that one could complete the course without any prior AI background.
(This isn't an objection to the epistemics of quickly adopting a non-mainstream cause area, but to the claim that an AI background is needed to do so.)
I guess I'm unclear about what sort of background is important. ML isn't actually that sophisticated, as it turns out (it could have been), but "climb a hill" or "think of an automaton, but with probability distributions and annotated with rewards" just don't rely on more than a few semesters of math.
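To make the "climb a hill" point concrete, here is a minimal sketch of gradient descent; the objective function, step size, and iteration count are arbitrary choices for illustration, not anything from a particular course:

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to find a local minimum."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(minimum)  # converges near 3.0
```

The entire idea fits in a few lines and requires only first-semester calculus, which is roughly the point being made.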
2⁄5 doesn’t seem like very strong evidence of groupthink to me.
I also wouldn’t focus on their background, but on things like whether they were able to explain the reasons for their beliefs in their own words or tended to simply fall back on particular phrases they’d heard.