Maybe this is true in the EA branch of AI safety. In the wider community, e.g. as represented by those attending IASEAI in February, I believe this is not a correct assessment. Since I began working on AI safety, I have heard many cautious and uncertainty-aware statements along the lines that the outcomes you claim people believe will almost certainly happen are merely likely enough to worry deeply about and work on preventing. I also don't see that community having an AI-centric worldview – they seem to worry about many other cause areas as well, such as inequality, war, pandemics, and climate.
Agreed, I should've made it clearer in the title that I was referring specifically to the AI safety people in EA, i.e. excluding both EAs not working in AI safety and non-EAs working in AI safety.