Sure, but the examples you gave are more about tactics than content.
What I mean is that there are a lot of people who are downplaying their level of concern about Xrisk in order to not turn off people who don’t appreciate the issue.
I think that can be a good tactic, but it also risks reducing the sense of urgency people have about AI-Xrisk, and can lead to incorrect strategic conclusions, which could even be disastrous when they are informing crucial policy decisions.
TBC, I’m not saying we are lacking in radicals ATM, the level is probably about right. I just don’t think that everyone should be moderating their stance in order to maximize their credibility with the (currently ignorant, but increasingly less so) ML research community.