In general, I think people are being too conservative about addressing the issue. I think we need some “radicals” who aren’t as worried about losing some credibility. Whether you want to aim for mainstream appeal or just be straightforward with people about the issue is a strategic question that should be considered case by case.
Of course, it is a big problem that talking about AIS makes a good chunk of people think you’re nuts. My impression is that most of those people are researchers, not the general public; the public is actually quite receptive to the idea (although maybe for the wrong reasons...)
I don’t think the issue is that we don’t have any people willing to be radicals and lose credibility. I think the issue is that radicals on a given issue tend to mar the reputations of their more level-headed counterparts as well. Weak men are superweapons, and groups like PETA, Greenpeace, and the Westboro Baptist Church seem to have attached lasting stigma to their causes because people’s pattern-matching minds associate the entire movement with its worst example.
Since, as you point out, researchers specifically grow resentful, it seems really important to make sure radicals don’t tip the balance backward just as the field of AI safety is starting to grow more respectable in the minds of policymakers and researchers.
Sure, but the examples you gave are more about tactics than content.
What I mean is that a lot of people are downplaying their level of concern about X-risk so as not to turn off people who don’t appreciate the issue.
I think that can be a good tactic, but it also risks reducing the sense of urgency people feel about AI X-risk, and it can lead to incorrect strategic conclusions, which could even be disastrous when they inform crucial policy decisions.
TBC, I’m not saying we are lacking in radicals ATM; the level is probably about right. I just don’t think everyone should be moderating their stance in order to maximize their credibility with the (currently ignorant, but increasingly less so) ML research community.
It probably wouldn’t hurt if AI-inclined EAs focused more on getting experts on board. It’s a very bad situation to be in if the vast majority of experts on a topic think an issue you care about is overblown, because 1) tractability goes down the tubes, since most experts actively contradict you, 2) your ability to collaborate with other experts is greatly hampered, since most experts won’t work with you, and 3) it becomes really easy for people to assume you’re a crackpot. I’m also not sure it’s even ‘rational’ for non-experts to get involved until a majority of experts in the field are on board. I mean, if person A has no experience with a topic, and the majority of experts say one thing, but person A gets convinced of the opposite by an expert in the minority, am I wrong in thinking that’s not a great precedent to set?