I think we should imagine two scenarios, one where I see the demonic possession people as being “on my team” and the other where I see them as being “against my team”.
To elaborate, here’s yet another example: Concerned Climate Scientist Alice responding to statements by environmentalists of the Gaia / naturalness / hippy-type tradition. Alice probably thinks that a lot of their beliefs are utterly nuts. But it’s pretty plausible that she sees them as kinda “on her side” from a vibes perspective. (Hmm, actually, also imagine this is 20 years ago; I think there’s been something of a tribal split between pro-tech environmentalists and anti-tech environmentalists since then.) So Alice would probably make somewhat diplomatic statements, emphasizing areas of agreement, etc. Maybe she would say “I think they have the right idea about deforestation and many other things, although I come at it from a more scientific perspective. I don’t think we should take the Gaia idea too literally. But anyway, everyone agrees that there’s an environmental crisis here…” or something like that.
In your demon example, imagine someone saying “I think it’s really great to see so many people questioning the narrative that the police are always perfect. I don’t think demonic possession is the problem, but y’know why so many people keep talking about demonic possession? It’s because they can see there’s a problem, and they’re angry, and they have every right to be angry because there is in fact a problem. And that problem is police corruption…”.
So finally back to the AI example, I claim there’s a strong undercurrent of “The people talking about AI x-risk, they suck, those people are not on my team.” And if there weren’t that undercurrent, I think most of the x-risk-doesn’t-exist people would have at worst mixed feelings about the x-risk discourse. Maybe they’d be vaguely happy that there are all these new anti-AI vibes going around, and they would try to redirect those vibes in the directions that they believe to be actually productive, as in the above examples: “I think it’s really great to see people across society questioning the narrative that AI is always a force for good and tech companies are always a force for good. They’re absolutely right to question that narrative; that narrative is wrong and dangerous! Now, on this specific question, I don’t think future AI x-risk is anything to worry about, but let’s talk about AI companies stomping on copyright law…”
Very different vibe, right? Much less aggressive trashing of AI x-risk than what we actually see from some people.
To be clear, in a perfect world, people would ignore vibes and stay on-topic and at the object level, and Alice would just straightforwardly say “My opinion is that Gaia is pseudoscientific nonsense” instead of sanewashing it and immediately changing the subject, and ditto with the demon person and the other imaginary people above. I’m just saying what often happens in practice.
Back to your example, I think it’s far from obvious that the number of articles about police corruption would go down in absolute terms, although it would obviously go down as a fraction of police articles. It’s also far from obvious that this situation would make it harder rather than easier to get anti-corruption laws passed, or to fundraise.
Great reply! In fact, I think that the speech you wrote for the police reformer is probably the best way to advance the police corruption cause in that situation, with one change: they should be very clear that they don’t think that demons exist.
I think there is an aspect where the AI risk skeptics don’t want to be too closely associated with ideas they think are wrong: because if the AI x-riskers are proven to be wrong, they don’t want to go down with the ship. I.e., if another AI winter hits, or an AGI is built that shows no sign of killing anyone, then everyone who jumped on the x-risk train might look like fools, and they don’t want to look like fools (for both personal and cause-related reasons).
I think there definitely is an aspect of “AI x-risk people suck”, but I worry that casting it as a team-sports thing makes it seem overly irrational. When Timnit Gebru says that AI x-risk people suck, she’s saying they are net negative: they do far more harm in promoting the incorrect x-risk idea and in the actions they take (for example, helping start OpenAI) than they do incidental good in raising AI ethics awareness. You might think this belief is wrong, but the resulting actions make perfect sense, given this belief.
To modify the Gaia example, it’d be like if the Gaia people were trying to block all renewable energy construction because it interrupted the chakras of the earth, and also loudly announcing that an earth spirit will become visible to the whole planet in 5 years. Yes, they are objectively increasing attention to your actual cause, but debunking them is still the correct move here. They’ve moved from being on your team to not on your team because of object-level disagreements over which beliefs are true and which actions should be taken.