Thanks for this post, good summary!
I recall reading that in debates like this, the audience usually moves against the majority position.
There’s a simple a priori reason one might expect this: if, to begin with, there are twice as many people who agree with X as disagree with it, then the anti-X side has twice as many people available whom it can plausibly persuade to switch to its view.
If 10% of both groups change their minds, you’d go from 66.6% agreeing to 63.3% agreeing.
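Spelling out the arithmetic (same assumptions as above: 2/3 agree, 1/3 disagree, 10% of each side switches):

$$0.9 \cdot \tfrac{2}{3} + 0.1 \cdot \tfrac{1}{3} = 0.6 + 0.0\overline{3} \approx 0.633,$$

so the agreeing side loses about 3.3 percentage points and the disagreeing side gains the same amount.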
(Note that a debate format gives even a very fringe view equal time, which is far more exposure than it gets in the broader social conversation.)
Would be neat if anyone could google around and see if this is a real phenomenon.
Interesting! This does sound pretty plausible, and it could explain a good share of the move against the original majority position.
Still, this seems unlikely to entirely explain a move to “almost a tie”, if that’s what actually happened. If you start with vote shares p and 1−p, and half of each group switches to the other side, you end up with 0.5p + 0.5(1−p) = 0.5 in each group. Half of each group switching seems pretty extreme,[1] and any smaller (but still equal) share of each group switching would preserve the majority position.
[1] More than half of each group switching sounds crazy: people’s prior positions would then be inversely correlated with their later positions.
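To make the threshold explicit (a small algebra sketch, writing s for the common switching fraction on both sides): the majority’s new share is

$$p' = (1-s)\,p + s\,(1-p) = p - s\,(2p-1),$$

and for p > 1/2 this stays above 1/2 exactly when s < 1/2, hits a tie at s = 1/2, and flips the majority only when s > 1/2.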
Sorry, I thought the difference was more like 4 percentage points? 67-33 to 63-37.
If Robert_Wiblin was right about the proposed causal mechanism, which I’m fairly neutral on, then you just need 0.67x − 0.33x = 0.04, i.e. about an x ≈ 12% (relative) shift from each side, which is very close to Robert’s originally proposed numbers.
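Rearranging that equation: the majority loses 0.67x of the vote share and wins 0.33x back from the other side, so

$$0.67x - 0.33x = 0.34x = 0.04 \;\Rightarrow\; x = \tfrac{0.04}{0.34} \approx 0.118,$$

i.e. roughly a 12% relative shift from each side, as stated.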
Yes, sorry, I didn’t mean to ‘explain away’ any large shift (if it occurred); the anti side may just have been more persuasive here.
I had thought something similar (without having read about it before), given the large percentage of people who were willing to change their minds. But I think the exact size of the shift, if there was one at all, isn’t really important: since there wasn’t a major shift towards x-risk, the debate didn’t go very well from an x-risk perspective.
Imagine you’re telling people that the building you’re in is on fire, that the alarm didn’t go off because of a technical problem, and that they should leave the building immediately. If you then have a discussion and afterwards even a small fraction of people decide to stay in the building, you have “lost” the debate.
In this case, though I was disappointed, I don’t think the outcome is “bad”, because it is an opportunity to learn. We’re just at the beginning of the “battle” over public opinion on AI x-risk, so we should use this opportunity to fine-tune our communications. That’s why I wrote the post. There’s also this excellent piece by Steven Byrnes about the various arguments.