“it usually isn’t actually very important if someone is wrong on the internet”

It USUALLY isn’t, but certain perspectives published in public spaces (e.g. Facebook groups with thousands of members) and left unchallenged are a PR risk.
PR risk is a whole topic in itself, and there are some tough questions here. One comment, though: we need to be wary that acting to prevent PR damage can actually encourage more people to put pressure on you, since they’ve seen that you are vulnerable.
I’m glad someone mentioned this. I have a strong alief that misrepresenting your opinions to make them more palatable is a bad idea if you’re right. It pulls you into a bad equilibrium.
If you preach the truth, you might lose the respect of those who are wrong, but you will gain the respect of those who are right, and those are the people you want in your community.
Having said that, you really do have to be right, and I feel like not even EAs are up to the herculean task of seeing clearly past their political intuitions. I, for one, have so far been wrong about many things that felt obvious to me.
I guess that’s why we focus on meta-truth instead. It seems that the set of rules that arrives at truth is much more easily described than the truth itself.
Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I’m missing)? I’m having a hard time understanding the mechanism through which this occurs.
It’s not necessarily as intentional as that. Some people have certain political goals. They can achieve those goals co-operatively, by engaging people in civil discussion, or adversarially, by protesting and creating negative publicity. If the latter tends to be successful, a greater proportion of people will be drawn towards it. Is that clearer?
Not for me! I really don’t understand what you mean.
I think I get the idea:
Suppose (heaven forbid) a close relative has cancer, and there’s a new therapy which fractionally improves survival. The NHS doesn’t provide it on cost-effectiveness grounds. If you look around and see that the NHS often provides treatments it previously ruled out when enough public sympathy can be aroused, you might be inclined to try to do the same. If instead you see it is pretty steadfast (“We base our allocation on ethical principles, and only change this when we find we’ve made a mistake in applying them”), you might not be, or you might at least change your strategy to show that the decision the NHS has made for your relative is unjust rather than unpopular.
None of this requires you to be acting in bad faith, looking for ways to extort the government; you’re just trying to do everything you can for a loved one (the motivations of pharmaceutical companies that sponsor patient advocacy groups may be less unalloyed). Yet, ideally, the government wants to encourage protests that highlight a policy mistake, and to discourage protests over cases where it has done the right thing for its population but gone against the interests of a powerful, photogenic, or popular constituency. ‘Caving in’ to the latter type pushes in the wrong direction.
(That said, back in EA-land, I think a lot of things that are ‘PR risks’ for EA look bad because they are bad (e.g. in fact mistaken, morally abhorrent, etc.), so although PR considerations aren’t a sufficient reason on their own to discourage something, they can further augment concern.)
Thank you!