I found it valuable to hear information from the debrief meeting, and I agree with some of what you said—e.g. that it a priori seems plausible that implicit threats played at least some role in the decision. However, I’m not sure I agree with the extent to which you characterize the relevant incentives as threats or blackmail.
I think this is relevant because talk of blackmail suggests an appeal to clear-cut principles like “blackmail is (almost) always bad”. Such principles could ground criticism that’s independent from the content of beliefs, values, and norms: “I don’t care what this is about, structurally your actions are blackmail, and so they’re bad.”
I do think there is some force to such criticism in cases of so-called deplatforming, including the case discussed here. However, I think that most conflict about such cases (between people opposing “deplatforming” and those favoring it) is not explained by different evaluations of blackmail, or by different views on whether certain actions constitute blackmail. Instead, I think these are mostly garden-variety cases of conflicting goals and beliefs that lead to different takes on certain norms governing discourse, norms that are mostly orthogonal to blackmail. I do have relevant goals and beliefs as well, and so have an opinion on the matter, but I don’t think it’s coming from a value-neutral place.
So I don’t think there’s one side either condoning blackmail or being unaware it’s committing blackmail, versus another side condemning it. I think there’s one side who wants a norm with an extremely high bar for physically disrupting speech in certain situations versus another who wants a norm with a lower bar, one side who wants to treat issues independently versus one who wants to link them together, etc. And if I wanted to decide which side I agree with in a given instance, I wouldn’t try to locate blackmail (not because I don’t think blackmail is bad, but because I don’t think this is where the sides differ). Instead, I’d ask myself who has goals more similar to mine, and whether the beliefs linking actions to goals are correct: e.g., what consequences would one norm have versus the other, how much do physical disruptions violate ‘deontological’ constraints and are there alternatives that wouldn’t, and would physically disrupting more speech in one sort of situation increase or decrease physical or verbal violence elsewhere?
Below I explain why I think blackmail isn’t the main issue here.
--
I think a central example of blackmail, as the term is commonly used, is something like this:
Alice knows information about Bob that Bob would prefer not to be public. Alice doesn’t independently care about Bob or who has access to this information. Alice just wants generic resources such as money, which Bob happens to have. So Alice tells Bob: “Give me some money or I’ll disclose this information about you.”
I think some features that contribute to making this an objectionable case of blackmail are:
Alice doesn’t get intrinsic value from the threatened action (and so it’ll be net costly to Alice in isolation, if only because of opportunity cost).
There is no relationship between the content of the threat or the threatened action on one hand and Alice’s usual plans or goals on the other.
By the standards of common-sense morality, Bob did not deserve to be punished (or at least not as severely) and Alice did not deserve gains because of the relevant information or other previous actions.
Similar remarks apply to robbing at knifepoint or kidnapping.
Do they also apply to the actions you refer to as threats to EA Munich? You may have information suggesting they do, and in that case I’d likely agree they’d be commonly described as threats. (Only “likely” because new information could also update my characterization of threats, which was quite ad hoc.)
However, my a priori guess would be that the alleged threats in the EA Munich case exhibited the above features to a much smaller extent. (This applies in particular to the alleged threat of disaffiliation, and to a lesser but still substantial extent to the threats of disrupting the event.) Instead, I’d mostly expect things like:
Group X thinks that public appearances by Hanson are a danger to some value V they care about (say, gender equality). So in some sense they derive intrinsic value from reducing the number of Hanson’s public appearances.
A significant part of Group X’s mission is to further value V, and they routinely take other actions for the stated reason to further V.
Group X thinks that according to moral norms (that are either already in place or Group X thinks should be in place) Hanson no longer deserves to speak publicly without disruptions.
To be clear, I think the difference is gradual rather than black-and-white, and I imagine that in the EA Munich case some of these “threat properties” were present to some extent, e.g.:
Group X doesn’t usually care about the planned topic of Hanson’s talk (tort law).
Whether or not Group X agrees, by the standards of common-sense morality and widely shared norms it is at least controversial whether Hanson should no longer be invited to give unrelated talks, and some responses, such as physically disrupting the talk, would arguably violate widely shared norms. (Part of the issue is that some of these norms are themselves contested, with Group X aiming to change them and others defending them.)
Possibly some groups Y, Z, … are involved whose main purpose is at first glance more removed from value V, but these groups nevertheless want to further their main mission in ways consistent with V, or they think it’s useful to signal they care about V either intrinsically or as a concession to perceived outside pressure.
To illustrate the difference, consider the following hypotheticals, which I think would much less (or not at all) be referred to as blackmail or threats by common standards. If we abstract away from the content of values and beliefs, then I expect the alleged threats to EA Munich to be in some ways more similar to these hypotheticals, and some to be overall quite similar to the first:
The Society for Evidence-Based Medicine has friendly relations and some affiliation with the Society of Curious Doctors. Then they learn that the Curious Doctors plan to host a talk by Dr. Sanhon on a new type of scalpel to be used in surgery. However, they know that Dr. Sanhon has in the past advocated for homeopathy. While this doesn’t have any relevance to the topic of the planned talk, they have been concerned for a long time that hosting pro-homeopathy speakers at universities provides a false appearance of scientific credibility for homeopathy, which they believe is really harmful and antithetical to their mission of furthering evidence-based medicine. They haven’t encountered a similar case before, so they don’t have a policy in place for how to react; after an ad-hoc discussion, they decide to inform the Curious Doctors that they plan to [disrupt the talk by Sanhon / remove their affiliation]. They believe the responses they’ve discussed would be good to do anyway if such talks happen, so they think of their message to the Curious Doctors more as an advance notice out of courtesy than as a threat.
Alice voted for Republican candidate R. Nashon because she hoped he would lower taxes. She’s otherwise more sympathetic to Democratic policies, but cares most about taxation. Then she learns that Nashon has recently sponsored a tax increase bill. She writes to Nashon’s office that she’ll vote for the Democrats next time unless Nashon reverses his stance on taxation.
A group of transhumanists is concerned about existential risks from advanced AI. If they knew that no-one was going to build advanced AI, they’d happily focus on some of their other interests such as cryonics and life extension research. However, they think there’s some chance that big tech company Hasnon Inc. will develop advanced AI and inadvertently destroy the world. Therefore, they voice their concerns about AI x-risk publicly and advocate for AI safety research. They are aware that this will be costly to Hasnon, e.g. because it could undermine consumer trust or trigger misguided regulation. The transhumanists have no intrinsic interest in harming Hasnon, in fact they mostly like Hasnon’s products. Hasnon management invites them to talks with the aim of removing this PR problem and understands that the upshot of the transhumanists’ position is “if you continue to develop AI, we’ll continue to talk about AI x-risk”.
--
talk of blackmail suggests an appeal to clear-cut principles like “blackmail is (almost) always bad”
One ought to invite a speaker who has seriously considered the possibility that blackmail might be good in certain circumstances, written blog posts about it etc.
https://www.overcomingbias.com/2019/02/checkmate-on-blackmail.html