Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was fear of disruptors showing up at the event, as they have at some events of Peter Singer. Indeed, almost all concerns that were brought up during that meeting were concerns about external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson’s views qua his views alone, but basically all organizers who spoke at the debrief I was part of said that they were interested in hearing Robin’s ideas and would have enjoyed participating in an event with him, and were primarily worried about how others would perceive the event and react to their inviting him.
As such, blackmail feels like a totally fair characterization of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).
More importantly, I am really confused why you would claim so confidently that no threats were made. The prior for actions like this being taken in response to implicit threats is really high, and talking to anyone who has tried to organize events like this will show you that they have experienced implicit or explicit threats of some form or another. In this situation there was also absolutely not an “apparent absence of people pressuring Munich to ‘cancel Hanson’”. There was indeed an abundance of threats that were readily visible to anyone looking at the current public intellectual climate, talking to people who are trying to organize public discourse, and just seeing how many other people are being actively punished on social media and elsewhere for organizing events like this.
While I don’t think this had substantial weight in this specific decision, there was also one very explicit threat made to the organizers at EA Munich, at least if I remember correctly, of an organization removing their official affiliation with them if they were to host Hanson. The organizers assured others at the debrief that this did not play a substantial role in their final decision, but it does at least show that explicit threats were made.
I found it valuable to hear information from the debrief meeting, and I agree with some of what you said—e.g. that it a priori seems plausible that implicit threats played at least some role in the decision. However, I’m not sure I agree with the extent to which you characterize the relevant incentives as threats or blackmail.
I think this is relevant because talk of blackmail suggests an appeal to clear-cut principles like “blackmail is (almost) always bad”. Such principles could ground criticism that’s independent from the content of beliefs, values, and norms: “I don’t care what this is about, structurally your actions are blackmail, and so they’re bad.”
I do think there is some force to such criticism in cases of so-called deplatforming including the case discussed here. However, I think that most conflict about such cases (between people opposing “deplatforming” and those favoring it) is not explained by different evaluations of blackmail, or different views on whether certain actions constitute blackmail. Instead, I think they are mostly garden-variety cases of conflicting goals and beliefs that lead to a different take on certain norms governing discourse that are mostly orthogonal to blackmail. I do have relevant goals and beliefs as well, and so do have an opinion on the matter, but don’t think it’s coming from a value-neutral place.
So I don’t think there’s one side either condoning blackmail or being unaware it’s committing blackmail versus another condemning it. I think there’s one side who wants a norm of having an extremely high bar for physically disrupting speech in certain situations versus another who wants a norm with a lower bar, one side who wants to treat issues independently versus one who wants to link them together, etc. - And if I wanted to decide which side I agree with in a certain instance, I wouldn’t try to locate blackmail (not because I don’t think blackmail is bad but because I don’t think this is where the sides differ), I’d ask myself who has goals more similar to mine, and whether the beliefs linking actions to goals are correct or not: e.g., what consequences would it have to have one norm versus the other, how much do physical disruptions violate ‘deontological’ constraints and are there alternatives that wouldn’t, would or wouldn’t physically disrupting more speech in one sort of situation increase or decrease physical or verbal violence elsewhere, etc.
Below I explain why I think blackmail isn’t the main issue here.
--
I think a central example of blackmail as the term is commonly used is something like
Alice knows information about Bob that Bob would prefer not to be public. Alice doesn’t independently care about Bob or who has access to this information. Alice just wants generic resources such as money, which Bob happens to have. So Alice tells Bob: “Give me some money or I’ll disclose this information about you.”
I think some features that contribute to making this an objectionable case of blackmail are:
Alice doesn’t get intrinsic value from the threatened action (and so it’ll be net costly to Alice in isolation, if only because of opportunity cost).
There is no relationship between the content of the threat or the threatened action on one hand, and Alice’s usual plans or goals.
By the standards of common-sense morality, Bob did not deserve to be punished (or at least not as severely) and Alice did not deserve gains because of the relevant information or other previous actions.
Similar remarks apply to robbing at knifepoint or kidnapping.
Do they also apply to actions you refer to as threats to EA Munich? You may have information suggesting they do, and in that case I’d likely agree they’d be commonly described as threats. (Only “likely” because new information could also update my characterization of threats, which was quite ad hoc.)
However, my a priori guess would be that the alleged threats in the EA Munich case exhibited the above features to a much smaller extent. (In particular the alleged threat of disaffiliation; to a lesser but still substantial extent, the threats of disrupting the event.) Instead, I’d mostly expect things like:
Group X thinks that public appearances by Hanson are a danger to some value V they care about (say, gender equality). So in some sense they derive intrinsic value from reducing the number of Hanson’s public appearances.
A significant part of Group X’s mission is to further value V, and they routinely take other actions for the stated reason to further V.
Group X thinks that according to moral norms (that are either already in place or Group X thinks should be in place) Hanson no longer deserves to speak publicly without disruptions.
To be clear, I think the difference is one of degree rather than black-and-white, and I imagine that in the EA Munich case some of these “threat properties” were present to some extent, e.g.:
Group X doesn’t usually care about the planned topic of Hanson’s talk (tort law).
Whether or not Group X agrees, by the standards of common-sense morality and widely shared norms, it is at least controversial whether Hanson should no longer be invited to give unrelated talks, and some responses such as physically disrupting the talk would arguably violate widely shared norms. (Part of the issue is that some of these norms are themselves contested, with Group X aiming to change them and others defending them.)
Possibly some groups Y, Z, … are involved whose main purpose is at first glance more removed from value V, but these groups nevertheless want to further their main mission in ways consistent with V, or they think it’s useful to signal they care about V either intrinsically or as a concession to perceived outside pressure.
To illustrate the difference, consider the following hypotheticals, which I think would be referred to as blackmail/threats much less, or not at all, by common standards. If we abstract away from the content of values and beliefs, then I expect the alleged threats to EA Munich to be in some ways more similar to these, and some to be overall quite similar to the first:
The Society for Evidence-Based Medicine has friendly relations and some affiliation with the Society of Curious Doctors. Then they learn that the Curious Doctors plan to host a talk by Dr. Sanhon on a new type of scalpel to be used in surgery. However, they know that Dr. Sanhon has in the past advocated for homeopathy. While this doesn’t have any relevance to the topic of the planned talk, they have been concerned for a long time that hosting pro-homeopathy speakers at universities provides a false appearance of scientific credibility for homeopathy, which they believe is really harmful and antithetical to their mission of furthering evidence-based medicine. They haven’t encountered a similar case before, so they don’t have a policy in place for how to react; after an ad-hoc discussion, they decide to inform the Curious Doctors that they plan to [disrupt the talk by Sanhon / remove their affiliation]. They believe the responses they’ve discussed would be good to do anyway if such talks happen, so they think of their message to the Curious Doctors more as an advance notice out of courtesy than as a threat.
Alice voted for Republican candidate R. Nashon because she hoped they would lower taxes. She’s otherwise more sympathetic to Democratic policies, but cares most about taxation. Then she learns that Nashon has recently sponsored a tax increase bill. She writes to Nashon’s office that she’ll vote for the Democrats next time unless Nashon reverses his stance on taxation.
A group of transhumanists is concerned about existential risks from advanced AI. If they knew that no-one was going to build advanced AI, they’d happily focus on some of their other interests such as cryonics and life extension research. However, they think there’s some chance that big tech company Hasnon Inc. will develop advanced AI and inadvertently destroy the world. Therefore, they voice their concerns about AI x-risk publicly and advocate for AI safety research. They are aware that this will be costly to Hasnon, e.g. because it could undermine consumer trust or trigger misguided regulation. The transhumanists have no intrinsic interest in harming Hasnon, in fact they mostly like Hasnon’s products. Hasnon management invites them to talks with the aim of removing this PR problem and understands that the upshot of the transhumanists’ position is “if you continue to develop AI, we’ll continue to talk about AI x-risk”.
talk of blackmail suggests an appeal to clear-cut principles like “blackmail is (almost) always bad”
One ought to invite a speaker who has seriously considered the possibility that blackmail might be good in certain circumstances, written blog posts about it etc.
https://www.overcomingbias.com/2019/02/checkmate-on-blackmail.html
there was also one very explicit threat made to the organizers at EA Munich, at least if I remember correctly, of an organization removing their official affiliation with them if they were to host Hanson.
If I were reading this and didn’t know the facts, I would assume the organization you’re referring to might be CEA. I want to make clear that CEA didn’t threaten EA Munich in any way. I was the one who advised them when they said they were thinking of canceling the event, and I told them I could see either decision being reasonable. CEA absolutely would not have penalized them for continuing with the event if that’s how they had decided.
Yes! This was definitely not CEA. I don’t have any more info on what organization it is (the organizers just said “an organization”).
Sorry, didn’t mean to imply that you intended this—just wanted to be sure there wasn’t a misunderstanding.
FYI, I read this, didn’t know the facts, and it didn’t occur to me that the organisation Habryka was referring to was CEA—I think my guess was that it was maybe some other random student group?
It didn’t occur to me that the organization was CEA but I also didn’t read it too carefully.
As such, blackmail feels like a totally fair characterization [of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).]
As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent’s remarks about it being bad to submit to blackmail are inapposite.
Crucially (q.v. Daniel’s comment), not all instances where someone says (or implies), “If you do X (which I say harms my interests), I’m going to do Y (and Y harms your interests)” are fairly characterised as (essentially equivalent to) blackmail. To give a much lower resolution of Daniel’s treatment, if (conditional on you doing X) it would be in my interest to respond with Y independent of any harm it may do to you (and any coercive pull it would have on you doing X in the first place), informing you of my intentions is credibly not a blackmail attempt, but a better-faith “You do X then I do Y is our BATNA here, can we negotiate something better?” (In some treatments these are termed warnings versus threats, or using terms like ‘spiteful’, ‘malicious’ or ‘bad faith’ to make the distinction).
The ‘very explicit threat’ of disassociation you mention is a prime example of ‘plausibly (/prima facie) not-blackmail’. There are many credible motivations to (e.g.) renounce (or denounce) a group which invites a controversial speaker you find objectionable, independent from any hope that threatening this makes them ultimately resile from running the event after all. So too ‘trenchantly criticising you for holding the event’, ‘no longer supporting your group’, ‘leaving in protest (and encouraging others to do the same)’, etc. Any or all of these might be wrong for other reasons—but (again, per Daniel) ‘they’re trying to blackmail us!’ is not necessarily one of them.
(Less-than-coincidentally, the above are also acts of protest which are typically considered ‘fair game’, versus disrupting events, intimidating participants, campaigns to get someone fired, etc. I presume neither of us take various responses made to the NYT when they were planning to write an article about Scott to be (morally objectionable) attempts to blackmail them, even if many of them can be called ‘threats’ in natural language).
Of course, even if something could plausibly not be a blackmail attempt, it may in fact be exactly this. I may posture that my own interests would drive me to Y, but I would privately regret having to ‘follow through’ with this after X happens; or I may pretend my threat of Y is ‘only meant as a friendly warning’. Yet although our counterparty’s mind is not transparent to us, we can make reasonable guesses.
It is important to get this right, as the right strategy to deal with threats is a very wrong one to deal with warnings. If you think I’m trying to blackmail you when I say “If you do X, I will do Y”, then all the usual stuff around ‘don’t give in to the bullies’ applies: by refusing to give in to my threat, you deter me (and others) from attempting to bully you in future. But if you think I am giving a good-faith warning when I say this, it is worth looking for a compromise. Being intransigent as a matter of policy—at best—means we always end up at our mutual BATNAs even when there were better-for-you negotiated agreements we could have reached.
At worst, it may induce me to make the symmetrical mistake—wrongly believing your behaviour is in bad faith. That your real reasons for doing X, and for being unwilling to entertain the idea of compromise to mitigate the harm X will do to me, are because you’re actually ‘out to get me’. Game theory will often recommend retaliation as a way of deterring you from doing this again. So the stage is set for escalating conflict.
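To make the asymmetry concrete, here is a minimal illustrative sketch in Python (not part of the original exchange; all payoff numbers and probabilities are assumptions made up purely for illustration) of how the same message “if you do X, I will do Y” recommends different responses depending on whether it is read as a good-faith warning or as a threat:

```python
# Illustrative toy model only: a crude expected-payoff comparison of how to
# respond to "If you do X, I will do Y" under the two readings discussed
# above (warning vs threat). Every number below is an arbitrary assumption.

def expected_payoffs(value_of_x, harm_from_y, follow_through_prob,
                     future_threat_cost, compromise_value):
    """Return your expected payoff for three responses: proceed, give in, negotiate.

    follow_through_prob:  chance the other side actually does Y if you proceed.
                          For a good-faith warning this is ~1.0 (Y is in their
                          interest anyway); for a bluffing threat it may be lower.
    future_threat_cost:   extra expected cost of inviting more threats later,
                          incurred only if you give in to a *threat* (set to 0
                          when treating the message as a warning).
    compromise_value:     payoff of a negotiated middle ground, assumed to be
                          available only if the other side acts in good faith.
    """
    proceed = value_of_x - follow_through_prob * harm_from_y
    give_in = 0 - future_threat_cost
    negotiate = compromise_value
    return {"proceed": proceed, "give in": give_in, "negotiate": negotiate}


# Reading the message as a warning: Y will almost surely happen if you proceed,
# refusing has no deterrence value, and a compromise may exist.
warning = expected_payoffs(value_of_x=10, harm_from_y=6,
                           follow_through_prob=1.0,
                           future_threat_cost=0, compromise_value=7)

# Reading the message as a threat: the threatener may not follow through if you
# have a policy of refusing, giving in invites future threats, and no
# good-faith compromise is on offer.
threat = expected_payoffs(value_of_x=10, harm_from_y=6,
                          follow_through_prob=0.4,
                          future_threat_cost=8, compromise_value=float("-inf"))

print("as warning:", warning)   # negotiating (7) beats proceeding (4)
print("as threat:", threat)     # proceeding (7.6) beats giving in (-8)
```

The specific numbers don’t matter; the point is the qualitative asymmetry: under the ‘warning’ reading a negotiated compromise tends to dominate, while under the ‘threat’ reading refusing to give in tends to dominate, which is why misclassifying one as the other is costly.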
Directly: Widely across the comments here you have urged for charity and good faith to be extended to evaluating Hanson’s behaviour which others have taken exception to—that adverse inferences (beyond perhaps “inadvertently causes offence”) are not only mistaken but often indicate a violation of discourse norms vital for EA-land to maintain. I’m a big fan of extending charity and good faith in principle (although perhaps putting this into practice remains a work in progress for me). Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out. Beyond this being normatively unjust, it is also prudentially unwise—presuming bad faith in those who object to your actions is a recipe for making a lot of enemies you didn’t need to, especially in already-fractious intellectual terrain.
You could still be right—despite the highlighted ‘very explicit threat’ which is also very plausibly not blackmail, despite the other ‘threats’ alluded to which seem also plausibly not blackmail and ‘fair game’ protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer what really happened was bullying which was acquiesced to. But I doubt it.
I agree that the right strategy to deal with threats is substantially different from the right strategy to deal with warnings. I think it’s a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though I think overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. Your point that it is really important to get right whether we are dealing with a warning or a threat is one of the key pieces I would want people to model when thinking about situations like this, and so your relatively clear explanation of that is appreciated (as well as the reminder for me to keep the costs of premature retaliation in mind).
Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out.
This just seems like straightforward misrepresentation? What fervid hyperbole are you referring to? I am trying my best to make relatively clear and straightforward arguments in my comments here. I am not perfect and sometimes will get some details wrong, and I am sure there are many things I could do better in my phrasing, but nothing that I wrote on this post strikes me as being deserving of the phrase “fervid hyperbole”.
I also strongly disagree that I am applying some kind of one-sided charity to Hanson here. The only charity that I am demanding is to be open to engaging with people you disagree with, and to be hesitant to call for the cancellation of others without good cause. I am not even demanding that people engage with Hanson charitably. I am only asking that people do not deplatform others based on implicit threats by some other third party they don’t agree with, and do not engage in substantial public attacks in response to long-chained associations removed from denotative meaning. I am quite confident I am not doing that here.
Of course, there are lots of smaller things that I think are good for public discourse that I am requesting in addition to this, but I think overall I am running a strategy that seems quite compatible to me with a generalizable maxim that if followed would result in good discourse, even with others that substantially disagree with me. Of course, that maxim might not be obvious to you, and I take concerns of one-sided charity seriously, but after having reread every comment of mine on this post in response to this comment, I can’t find any place where such an accusation of one-sided charity fits well to my behavior.
That said, I prefer to keep this at the object level, at least given that the above really doesn’t feel like it would start a productive conversation about conversation norms. But I hope it is clear that I disagree strongly with that characterization of me.
You could still be right—despite the highlighted ‘very explicit threat’ which is also very plausibly not blackmail, despite the other ‘threats’ alluded to which seem also plausibly not blackmail and ‘fair game’ protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer what really happened was bullying which was acquiesced to. But I doubt it.
That’s OK. We can read the evidence in separate ways. I’ve been trying really hard to understand what is happening here, have talked to the organizers directly, and am trying my best to build models of what the game-theoretically right response is. I expect if we were to dig into our disagreements here more, we would find a mixture of empirical disagreements, and some deeper disagreements about when something constitutes blackmail, or something game-theoretically equivalent. I don’t know which direction would be more fruitful to go into.