Great post, I agree with most of it!

Overall, I’m in favor of more (well-executed) public advocacy à la AI Pause, though I do worry a lot about various backfire risks (I also wonder whether a message like “AI slow” may be better). And I commend you for taking the initiative despite it (I imagine) being kinda uncomfortable or even scary at times!
(ETA: I’ve become even more uncertain about all of this. I might still be slightly in favor of (well-executed) AI Pause public advocacy but would probably prefer emphasizing messages like conditional AI Pause or AI Slow, and yeah, it all really depends greatly on the execution.)
The inside-outside game spectrum seems very useful. We might want to keep in mind another (admittedly obvious) spectrum, ranging from hostile/confrontational to nice/considerate/cooperative.
Two points in your post made me wonder whether you view the outside game as necessarily being more on the hostile/confrontational end of the spectrum:
1) As an example of the outside game, you list “moralistic, confrontational advocacy” (emphasis mine).
2) You also write (emphasis mine):
Funnily enough, even though animal advocates do radical stunts, you do not hear this fear expressed much in animal advocacy. If anything, in my experience, the existence of radical vegans can make it easier for “the reasonable ones” to gain access to institutions.
This implicitly associates the outside game with radical stunts and radical, “unreasonable” people.
However, my sense is that outside-game interventions (hereafter: activism or public advocacy) can differ enormously on the hostility vs. considerateness dimension, even while holding other effects (such as efficacy) constant.
The obvious example is Martin Luther King’s activism, perhaps most succinctly captured by his famous “I Have a Dream” speech, which was non-confrontational and emphasized themes of cooperation, respect, and even camaraderie.[1] (In fact, King was criticized by others for being too compromising.[2]) On the hostile/confrontational side of the spectrum you had people like Malcolm X or the Black Panther Party.[3] In the field of animal advocacy, you have organizations like PETA on the confrontational end of the spectrum and, say, Mercy for Animals on the more considerate side.
As you probably have guessed, I prefer considerate activism over more confrontational activism. For example, my guess is that King and Mercy for Animals have done much more good for African Americans and animals, respectively, than Malcolm X and PETA.
(As an aside and to be super clear, I didn’t want to suggest that you or AI Pause is or will be disrespectful/hostile and, say, throw paper clips at Meta employees! :P )
A couple of weak arguments in favor of considerate/cooperative public advocacy over confrontational/hostile advocacy:
Taking a more confrontational tone makes everyone more emotional and tense, which probably decreases truth-seeking, scout-mindset, and the general epistemic quality of discourse. It also makes people more aggressive, might escalate conflict, and risks fostering dangerous emotional and behavioral patterns such as spite, retaliation, or even (threats of) violence. It may also help bring about a climate where the most outrage-inducing message spreads fastest. Last, since this is EA, here’s the obligatory option-value argument: it seems easier to go from a more considerate to a more confrontational stance than vice versa.
As an aside (and contrary to what you write in the above quote), I have often heard the fear expressed that the actions of radical vegans will backfire. I’ve certainly witnessed people being much less receptive to my animal welfare arguments because they’d had bad experiences with “unreasonable” vegans who, e.g., yelled expletives at them.[4] I think you can also see this reflected in the general public, where vegans don’t have a great reputation, partly based on the aggressive actions of a few confrontational and hostile vegans or vegan organizations like PETA.
Political science research (e.g., Simpson et al., 2018) also seems to suggest that nonviolent protests are more effective than violent protests. (Of course, I’m not trying to imply that you were arguing for violent protests; in fact, you repeatedly say (in other places) that you’re organizing a nonviolent protest!) Importantly, the Simpson et al. paper suggests that violent protests make the protesting side appear unreasonable, and that this is the mechanism that causes the public to support that side less. It seems plausible to me that more confrontational and hostile public activism, even if it’s nonviolent, is more likely to appear unreasonable (especially for movements that might seem a bit fringe and don’t yet have a long history of broad public support).
In general, I worry that increasing hostility/conflict, in particular in the field of AI, may be a risk factor for x-risk and especially s-risks. Of course, many others have written about the value of compromise/being nice and the dangers of unnecessary hostility, e.g., Schubert & Cotton-Barratt (2017), Tomasik (many examples, most relevant 2015), and Baumann (here and here).
Needless to say, there are risks to being too nice/considerate, but I think they are outweighed by the benefits, though it obviously depends on the specifics. (As you imply in your post, it’s probably also true that all public protests, by their very nature, are more confrontational than silently working out compromises behind closed doors. Still, my guess is that certain forms of public advocacy can score fairly high on the considerateness dimension while still being effective.)
To summarize, it may be valuable to emphasize considerateness (alongside other desiderata such as good epistemics) as a core part of the AI Pause movement’s memetic fabric, to minimize the probability that it becomes more hostile in the future, since we will probably have only limited memetic control over the movement once it gets big. This may also amount to pulling the rope sideways: public advocacy against AI risk may be somewhat overdetermined (?), but we are perhaps at an inflection point where we can still shape its overall tone/stance on the confrontational vs. considerate spectrum.
Examples: “former slaves and the sons of former slave owners will be able to sit down together at the table of brotherhood” and “little black boys and black girls will be able to join hands with little white boys and white girls as sisters and brothers”.
From Wikipedia: “Some Black leaders later criticized the speech (along with the rest of the march) as too compromising. Malcolm X later wrote in his autobiography: ‘Who ever heard of angry revolutionaries swinging their bare feet together with their oppressor in lily pad pools, with gospels and guitars and “I have a dream” speeches?’”
To be fair, their different tactics were probably also the result of more extreme religious and political beliefs.
I should note that I probably have much less experience with animal advocacy than you.
David—you make some excellent points here. I agree that being agreeable vs. disagreeable might be largely orthogonal to playing the ‘inside game’ vs. the ‘outside game’. (Except that highly disagreeable people trying to play the inside game might get ostracized from inside-game organizations, e.g. fired from OpenAI.)
From my evolutionary psychology perspective, if agreeableness always worked for influencing others, we’d have all evolved to be highly agreeable; if disagreeableness always worked, we’d all have evolved to be highly disagreeable. The basic fact that people differ in the Big Five trait of Agreeableness (we psychologists tend to capitalize well-established personality traits) suggests that, at the trait level, there are mixed costs and benefits for being at any point along the Agreeableness spectrum. And of course, at the situation level, there are also mixed costs and benefits for pursuing agreeable vs. disagreeable strategies in any particular social context.
So, I think there are valid roles for people to use a variety of persuasion and influence tactics when doing advocacy work, and playing the outside game. On X/Twitter for example, I tend to be pretty disagreeable when I’m arguing with the ‘e/acc’ folks who dismiss AI safety concerns—partly because they often use highly disagreeable rhetoric when criticizing ‘AI Doomers’ like me. But I tend to be more agreeable when trying to persuade people I consider more open-minded, rational, and well-informed.
I guess EAs can do some self-reflection about their own personality traits and preferred social interaction styles, and adopt advocacy tactics that are the best fit, given who they are.
Thanks, Geoffrey, great points.

I agree that people should adopt advocacy styles that fit them and that the best tactics depend on the situation. What (arguably) matters most is making good arguments and raising the epistemic quality of (online) discourse. This requires participation, and if people want/need to use disagreeable rhetoric in order to do that, I don’t want to stop them!
Admittedly, it’s hypocritical of me to champion kindness while staying on the sidelines and not participating in, say, Twitter discussions. (I appreciate your engagement there!) Reading and responding to countless poor and obnoxious arguments is already challenging enough, even without the additional constraint of always having to be nice and considerate.
Your point about the evolutionary advantages of different personality traits is interesting. However (you obviously know this already), just because some trait or behavior increased inclusive fitness in the EEA doesn’t mean it increases global welfare today. One particularly relevant example may be Dark Tetrad traits, which correlate negatively with Agreeableness (apologies for injecting my hobbyhorse into this discussion :) ).
Generally, it may be important to unpack different notions of being “disagreeable”. For example, this could mean, say, straw-manning or being (passive-)aggressive. These behaviors are often infuriating and detrimental to epistemics so I (usually) don’t like this type of disagreeableness. On the other hand, you could also characterize, say, Stefan Schubert as being “disagreeable”. Well, I’m a big fan of this type of “disagreeableness”! :)
David—nice comment; thanks.

I agree that Dark Tetrad traits applied to modern social media are often counterproductive (e.g., anonymous trolls trolling).
And you’re right that there are constructive ways to be disagreeable and toxic ways to be disagreeable—just as there are toxic ways to be overly agreeable! (e.g., validating people’s false claims or misguided reactions).
Thanks, David :)

There’s such a wide-open field here that we can make a lot of headway with nice tactics. No question from me that that should be the first approach, and I would be thrilled and relieved if it just kept working. There’s no reason to rush to hostility, and I don’t know if I would be able to run a group like that if I thought it was coming to that, but there may one day be a place for (nonviolent) hostile advocacy.
I sometimes see people make a similar point to yours in an illogical way, basically asserting that hostility never works, and I don’t agree with that. People think they hate PETA while updating in PETA’s direction on whatever issues it is advocating for, promptly forgetting they ever thought anything different. It’s a difficult role to play, but I think PETA absolutely knows what they are doing and how to influence people. It’s common in social change for moderate groups to get the credit, and for people to remember disliking the radical groups, even though the direction of society’s update was determined by the radical flank pushing the Overton window.
The answer is not “hostility is bad/doesn’t work” or “hostility is good/works”. It depends on the context. It’s an inconvenient truth that hostility sometimes works, and works where nothing else does. I don’t think we should hold it off the table forever.
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth. If showing hostility works to convey the situation, then hostility could be merited.
Again, though, one amazing thing about not having explored outside game much in AI Safety is that we have the luxury of pushing the Overton window with even the most bland advocacy. I think we should advance that frontier slowly. And I really hope it’s not necessary to advance into hostility.
EDIT: Just to be absolutely clear—the hard line that advocacy should not cross is violence. I am never using the word “hostility” to refer to violence.
Thanks, makes sense!

I agree that confrontational/hostile tactics have their place and can be effective (under certain circumstances they are even necessary). I also agree that there are several plausible positive radical flank effects. Overall, I’d still guess that, say, PETA’s efforts are net negative—though it’s definitely not clear to me and I’m by no means an expert here. It would be great to have more research on this.[1]
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.
Yeah, I’m sympathetic to such concerns. I sometimes worry about being biased against the more “dirty and tedious” work of trying to slow down AI, or public AI safety advocacy. For example, the fact that it took us more than ten years to seriously consider the option of “slowing down AI” seems a bit puzzling. One possible explanation is that some of us have had a bias towards intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy. To be clear, there were of course also many good reasons not to consider such options earlier (such as a complete lack of public support). (Also, AI alignment research, generally speaking, is great, of course!)
It still seems possible to me that one can convey strong messages like “(some) AI companies are doing something reckless and unreasonable” while being nice and considerate, similarly to how Martin Luther King very clearly condemned racism without being (overly) hostile.
Again, though, one amazing thing about not having explored outside game much in AI Safety is that we have the luxury of pushing the Overton window with even the most bland advocacy.
For example, present participants with (hypothetical) i) confrontational and ii) considerate AI pause protest scenarios/messages and measure resulting changes in beliefs and attitudes. I think Rethink Priorities has already done some work in this vein.
For example, the fact that it took us more than ten years to seriously consider the option of “slowing down AI” seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.
I’d guess it’s also that advocacy and regulation just seemed less marginally useful in most worlds, given the AI timelines suspected even 3 years ago?
Hmmm, your reply makes me more worried than before that you’ll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :’)
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.
I’m not completely sure what you mean by “legitimate jobs”, but I generally have the impression that EAs working on AI risk have very mixed feelings about AI companies advancing cutting-edge capabilities? Or sharing models openly? And I think reconceptualizing “the behavior of AI companies” (I would suggest trying to be more concrete in public, even here) as aggressive and hostile will itself be perceived as hostile, which you said you wouldn’t do? I think that’s definitely not “the most bland advocacy” anymore?
Also, the way you frame your pushback makes me worry that you’ll lose patience with considerate advocacy way too quickly:
“There’s no reason to rush to hostility”
“If showing hostility works to convey the situation, then hostility could be merited.”
“And I really hope it’s not necessary to advance into hostility.”
It would be convenient for me to say that hostility is counterproductive but I just don’t believe that’s always true. This issue is too important to fall back on platitudes or wishful thinking.
Also, the way you frame your pushback makes me worry that you’ll lose patience with considerate advocacy way too quickly
I don’t know what to say if my statements led you to that conclusion. I felt like I was saying the opposite. Are you just concerned that I think hostility can be an effective tactic at all?