Cool, thanks for clarifying your view! Here’s a version of your comment that I wouldn’t have objected to at all (filling in details where I left things in square brackets):
‘You’re right that the building is a manor house rather than a castle, and I’m generally in favor of EAs being accurate and precise in our language. That said, I think [demographic D] will mostly not be convinced of EA’s virtue by this argument, because people still think of manor houses as quite fancy. And I think it’s important for EA to convince [demographic D] of our virtue, because [two-sentence summary of why I think we should prioritize appealing to demographic D].’
The main things I found objectionable about “At the point you are having to debate the definition of a castle you’ve lost the optics argument even if you’re technically correct.” were:
The response is snarky, in a way that slightly nudges EA toward a norm of “it’s cringey and low-status to get into nit-picky arguments about what’s true, when what really matters is public perception”. I want to pump hard against moves in that direction, even small ones. I’m already wary of how much EA focuses on public perception; propagating the meme that we should focus on public perception and it’s laughable to care about what’s true (on topics with PR implications) seems outright toxic to me, even if that wasn’t your intent at all.
The response says nothing about whether you agree or disagree with the nitpick about castle terminology, further reinforcing the idea that accuracy is silly and unimportant for EAs to internally think about whenever a topic is PR-adjacent.
The response gives no argument for who you think EA should be trying to court here, or why we should be courting them. I think this is a pretty important step, because it’s quite important (and not trivial) for EAs to carefully mentally distinguish “let’s do PR action X because of specific real-world goal Y” from generalized “we feel socially anxious that we aren’t being socially embraced by enough other monkeys, and will reflexively try to appease them”.
The latter approach doesn’t work in a hostile environment of journalists or Twitter trolls who will strategically drum up outrage in order to push your psychological buttons and compel concessions from you. If you’re going to “play the game” and try to outmaneuver them, it’s very important that you do so in a clear-sighted way. Which requires being unusually explicit about why you think X is a good idea, as opposed to just smirkily shooting down other EAs’ points with an “lol how cringe of you to respond to falsehoods with corrections”.
If it actually matters to avoid some “cringe” behavior, then we should do that in a self-aware way that involves explicitly understanding what we’re doing and why, rather than just parroting whatever the current social gradient is. This helps ensure, among other things, that EA deliberately keeps its cringiness in contexts where it’s actually better to select the cringe option.
More broadly, I objected to what struck me as an attempt to import into the EA Forum the norms of “play the politics game rather than trying to figure out what’s true”, as opposed to merely describing those norms here and explicitly proposing some policy response.
If you’re going to take on the epistemic risk of playing the Game, you need to be sure that you’re explicitly simulating the Game’s moving parts in your world-model, as opposed to steering toward options based primarily on inchoate feelings about what’s popular or unpopular. Otherwise, you’re liable to over-weight near-term status risks, because human brains hyperbolically discount, and were mostly built by evolution to handle coalitional politics in ~200-person local communities where rejection meant literal death.
To narrow down our disagreement a bit: is your position (a) this won’t have reputational effects on EA, (b) there will be reputational effects but they won’t decrease recruitment and donations, or (c) even if it does decrease recruitment and donations, we shouldn’t care about that?
None of the above! It’s: this will plausibly decrease recruitment and donations a nonzero amount, and that’s a real cost, but news-cycle-obsessed, status-anxious people will tend to fixate on this more than makes sense, in ways that:
exaggerate the long-term importance of specific news-cycle ups and downs, resulting in panicky and untethered-from-reality reactions;
distract from more useful stuff we’d otherwise be doing;
provide bad incentives for nervous EAs to dissemble-in-the-name-of-EA or rush-to-concede-too-much;
signal weakness and blood-in-the-water in the political game;
and incentivize adversaries to try to push your buttons more.
Something can be a real cost or problem, and yet the natural reaction to it can still make things worse rather than better.
I think a few EAs should have a day job of thinking about hit-piece writers (and related topics), and other EAs should mostly ignore the topic, except insofar as they see locally false things and (candidly, unstrategically) chime in to say the true thing in response. (In particular, we should chime in with the true thing whether doing so makes EA look better or worse.)
It’s short-sighted to narrow the people we want to persuade down to only a certain set of people. Everyone’s donations and other contributions are equally valuable. And the general perception of EA affects people’s likelihood of learning more to begin with.
I agree that we shouldn’t completely write off anyone. But we should very much prioritize reaching some groups over others, and it’s rarely the case that there’s any action available to EA that will please everyone equally. E.g., research of a given quality level is equally useful regardless of who it comes from, but not all human beings are equally likely to do useful research, and pretending that they are doesn’t help anyone.
We shouldn’t put equal effort into outreach to theologians and to biosecurity specialists. Generalizing this principle, we shouldn’t put equal effort into outreach to “people who care a lot about truth” and “people who don’t care a lot about truth”. (Though yes, we should put nonzero effort into reaching the latter group, insofar as we can do so without compromising our core principles or neglecting more-tractable and more-important opportunities.)