TracingWoodgrains—thanks for an excellent post. I think it should lead many EAs to develop a new and more balanced perspective on this controversy.
And thanks for mentioning my EA Forum comments about Ben Pace doing amateur investigative reporting—reporting that arguably didn’t live up to the standards of basic journalistic integrity (regardless of how much time he and the Lightcone team may have put into it).
This leaves us with a very awkward question about the ongoing anonymity of ‘Alice’ and ‘Chloe’. I don’t know what the right answer is, but I’m curious what other EAs think.
We seem to be in a situation where two disgruntled ex-employees of an EA organization coordinated to spread very harmful, false or highly exaggerated claims about the organization with the deliberate intent of slandering it and harming its leaders. They convinced someone with power and influence in the community to spend a lot of time confirming their claims, writing a highly negative public report, and paying them as whistleblowers/informants. Later, the slandered organization published a long refutation of the ex-employees’ claims, showing that many of them were false or highly exaggerated.
Yet the key figures in this whole EA community drama, ‘Alice’ and ‘Chloe’, remain absent from the discussion—ghostly presences that, apparently, must be treated as blameless, as if they were innocent, virtuous, righteous whistle-blowers with no moral agency or accountability of their own. They have the luxury of remaining behind a cloak of anonymity, despite their apparently false allegations. But I’m not sure why.
I don’t know who ‘Alice’ and ‘Chloe’ are. I haven’t tried to find out. Someone once mentioned their real-world identities to me in passing, but I’ve tried to forget them, and I have no intention of sharing them.
But here’s the thing.
If I were running an EA organization, I would really want to know the names of any EA-involved people who seem willing to spread false, exaggerated, harmful, and slanderous claims about their former employers. I would not want to hire such people. And I would be furious if I did hire them, in ignorance of their previous slanders, and then they did the same thing to my organization that they did to their former organization—all while being protected by EA’s cloak of anonymity.
So, what do you all think? If anonymous ‘whistleblowers’ turn out to have made a lot of false, highly exaggerated, and/or highly harmful claims about EA people and organizations, should their anonymity remain protected? Or would it be in the general interests of the EA movement, and of other EA organizations that they might work for in the future, for their names to be known, and for them to be held accountable for their actions?
I honestly do not know the right ethical line to take on this anonymity issue. But I’m raising it because it seems very odd that nobody else seems to be.
How would you define the set of circumstances that are not in the “vast majority”? My initial reaction is vaguely along the lines of: lack of good faith + clear falsity of at least the main thrust of the accusation + lack of substantial mistreatment of the pseudonymous person by their target. But how does one judge the good faith of a pseudonym?
Whistleblower protection is necessary when Abe provides evidence that Bill harmed Cindy; otherwise, Abe lacks incentive to help Cindy. It is less important when Abe defends himself against harm caused by Bill.
There’s something to this, but I don’t think the incentives argument maps neatly onto the presence/absence of third parties. It’s not entirely clear to me what tangible incentive “Alice” and “Chloe” would have to tell their stories to Ben with permission to share with the broader public. The financial payment seems not to have been anticipated. Having proceeded under pseudonyms, the bulk of any sympathy they might get from the community wouldn’t translate into better real-world outcomes for the individuals themselves.
In these kinds of cases, the motive will often be psychological. People in this position could be motivated by altruistic motives (e.g., a desire for others not to experience the same things they believe they did) or non-altruistic motives (e.g., a hope that the community will roast people who the pseudonymous individuals believe did them wrong). In the former case, a default norm of respecting pseudonymity is important. Altruistic whistleblowers aren’t getting much out of it themselves (and are already devoting a lot of time and stress to the communal good).
There’s a unilateralist’s curse issue here—if there are (say) 100 people who know the identities of Alice and Chloe, does it take only one of them deciding that breaching the pseudonyms would be justified? (Some quick arithmetic on this below.)
[Edit to add: I think the questions Geoffrey is asking are worthwhile ones to ask. I am just struggling to see how an appropriate decision to unmask could be made given the community’s structure without creating this problem. I don’t see a principled basis for declaring that, e.g., CHSP can legitimately decide to unmask but everyone else had better not.]
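To put rough, illustrative numbers on that worry (the figures are made up; this is just the standard unilateralist’s-curse arithmetic): if each of $n$ people who know the identities independently concludes, with some small probability $p$, that unmasking is justified, then the probability that at least one of them acts is

$$P(\text{at least one unmasks}) = 1 - (1 - p)^n$$

With $n = 100$ and $p = 0.01$, that comes to $1 - 0.99^{100} \approx 0.63$. In other words, the de facto decision gets made by the single most unmasking-inclined person who knows the names, not by the median judgment of the group.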
I continue to think that something went wrong for people to come away with takes that lump together Alice and Chloe in these ways.
Not because I’m convinced that Alice is as bad as Nonlinear makes it sound, but because, even in Nonlinear’s own portrayal, Chloe merely had a poor reaction to the specific employment situation, and (unlike Alice) is not depicted as having a general pattern/history of making false/misleading claims. That difference matters immensely regarding whether it’s appropriate to warn future potential employers. (Besides, when I directly compare Chloe’s writings to Nonlinear’s, I find it more likely that they’re unfair towards her than vice versa.)
FWIW, I’m not saying that coming away with this interpretation is all your fault. If someone is only skim-reading Nonlinear’s post, then I can see why they might form similarly negative views about both Alice and Chloe (though, on close reading, it’s apparent that even Nonlinear would agree there’s a difference). My point is that this is more a feature of their black-and-white counterattack narrative and not so much an accurate picture of what I think most likely happened.
Lukas—I guess one disadvantage of pseudonyms like ‘Alice’ and ‘Chloe’ is that it’s quite difficult for outsiders who don’t know their real identities to distinguish between them very clearly—especially if their stories get very intertwined.
If we can’t attach real faces and names to the allegations, and we can’t connect their pseudonyms to any other real-world information about them, such as LinkedIn profiles, web pages, EA Forum posts, etc., then it’s much harder to remember who’s who, and to assess their relative degrees of reliability or culpability.
That’s just how the psychology of ‘person perception’ works. The richer the information we have about people (e.g., real names, faces, profiles, backgrounds), the easier it is to remember them accurately, distinguish between their actions, and differentiate their stories.
You’re right about the effort involved, but when these are real people who you are discussing deanonymizing in order to try to stop them from getting jobs, you should make the effort.
Well, all three key figures at Nonlinear are also real people, and they got deanonymized by Ben Pace’s highly critical post, which had the likely effect (unless challenged) of stopping Nonlinear from doing its work, and of stigmatizing its leaders.
So, I don’t understand the double standard, where those subject to false allegations don’t enjoy anonymity, and those making the false allegations do get to enjoy anonymity.
So, I don’t understand the double standard, where those subject to false allegations don’t enjoy anonymity, and those making the false allegations do get to enjoy anonymity.
I don’t think everyone in the replies was arguing that Ben’s initial post was okay but that deanonymizing Alice and/or Chloe would be bad (which I think you would call a double standard, which I’m not commenting on right now). Some probably do, but others probably think that Ben’s initial post was bad, that deanonymizing Alice and/or Chloe would also be bad, and that we shouldn’t try to correct one wrong with another. That doesn’t look like a double standard to me.
I’m just puzzled about the apparent double standard where the first people to make allegations enjoy privacy & anonymity (even if their allegations seem to be largely false or exaggerated), but the people they’re accusing don’t enjoy the same privilege.
I agree that the Forum’s rules and norms on privacy protection are confused. A few observations:
(1) Suppose a universe in which the first post on this topic had been from Nonlinear, and had accused Alice and Chloe (by their real names) of a pattern of mendaciously spreading lies about Nonlinear. Would that post have been allowed to stay up? If yes, it is hard to come up with a principled reason why Alice and Chloe can’t be named now.
If no, we would need to think about why this hypothetical post would have been disallowed. The best argument I came up with would be that Alice and Chloe are, as far as I know, people with no real prominence/influence/power (“PIP”) within or without EA. Under this argument, there is a greater public interest in the actions of those with PIP, and accepting a role of PIP necessarily means sacrificing some of the privacy rights that non-PIPs get.
(2) Another possibility involves the idea of standing. Under this theory, Alice and Chloe had standing to name people at Nonlinear because they were the ones who allegedly experienced harm. Ben had derivative standing because Alice and Chloe had given him permission to share their stories on the Forum. Under this theory, Nonlinear (and individuals named in Ben’s post) would have standing to name Alice and Chloe as the parties allegedly harmed by their conduct. The rest of us would not have standing. [Edit to add footnote here.[1]]
Maybe the aggrieved party isn’t the best imaginable arbiter of whether someone should be unmasked, but it’s probably better than concluding that each and every one of us gets to make that decision. That could more easily lead to a situation in which 99% of us agree that the names shouldn’t be disclosed, and the 1% gets to decide. Of course, that could happen under a standing theory as well, but the 1% minority opinion has to coincide with the list of parties with standing. That at least narrows the problem.
(3) This situation involves a mix of Forum and non-Forum conduct. I sense that the norm against doxxing people is stronger for Forum conduct (or at least for conduct involving similar sorts of speech) than for non-Forum conduct. This may reflect a difference in the norms of Internet communities vs. physically-grounded communities, or a sense that Forum speech is often core expressive speech or otherwise entitled to extra solicitude.
In any event, based on the fact that a number of people seem to know their identities, it sounds like the tale of Alice and Chloe was not a closely-kept secret before Ben’s post went up. Presumably this information came, directly or indirectly, from Alice and Chloe. In other words, there has been potentially relevant non-Forum conduct that did not magically morph into Forum conduct merely because Ben decided to write a post sharing the same information.
That raises the following hypothetical. Let’s say that Dan allegedly made some homophobic, racist, sexist, and/or otherwise offensive comments.[2] Does the location in which the comments were made change whether he can be identified? For example, is it OK to write a Forum post identifying Dan by name if those comments were posted on the Forum under a pseudonym? What if they were said at an EAG afterparty? In a non-EA space on Reddit? With a mask on in public (by someone who recognized Dan’s voice)? If the answers are not the same, why is that the case? I don’t claim to know the answers, by the way; this isn’t a trick question.
(4) I do not at present have a clear opinion on whether naming Alice, Chloe, or both should be allowed on the Forum. I do have an opinion that there needs to be a clear, logical set of rules or at least principles—ideally, laid out in advance as far as that is practical. I don’t like the idea of apparently ad hoc decisions about who gets privacy protections and who does not.
[Added footnote: I should note that standing, which is used as a legal metaphor here, does not mean “is none of your business.” We often impose standing requirements to make sure the person making the decision has appropriate incentives, context, etc.
Here, the various people who A & C have allegedly spread lies about are in a much better position than I am to know the relevant facts. If they have not concluded that disclosure of A & C’s names is warranted, it’s not clear why I—or any other reader of the relevant posts—would do a better job as decisionmaker.
Another reason for limited standing is practicality. For instance, in US law, we don’t usually allow suits where the alleged harm is “I’m a taxpayer, and I paid two cents of tax toward this program I think is illegal.” You’d have hundreds of millions of people who could challenge any line item in the federal budget, and if you multiply that by how many line items someone might object to . . . Even though there is de minimis financial harm, and that’s usually enough, we’ve decided that isn’t enough where the taxpayer’s interest is basically of the same nature as every other taxpayer’s interest in the matter. The analogy here is that we don’t want 100 different people being able to unilaterally decide that A & C should be named. If 99 of them decide that maintaining anonymity is appropriate, and 1 disagrees, the odds of the 1 being correct are pretty low.]
If yes, it is hard to come up with a principled reason why Alice and Chloe can’t be named now.
I expect no one was interested in writing something about Alice and/or Chloe (A/C), by name or otherwise, before Ben’s post, and people only want to name them now because they think A/C should face consequences for falsely (they believe) alleging abuse. Which is very close to retaliating against whistleblowers, and we should be very careful, which includes maybe accepting a rule that will have some false positives.
To take a different example, my non-professional understanding is it would normally be legal for an MA employer to report their employee to immigration authorities, but if the employer did this right after the employee had filed a complaint with the attorney general’s office, even a false one, this is probably actually illegal retaliation. This will have the occasional false positive, where the employer really was going to report the employee anyway but can’t prove it, but we accept that because avoiding the harms of retaliation is more important.
Jeff—actual ‘whistleblowers’ make true and important allegations that withstand scrutiny and fact-checking. I agree that legit whistleblowers need the protection of anonymity.
But not all disgruntled ex-employees with a beef against their former bosses are whistleblowers in this sense. Many are pursuing their own retaliation strategies, often turning trivial or imagined slights into huge subjective moral outrages—and often getting credulous friends, family, journalists, or activists to support their cause and amplify their narrative.
It’s true that most EAs had never heard of ‘Alice’ or ‘Chloe’, and didn’t care about them, until they made public allegations against Nonlinear via Ben Pace’s post. And then, months later, many of us were dismayed and angry that many of their allegations turned out to be fabricated or exaggerated—harming Nonlinear, wasting thousands of hours of our time, and creating schisms within our community.
So, arguably, we have a case here of two disgruntled ex-employees retaliating against a former employer. Why should their retaliation be protected by anonymity?
Conversely, when Kat Woods debunked many of the claims of Ben Pace (someone with much more power and influence in the EA/Rationalist community), why was she not considered a ‘whistleblower’ calling out his bullying and slander?
Yet again, the gender bias in ‘moral typecasting’ becomes important, as I mentioned in a previous comment here.
So, arguably, we have a case here of two disgruntled ex-employees retaliating against a former employer. Why should their retaliation be protected by anonymity?
Highlighting that is an important crux (and one on which I have mixed feelings). Not all allegations of incorrect conduct rise to the level of “whistleblowing.” A whistleblower brings alleged misconduct on a matter of public importance to light. We grant lots of protections in furtherance of that public interest, not out of regard for the whistleblower’s private interests.
Is this a garden-variety dispute between an employer and two employees about terms of employment? Or is this a story about influential people allegedly using their power to mistreat two people who were in a vulnerable position which is of public import because it should update us on how much influence to allow those people?
In Australia, people can be, and have been, prosecuted when they blow the whistle on something commercially or otherwise sensitive (that was of major public importance!) by disclosing it publicly without completely exhausting internal whistleblowing processes. So even in cases of proper whistleblowing, countervailing factors can dominate in determining the consequences for the whistleblower.
actual ‘whistleblowers’ make true and important allegations that withstand scrutiny and fact-checking. I agree that legit whistleblowers need the protection of anonymity.
I think that’s too strong? For example, under my amateur understanding of MA law, I don’t see anything about the anti-retaliation provisions being conditional on a complaint withstanding scrutiny and fact-checking. And if this were changed to allow employers to retaliate in cases where employees’ claims were not sustained, then I think we’d see, as a chilling effect, a decrease in employees raising true claims.
I agree that requiring that the claims be sustained would have a chilling effect. However, in many contexts, we don’t extend protections to claims submitted in bad faith. For instance, we grant immunity from legal retaliation to people who file reports of child abuse . . . but that is usually conditioned on the allegations having been made in good faith. If a reported individual can prove that the report was fabricated out of whole cloth, we don’t shield the reporter from a defamation suit or other legal consequences.
Note that this is generally a subjective standard—if the reporter honestly believed the report was appropriate, we shield the reporter from liability. This doubtless allows some bad actors to slip through with immunity. However, we believe that is necessary to avoid reporters deciding not to report out of fear that someone will Monday-morning quarterback them and decide that reporting was objectively unreasonable.
In your example, I suspect that knowingly filing a false report with a state agency is a crime in MA (as it is with a federal agency at the federal level), so there is at least some potential enforcement mechanism for dealing with malicious lies.
I expect no one was interested in writing something about Alice and/or Chloe (A/C), by name or otherwise, before Ben’s post [ . . . .]
(Correctly) surmising a lack of interest in writing a hypothetical expose about A/C isn’t quite the same thing as reaching a conclusion that the post shouldn’t have been allowed to remain. However, I think there is a lot of overlap between the two; the reasons for lack of interest seem similar to the arguments for why the post shouldn’t be allowed. So I think we are both somewhere vaguely near “there would be no legitimate/plausible reason for someone to write an expose about A/C, unless one accepted that their whistleblowing activity made it legitimate.”
One interesting thing about this framing is that it raises the possibility that the whistleblowers’ identities are relevant to the decision. If a major figure in EA were going around telling malicious lies about other EAs, that would be (the subject of an appropriate post / something people would be interested in writing about) independently of any specifically whistleblowing-retaliation angle.
One could stake out an anti-standing argument in which Nonlinear et al. would not be able to identify A/C because we would be worried that vindictive or retaliatory desires were affecting their judgment, but a truly disinterested person (e.g., not friends of Nonlinear employees) could identify them—because they are likely to be acting from purer, less emotionally-invested motives (e.g., protection of the community from those they perceive as brazen liars). I’m not endorsing that view, but it is interesting to ponder.
Continuing with your analogy, if you were a random person who found out about the employee filing with the AG independently of the employer, and you somehow were able to determine that the employee knowingly filed a false report out of (e.g.) racial animus, would it be OK for you to report the employee to the immigration authorities? [Asking ethically, not under MA law.]
Relatedly, your example relates to a situation with a significant power imbalance (employer/employee). It probably isn’t illegal for me to report someone to the immigration authorities where my real motive is that they cheated on me, they cheated on my friend, etc. [I didn’t specifically check that example.] So it seems that we often protect individuals exercising socially-important functions like whistleblowing from retaliation by some actors but not others.
My use of core expressive speech was inspired by “core political speech” in U.S. First Amendment doctrine. E.g., this article describing a hierarchy of protected speech. I meant that Forum speech may be more likely to be speech about the stuff that matters, discouragement of which (including by denying pseudonymity) poses particularly great harms. Probably “high-value speech” would have been clearer here.
Solicitude is care or concern, so here I meant that we might particularly care about protecting Forum speech as opposed to other kinds of speech for some reason.
Hi Geoffrey, I think you raise a very reasonable point.
There’s some unfortunate timing at play here: 3⁄7 of the active mod team—Lizka, Toby, and JP—have been away at a CEA retreat for the past ~week, and have thus mostly been offline. In my view, we would have ideally issued a proper update by now on the earlier notice: “For the time being, please do not post personal information that would deanonymize Alice or Chloe.”
In lieu of that, I’ll instead publish one of my comments from the moderators’ Slack thread, along with some commentary. I’m hoping that this shows some of our thinking and adds to the ongoing discussion here.[1] I’m grateful to yourself, @Ivy Mazzola, @Jason, @Jeff Kaufman and others for helping push this conversation forward.
Starting context: Majority of moderators in agreement that our current policy on doxing, “Revealing someone’s real name if they are anonymous on the Forum or elsewhere on the internet is prohibited” (link), should apply in the Alice+Chloe case. (And that it should apply whether or not Alice and/or Chloe have exaggerated their allegations.)
Will (5 days ago)
I think what this comes down to for me is: If Kat Woods’ Forum username was pseudonymous, would we have taken down Ben’s post? (Or otherwise removed all references to Kat by her real name?)
If the answer to this is “yes,” then I don’t think Alice+Chloe should be deanonymized.
If the answer to this is “no,” then I think Alice+Chloe should be deanonymized.[2] (Because if we go with “no,” then this would mean that it’s fair game for Kat to go write a post now that shares information on two past employees whom she believes have spread falsehoods about Nonlinear, using these employees’ real names. Which is equivalent to deanonymizing Alice+Chloe.)
------
I’m concerned about setting a precedent of first-mover advantage. Like, I imagine there are EAs with mutual grievances out there, and if we set a precedent of, “If you strike first, anonymously, then you can name the person you bear a grievance against whilst granting yourself anonymity immunity,” then I think we’re laying the foundations for a pretty awful dynamic.
There was then commentary from a couple of the other moderators acknowledging that:
Under our current policies, it is indeed fair game for Kat to write a post sharing information on those former employees using their real names, as long as that post doesn’t refer directly to Ben’s one. Which does seem like a weird technicality.
The three broad paths forward seem to be: 1) “give the first accuser the right to remain anonymous;” 2) “ban anonymous allegations altogether;” 3) “stick with our current policies, notwithstanding that there is that technicality.[3]”
There’s since been the beginnings of a discussion on whether certain categories of anonymous allegations—notably, employees sharing information on employers—could be exceptions to the rule if the second path is taken.[4]
Closing context: At the time of me posting this comment, inter-moderator discussion on the issue is inconclusive.
I think it probably makes most sense to decide this at the policy level first, and then circle back to how to handle the Nonlinear case. Specifically, if we go with giving the first accuser anonymity rights at the policy level, then it follows that Alice and Chloe should remain anonymous. But if we decide that allegations should not be made anonymously, then we’ll have to think of what to do with the Alice+Chloe allegations already out there.
[ETA: Like, since Alice and Chloe made their allegations expecting anonymity, I believe we should give their cases special treatment (if we go the route of not allowing anonymous allegations). This could just mean fully respecting their expectation. This could mean making difficult judgement calls in weighing up, for example, Alice’s expectation for anonymity versus the magnitude of her exaggerations.]
And notwithstanding that this technicality may well put an accused party in an uncomfortable position, where they believe they have good reason for writing a response “sharing information” post, but they also know that doing so likely means receiving some backlash (for implicitly deanonymizing their accuser).
My personal view is that this rules out the third path: I don’t think a fair set of policies would force someone into this kind of a lose-lose position.
In other words, one consideration is whether the benefit of allowing anonymous whistleblowing on the Forum—where the purpose of whistleblowing is in part to compensate for employer-employee asymmetry—outweighs the cost of not having first-accuser-second-accuser symmetry.
I think what this comes down to for me is: If Kat Woods’ Forum username was pseudonymous, would we have taken down Ben’s post? (Or otherwise removed all references to Kat by her real name?)
If the answer to this is “yes,” then I don’t think Alice+Chloe should be deanonymized.
I do not like the incentive structure that this would create if adopted. Kat did not get to look at this particular drama and decide whether she wanted it discussed under a real or pseudonymous username. Her decision point was when she created her forum account however many years ago, at a time when she had no idea that this kind of drama would erupt. If this position becomes policy, then it incentivizes every person, at the time that they create a forum account, to choose a pseudonym rather than use their real name, to avoid having any unforeseeable future drama publicly associated with their real name. I think this would be bad. People in a community can’t build trust if they don’t know the identities of the people they are building trust with.
A rule that you couldn’t directly name people of moderate or greater prominence wouldn’t work well anyway. People here are awfully clever, and I’m sure one could easily write a whistleblowing piece on such a person that left very little doubt about their identity without actually saying their name or other unique identifiers. In fact, I’m not sure if Ben’s piece could have been effectively written without most of the Forum readership knowing who Alice and Chloe had worked for.
Will—thanks very much for sharing your views, and some of the discussion amongst the EA Forum moderators.
These are tricky issues, and I’m glad to see that they’re getting some serious attention, in terms of the relative costs, benefits, and risks of different possible policies.
I’m also concerned about ‘setting a precedent of first-mover advantage’. A blanket policy of first-mover (or first-accuser) anonymity would incentivize EAs to make lots of allegations before the people they’re accusing could make counter-allegations. That seems likely to create massive problems, conflicts, toxicity, and schisms within EA.
I had a bunch of thoughts on this situation, enough that I wrote them up as a post. Unfortunately your response came out while I was writing and I didn’t see it, but I think it doesn’t change much?
In addition to your three paths forward, I see a fourth one: you extend the policy to have the moderators (or another widely-trusted entity) make decisions on when there should be exceptions in cases like this, and write a bit about how you’ll make those decisions.
There may be a fifth, which could be seen as a bit of a cop-out.
It’s not clear to me whether the mods claim jurisdiction over deanonymizing conduct that doesn’t happen on the Forum. I think claiming such jurisdiction would be inappropriate.
As far as I know, it wouldn’t violate the rules of X, Facebook, or most other sites to post that “[Real Names] have been spreading malicious lies about things that happened when they were Nonlinear employees.” It certainly would not violate the rules of a state or federal court to do that in a court complaint. The alleged harm of Alice and Chloe spreading malicious lies about Kat, Emerson, and Nonlinear existed off-the-Forum prior to anything being published on the Forum. I don’t see why Ben’s act of including those allegations in a Forum post creates off-Forum obligations for Nonlinear et al. (or anyone else) that did not exist prior to Ben’s post. Alice and Chloe, and people in similar situations, have to accept that many fora exist that do not have norms against this kind of conduct.[1]
If there is no jurisdiction over off-Forum naming here, it seems that the people who want Alice and Chloe named can do so in other places, and everyone who wants to know will know soon enough. If that’s the case, I’m not sure whether—at least in these circumstances[2]—Jeff’s suggestion offers enough added value to justify the rather significant costs of the mods adjudicating this particular matter in a reasonably thorough manner. If the names can be plastered all over X and Facebook, and ~everyone who cares to know will find out that way, does it make a huge amount of difference whether or not the names are also on the Forum? Under those circumstances, declining to adjudicate because the question would be of limited practical importance would be justifiable.
I express no opinion as to whether the mods could legitimately exercise broader jurisdiction over attempting to disclose the identity of a Forum poster, where the only relevant conduct was on-Forum.
This isn’t about the Forum mods as representatives of the Forum, but instead as the most obvious trusted community members (possibly in consultation with CH) to make a decision.
What centralized adjudication avoids is each person having to make their own judgment about whether deanonymization is appropriate in a given circumstance. Let’s say NL starts posting the real names on Twitter: should I think poorly of them for breaking an important norm or is this an exception? Is that an unreasonable unilateral escalation of this dispute? Should I pressure them not to do this?
That approach certainly does offer some significant advantages, but I think it’s a lot harder to pull off. Will’s three options, the narrower version of mod discretion (which is limited to whether A/C can be named on the Forum), and my fifth option (declining to allow in this case because if people decide to name, everyone will find out whether it’s on the Forum or not) are all open to the mods because they are mods.
The possibility of a centralized adjudication that is recognized as binding in all places requires outside buy-in. I think it needs either (1) the consent of every party directly in interest or (2) the consent of Nonlinear, broad community support, and the centralized adjudicator’s willingness to either release the names themselves or allow widespread burner accounts naming them.
Option (1) is basically arbitration on the consent of the parties; they would be free to choose the mods, Qualy, a poll, or a coin flip. Alice and Chloe would consent to being named if the arbitrators so chose, and Nonlinear would agree not to name them if the arbitrators ruled against it.[1] If the arbitrators rule for naming, no one should judge Nonlinear, because it would have named Alice and Chloe with their consent. If they rule against naming and Nonlinear did it anyway, everyone should judge Nonlinear for breaking its agreement. And there’s a strong argument, to my mind, that we bystanders should honor a resolution reached by those directly involved.
But reaching an agreement to arbitrate may be challenging. A rational party would not consent to arbitrate unless it concluded its interests were expected to be better off under arbitration than the counterfactual. Settlements can be mutually beneficial, but I am not yet convinced arbitration would be in Alice and Chloe’s interests. So long as a substantial fraction of the community would judge Nonlinear for naming, it probably will not do so. So the status quo for Alice and Chloe would be a win vs. an uncertain future in arbitration.
The other, less certain option is that Nonlinear and the significant majority of the community consented to abide by the arbitration result. Even here, there is a risk that the arbitration process may become a no-win scenario for Nonlinear. If enough community members reserve their right to independently and adversely judge Nonlinear for naming, then it is in a pickle even if it “wins” the arbitration.
A possible workaround might be either that the arbitration panel itself will release the names if it rules against Alice and Chloe, or that it will allow untraceable anonymous posters to flood the Forum with their names. In other words, if Alice and Chloe do not consent, and there is a contingent of anti-namers in the community, then any blowback for releasing needs to fall on someone other than Nonlinear.
The possibility of a centralized adjudication that is recognized as binding in all places requires outside buy-in.
I think you might be thinking too formally? We sometimes have things that work because we decide to respect an authority that doesn’t have any formal power. If you make a film you don’t have to submit it to the MPAA to get a rating, and if you run a theater you don’t have to follow MPAA ratings in deciding whether someone is mature enough to be let into an R-rated movie, but everyone just goes along with the system.
I’m imagining that the Forum mods would make a decision for the Forum, and then we’d just go along with it voluntarily even off the Forum, as long as they kept making reasonable decisions.
I’m not seeing any real consensus on what standard to apply for deanonymizing someone. I think a voluntary deference model is much easier when such a consensus exists. If you’re on board with the basic decision standard, it’s easier to defer even when you disagree with the application in a specific case. In sports, the referees usually get the call right, and errors are evenly distributed between your team and your opponents. But if you fundamentally disagree with the decision standard, the calls will go systematically against your viewpoint. That’s much harder to defer to, and people obviously have very strong feelings on either side.
I don’t think the MPAA is a great analog here. I’d submit that the MPAA has designed its system carefully in light of the wholly advisory nature of its rulings. Placing things on a five-point continuum helps. I think only a small fraction of users would disagree more than one rating up/down from where the MPAA lands. So rarely would an end user completely disagree with the MPAA outcome. Where an end user knows that the MPAA grades more harshly/leniently than they do, the user can mentally adjust accordingly (as they might when they learn how many Harvard College students get 4.0s).
And it’s easy for end users to practically opt out of the MPAA system without any real social sanction; if a theater owner decides to admit ten-year-olds to R-rated movies with a signed parental consent, that is really none of my business. If a parent decides to take a seven-year-old to one, that is also none of my business as long as the child is non-disruptive. The MPAA system is resilient to 20-30% of the population opting out, if they so chose.[1]
So I don’t think the features that make the MPAA system workable as a voluntary-deference system are likely to transfer over well to this context.
It’s harder for filmmakers to opt out—but they also got a lot out of the system, too. Mild self-regulation is preferable to government regulation, especially back in days when the First Amendment was not enforced with the same rigor it is today.
Jeff—thanks very much for sharing the link to that post. I encourage others to read it—it’s fairly short. It nicely sets out some of the difficulties around anonymity, doxxing, accusations, counter-accusations, etc.
I can’t offer any brilliant solutions to these issues, but I am glad to see that the risks of false or exaggerated allegations are getting some serious attention.
I wouldn’t classify Ben’s post as containing fully anonymous allegations. There was a named community member who implicitly vouched for the allegations having enough substance to lay before the Forum community. That means there was someone in a position to accept social and legal fallout if the decision to post those allegations is proven to have been foolhardy. That seems to be a substantial safeguard against the posting of spurious nonsense.
Maybe having such a person identified didn’t work out here, but I think it’s worth distinguishing between this case and a truly anonymous situation (e.g., a burner account registered with a throwaway email address, doing business via Tor, with low likelihood that even the legal system could identify the actual poster for imposition of consequences).
And notwithstanding that this technicality might put the accused party in an uncomfortable position, where they believe they have good reason for writing a response “sharing information” post, but they also know that doing so will likely make them the target of some severe backlash for implicitly deanonymizing their accuser.
That could be a feature rather than a bug for reasons similar to those described above. Deanonymizing someone who claims to be a whistleblower is a big deal—and arguably we should require an identified poster to accept the potential social and legal fallout if that decision wasn’t warranted, as a way of discouraging inappropriate deanonymization.
PS For the people downvoting and disagree-voting on my comment here:
I raised some awkward questions, without offering any answers, conclusions, or recommendations.
Are you disagreeing that it’s even legitimate to raise any issues about the ethics of ‘whistleblower’ anonymity in cases of potential false allegations?
I’d really like to understand what you’re disagreeing about here.
I think the questions you’re raising are important. I got kind of triggered by the issue I pointed out (and the fact that it’s something that has already been discussed in the comments of the other post), so I downvoted the comment overall. (Also, just because Chloe is currently anonymous doesn’t mean it’s risk-free to imply misleading and damaging things about her – anonymity can be fragile.)
There were many parts of your comment that I agree with. I agree that we probably shouldn’t have a norm that guarantees anonymity unconditionally. (But the anonymity protection needs to be strong enough that, if someone temporarily riles up public sentiment against the whistleblowers, people won’t jump to de-anonymizing [or to other, perhaps more targeted/discreet appropriate measures, such as the one suggested by Ivy here] too quickly; instead, the process there should be diligent and fair as well, just like an initial investigation prompted by the whistleblowers should be.) (Not saying that this contradicts any of what you were suggesting!)
When things get heated and people downvote each other’s comments, it might be good to focus on things we do (probably) agree on. As I said on the LessWrong version of this post:
Here is the list of values that are important to me about this whole affair and context:
I want whistleblower-type stuff to come to light because I think the damage bad leaders can do is often very large
I want investigations to be fair. In many cases, this means giving accused parties time to respond
I understand that there’s a phenotype of personality where someone has a habit of bad-talking others through false/misleading/distorted claims, and I think investigations (and analysis) should be aware of that
(FWIW, I assume that most people who vehemently disagree with me about some of the things I say in this comment and elsewhere would still endorse these above values.)
I raised some awkward questions, without offering any answers, conclusions, or recommendations.
I don’t feel like you raised discussion with no preference for what the community decided. When I gave my answer, which many people seem to agree with, your response was to question whether that’s REALLY what the EA community wants. I think it’s a bit disingenuous to suggest that you’re just asking a question when you clearly have a preference for how people answer!
Short answer: I think Ben should defer to the community health team as to whether to reveal identities to them or not (I’m guessing they know). And probably the community health team should take their names and add them to their list where orgs can ask CH about any potential hires and learn of red flags in their past. I think Alice should def be included on that list, and Chloe should maybe be included (that’s the part I’d let the CH team decide, if it was bad enough). It’s possible Alice should be revealed publicly, or maybe just revealed to community organizers in their locale, letting them make the decision of how they want to handle Alice’s event attendance and use of community resources.
Extra answer: FWIW I already have bad feelings about CEA’s hardcore commitment to anonymity. I do feel EA is too hard on that side, where for example, people accused of things probably won’t even be told the names of their accusers or any potentially-identifying details that make accusations less vague. The only reason NL knew in this situation is because the details make it unavoidable that they’d know. But otherwise the standard across EA is that if you are accused of something via CEA’s community health team, you will never know who makes the critiques, and therefore will never be able to properly rebut them. And if that is how this had gone down, it would have been Kat and Emerson getting their names on CEA’s blacklist. I think this default justice system of this community is messed up, and there should be some exceptions in theory.
That said, I’m not sure this is one of those exceptions. I feel kind of conflicted about revealing their names even in this extreme case, because I have also had a weird employment experience in EA with an atypical structure, after which I felt emotionally wounded, and if someone had come to me and said “We suspect your ex-employer may be unethical and causing repeated serious harms! So I am asking you and others for all the details they can remember that made you feel bad in that situation” there is some likelihood I would share my experiences, rather than say my actual thought-through POV which is “No I actually don’t blame or hold ill will to my ex-employers and I won’t participate in any investigation against them. It was a complex situation and we all made mistakes”. In most cases I would not “sing” per se, but in some universes I surely would. Just because I was asked. Incidentally this parallels how I feel about the women in the sexual misconduct TIME piece too. It’s gotta be hard to not add to a (maybe false but also maybe true) narrative when you are literally asked to and commended for doing so? Most of us trust people who put themselves in that sort of investigator position.
I’m sure many readers who are shocked at Alice or Chloe now would, if asked, start singing if they had been in their position, thinking they are doing something useful as they are being so encouraged, after all, even if at first they felt conflicted. Even if they had felt conflicted about it for a year prior. Even if they were almost over it, and their mental health and humility were improving, wouldn’t being suddenly told “no you really might be a victim, please share, in fact I’m talking to another victim too” sort of... undo that? Especially if they are working on mental health problems, I imagine it would feel soo temptingly cathartic and be hard to say no to a narrative that just might make you the hero, just might help someone else, for everything else you feel you lost. Does that make it their fault that they bit the bait? Or is it more the “fault” (or simply, “result”) of how the investigation was done?
I do agree that orgs should be able to factor this whole debacle into their decision if they end up about to hire Alice or even Chloe. CEA’s private list should mostly solve that problem. But my opinion on revealing names publicly beyond that hinges a lot on how the investigation kicked off. How much did Ben have to search and push to get them to say all that? What framing did he use? How much were they slandering NL before Ben even reached out, and was it about egregious or very important things? Was NL’s reputation unfairly damaged by Alice or Chloe before this investigation even kicked off? I would like to hear more of his thoughts on that... with no judgement toward him for the past, because it is so critical that the community come together and discuss honestly and collaboratively what happened here.
Ivy—I really appreciate your long, thoughtful comment here. It’s exactly the sort of discussion I was hoping to spark.
I resonate with many of your conflicted feelings about these ethically complicated situations, given the many ‘stakeholders’ involved and the many ways we could get our policies wrong.
My understanding is that Kat and Emerson did in fact get their names on CEA’s blacklist to some extent.
Here is the bigger problem I see with your proposed solution. If an employer reviewing an application from Alice or Chloe believes their side of this, then the employer would not factor in the fact of their presence on CEA’s blacklist, since the employer, by hypothesis, thinks CEA was mistaken to put them there. If, on the other hand, an employer reviewing an application from Alice or Chloe believes Nonlinear’s side of this, then the employer may justifiably look at the fact that CEA erred by having blacklisted Kat and Emerson and choose not to consult CEA in their hiring decisions at all, and therefore not discover that their applicant was Alice or Chloe. Either way, CEA blacklisting Alice and Chloe seems ineffective.
There are some references here to the community health team’s practices that we think aren’t fully accurate. You can see more here about how we typically handle situations where we hear an accusation (or multiple accusations) and don’t have permission to discuss it with the accused.
Sorry, but I have (re)read that link and I don’t see how anything we said was in conflict. Perhaps I didn’t word it well. Or am I misunderstanding you? If you could give some hard numbers (like: only X% of complaints end up being handled anonymously; of those, in Z% the complaints end up being unactionable and we just give a listening ear; and only in Y% do anonymous complaints end up being held against the person and meaningfully affecting their lives), then maybe I can agree that I made the extent of the dilemma sound overblown. I’m also aware that other tactics come with their own dilemmas. I just wanted to acknowledge that there is a dilemma, and that I am not a “never deanonymize” type of person, before I made some other points.
Reading your link, I felt it was not in conflict because: in the case where many people give complaints about Steve, not a single person was willing to have their concerns discussed in detail with him (out of fear that details would reveal them, I suppose), let alone be deanonymized by name. So it does sound like EAs like to make complaints in (what I’d call) “extreme anonymity” by “default”, and tbh that matches my social and cultural model of EAs. And in the next section you say that your policy is to be even more protective of confidentiality than some communities like universities. And you do make some decisions based on those things you might never fully discuss with the other party. You call them “compromises”, but some are major reactions which could be EA-career-ending. Actually, I find it hard to think of what other, worse actions remain, other than calling the police, writing a public expose about them, or messaging their employer out of the blue. So I don’t think it is going too far to say that maybe CEA could be too protective of anonymity, as you acknowledge your behavior can, at least sometimes, be abnormal or counter to what people would expect in other institutions.
In my view, it might be one of those cases where general society or others landed on the right institutional practices, but we EAs are naive in our tradeoff considerations by trying to use different systems, or by drawing the line on deanonymizing at different degrees. I don’t think this is a bold possibility. I expect you disagree with the idea that CH could be too protective of anonymity. Maybe most EAs would. But it’s a natural possibility to look at, and we shouldn’t avert our eyes from it. That’s all I wanted to say.
I’d also like to clarify that I was not trying to be harsh on CH and drag you all in with what I wrote. These are hard problems. I was merely trying to write an introduction that took seriously, and related to, the feelings of people who do want Alice and Chloe doxxed, to show that I understand and sympathize with that perspective very much, and then to go from there to discuss why I wouldn’t be in favor of doxxing even in this case that so many are shocked by. I am mostly bullish on the CH team, which is why, in my “short answer” section, I claimed that EAs should mostly defer to the CH team on this issue.
Hm, I guess that’s true. I thought it went without saying that it would be when people want anonymity; I didn’t imagine there could be an alternative where CH removes names even if the complainant doesn’t request it. That would indeed be worse, and a true “default”, and I hope no one took that as what I meant.
But I think CH asks complainants what degree of anonymity and detail-sharing they are comfortable with by default. And I think a lot of people ask them not to give details, and by default CH does defer to that preference, to what might be an abnormal extent, such that anonymity may be functionally the default in our culture and their dealings. But yeah, I guess I wonder about hard numbers. It is striking to me that not one person was willing to have the details of the incident shared with Steve, though.
I assumed the mock-incident was just meant to illustrate how it might arise that someone doesn’t get full information, and it’s easier to get that point across if you have it as everyone requesting anonymity.
On the real world point, I do agree that if what happens is something like ‘CEA: do you want anonymity? Complainant: uh sure, might as well’, then that seems suboptimal. Though I’m not sure I could come up with any system that’s better overall.
Fair, that is a mock incident, but I don’t see that aspect as being dramatized or anything. Fwiw I have known multiple people whose experiences basically matched Steve’s.
I just think if we are going to talk about doxxing Alice and Chloe, we might want to think about what it might look like if they had gone elsewhere, or what it might look like in the future if they unduly report others. And as a community, I think it must be reckoned with why some people feel upset right now at the protection that reporters enjoy when the accused get so few protections, not even the protection of knowing the details of the claims against them. And a cultural standard where the names of people who make provably false accusations are revealed could protect all of us. So I think it is worth reckoning with, even though I came out supporting non-doxxing in this case.
I think it’s important to separate out how CH handled the allegations vs how Ben did. IMO CH’s actions (banning presenting at EAG but not attending, recommending a contract be used) were quite measured, and of a completely different magnitude than making public anonymous allegations. And I think this whole situation would have been significantly improved if Ben had adopted CEA’s policy of not taking further actions if restrictions are requested.
I’ll respond to one aspect you raised that I think might be more significant than you realize. I’ll paint a black and white picture just for brevity.
If you’re running organizations and do so for several years with dozens of employees across time, you will make poor hiring decisions at one time or another. While making a bad hire seems bad, avoiding this risk at all costs is probably a far inferior strategy. If making a bad hire doesn’t get in the way of success and doing good, does it even make sense to fixate on it?
Also, if you’re blind to the signs before it happens, then you reap the consequences, learn an expensive lesson, and are less likely to make the same mistake in the future, at least for that type of deficit in judgment. Sometimes the signs are obvious after you’ve made an error, though occasionally the signs are so well hidden that anyone with better judgment than you could still have made the same mistake.
The underlying theme I’m getting at is that embracing mistakes and imperfection is instrumental. Although many EAs might wish that we could all just get hard things right the first time all the time, that’s not realistic. We’re flawed human beings and respecting the fact of our limitations is far more practical than giving into fear and anxiety about not having ultimate control and predictability. If anything, being willing to make mistakes is both rational and productive compared to other alternatives.
Victor—this is total victim-blaming. Good people trying to hire good workers for their organizations can be exploited and ruined by bad employees, just as much as good employees can be exploited and ruined by bad employers.
You said ‘If making a bad hire doesn’t get in the way of success and doing good, does it even make sense to fixate on it?’
Well, we’ve just seen an example of two very bad hires (‘Alice’ and ‘Chloe’) almost ruin an organization permanently. They very much got in the way of success and doing good. I would not wish their personalities on any other employers. Why would you?
We shouldn’t ‘embrace mistakes’ if we can avoid them. And keeping bad workers anonymous is a way of passing along those hiring mistakes to other future employers without any consideration for the suffering and chaos that those bad workers are likely to impose, yet again.
What I think I’m hearing from you (and please correct me if I’m not hearing you) is that you feel conflicted by the thought that the efforts of good people with good intentions can so easily be undone, and that you wish there were some concrete ways to prevent this happening to organizations, both individually and systemically. I hear you on thinking about how things could work better as a system/process/community in this context. (My response won’t go into this systems level, not because it’s not important, but because I don’t have anything useful to offer you right now.)
I acknowledge your two examples (“Alice and Chloe almost ruined an organization” and “keeping bad workers anonymous has negative consequences”). I’m not trying to dispute these or convince you that you’re wrong. What I am trying to highlight is that there is a way to think about them that doesn’t require us never to make small mistakes with big consequences. I’m talking about a mindset, which isn’t a matter of right or wrong, but simply a mental model that one can choose to apply.
I’m asking you to set aside being right, and whatever perspective you think I hold, for a moment, and do a thought experiment for 60 seconds. At t=0, it looks like ex-employee A, with some influential help, managed to inspire significant online backlash against organization X, led by well-intentioned employer Z. It could easily look like Z’s project is done, their reputation forever tarnished, their options severely constrained. Z might well feel that way themselves. Z is a person with good intentions, conviction, strong ambitions, interpersonal skills, and a good work ethic. Suppose that organization X got dismantled at t=1 year. Imagine Z’s “default trajectory” extending to t=2 years. What is Z up to now? Do you think they still feel exactly the way they did at t=0? At t=10, is Z successful? Did the events of t=0 really ruin their potential at the time? At t=40, what might Z say recalling the events of t=0, and how much did those events impact their overall life? Did t=0 define their whole life? Did it definitely lead to a worse career path, or did adaptation lead to something unexpectedly better? Could they say with certainty that their overall life and value satisfaction would have been better if t=0 had never played out that way? In the grand scheme of things, how much did the t=0 feeling that “Z’s life is almost ruined” translate into reality?
If you entertained this thought experiment, thank you for being open to doing so.
To express my opinion plainly: good and bad events are inevitable, and Z will inevitably make mistakes with negative consequences as part of their ambitious journey through life. Is it in Z’s best interests to avoid making obvious mistakes? Yes. Is it in their best interests to adopt a strategy so robust that they would never have fallen victim to the t=0 events, or to similarly “bad” events at any other point? Not necessarily, I think, because: we don’t know without long-term hindsight whether a “traumatic” event like t=0 leads to net positive changes or not; even if Z somehow became mistake-proof without being perfect, that doesn’t mean something as significant as t=0 couldn’t still happen to them without their making a mistake; and lastly, being that robust is practically impossible for most people. All this to say: without knowing whether “things like t=0” are unequivocally bad to ever let happen, I think it’s more empowering to be curious about what we can learn from t=0 than to conclude at t<1 that preventing it is both necessary and good.
Victor—thanks for elaborating on your views, and developing this sort of ‘career longtermist’ thought experiment. I did it, and did take it seriously.
However.
I’ve known many, many academics, researchers, writers, etc. who have been ‘cancelled’ by online mobs that made mountains out of molehills. In many cases, the reputations, careers, and prospects of the cancelled people are ruined. Which is, of course, the whole point of cancelling them—to silence them, to ostracize them, and to keep them from having any public influence.
In some cases, the cancelled people bounce back, or pivot, or pursue other interests. But in most cases, the cancellation is simply a tragedy, a huge setback, a ruinous misfortune, and a serious waste of their talents and potential.
Sometimes there’s a silver lining to their being cancelled, bullied, and ostracized, but mostly not. Bad things can happen to good people, and the good people do not always recover.
So, I think it’s very important for EA to consider the many serious costs and risks we would face if we don’t take seriously the challenge of minimizing false allegations against EA organizations and EA people.
Thanks for entertaining my thought experiment. I’m glad you did, because I now better understand your perspective too, and I think I’m in full agreement with your response.
A shift of topic here; feel free not to engage if this doesn’t interest you.
To share some vague thoughts about how things could be different: I think posts that are structurally equivalent to a hit piece could be considered against the Forum rules, either implicitly already or explicitly. Moderators could then intervene before most of the damage is done. I think policing this isn’t as subjective as one might fear, and that certain criteria can be checked without any assumptions about truthfulness or intentions. Maybe an LLM could work for flagging high-risk posts for moderators to review (a rough sketch below).
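To make the LLM idea slightly more concrete, here is a minimal sketch, purely as an illustration, of what such a flagging pass could look like. Assumptions: `call_llm` is a hypothetical placeholder rather than any real API, and the criteria are examples of the kind of structural checks I have in mind, not a vetted list.

```python
# Minimal sketch of an LLM-assisted pre-moderation pass. Everything here is
# hypothetical: call_llm() stands in for whatever model API moderators might
# actually use, and the criteria are illustrative, not an agreed standard.

CRITERIA = [
    "make serious accusations against named or identifiable people",
    "rely mainly on anonymous or pseudonymous sources",
    "give no indication that the accused were invited to respond",
]


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call; returns 'yes' or 'no'."""
    raise NotImplementedError


def flag_for_review(post_text: str) -> list[str]:
    """Return the criteria that a draft post appears to meet.

    Note: these checks are about the structure of the post only; they assume
    nothing about whether its allegations are true or made in good faith.
    """
    hits = []
    for criterion in CRITERIA:
        prompt = (
            f"Answer 'yes' or 'no'. Does the following post {criterion}?"
            f"\n\n---\n{post_text}"
        )
        if call_llm(prompt).strip().lower().startswith("yes"):
            hits.append(criterion)
    return hits


# A draft matching two or more criteria might be held for human review
# rather than published immediately.
```

The design point is that every check concerns structure rather than truth, so the model never has to judge whether the allegations are accurate, and moderators remain the decision-makers.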
Another angle would be to try to shape discussion norms or attitudes. There might not be a reliable way to influence this space, but one could try, for example, by providing material that better equips readers to have good online discussions in general and to recognize unhelpful or manipulative writing. It could become a popular staple, much as “Replacing Guilt” is, I think, very well regarded. Funnily enough, I have been collating a list of green/orange/red flags in online discussions for other educational reasons.
“Attitudes” might be far too subjective and varied to shape, whereas I believe “good discussion norms” can be presented in a concrete way that isn’t inflexibly limiting. NVC (Nonviolent Communication) comes to mind as a concrete framework, and I am of the opinion that the original “sharing information” post can be considered violent communication.
I’ve just partly read and partly skim-read that post for the first time. I do suspect the post would be ineligible under a hypothetical “no hit pieces under duck typing” rule. I’ll refer to posts like this as DTHPs (duck-typed hit pieces) to express my view more generally. (I have no comment on whether it “should” have been allowed or disallowed in the past, or on what the past or current Forum standards are.)
I’ve not thought much about this, but the direction of my current view is that there are more constructive ways of expression than DTHPs, and here I’ll vaguely describe three alternatives that I suspect would be more useful. By useful I mean that these alternatives potentially promote better social outcomes within the community, while hopefully not significantly undermining desirable practical outcomes such as a shift in funding or priorities.
1. If nothing else, add emotional honesty to the framing of a DTHP. A DTHP becomes more constructive, and less prone to inspire reader bias, when it is introduced with a clear and honest statement of the needs, feelings, and requests of the main author. Maybe two out of three is a good enough bar. I’m inclined to think the NL DTHP failed spectacularly at this.

2. Post a personal invitation for relevant individuals to learn more. Something like: “I believe org X is operating in an undesirable way, and would urge funders who might otherwise consider donating to X to consider carefully. If you’re in this category, I’m happy to have a one-on-one call and to share my reasons why I don’t encourage donating to X.” (And during the one-on-one, you can allude to the mountain of evidence you’ve gathered and let the person decide whether they want to see it.)

3. Find ways to skirt around what makes a DTHP a DTHP. A simple alternative, such as posting a DTHP verbatim to one’s personal blog and then only sharing or linking to it with people on a personal level, is already incrementally less socially harmful than posting it to the Forum.
Option 4 is we find some wonderful non-DTHP framework/template for expressing these types of concerns. I don’t know what that would look like.
These are suggestions for a potential writer. I haven’t attempted to provide community-level suggestions here which could be a thing.
I’m biased since I worked on that post, but I think of it as very carefully done and strongly beneficial in its effect, and I think it would be quite bad if similar ones were not allowed on the forum. So I see your proposed DTHP rule as not really capturing what we care about: if a post shares a lot of negative information, as long as it is appropriately fair and careful I think it can be quite a positive contribution here.
I appreciate your perspective, and FWIW I have no immediate concerns about the accuracy of your investigation or the wording of your post.
Correct me if I’m wrong: you would like any proposed change in rules or norms to still support what you tried to achieve in that post, which is to provide accurate information, presented fairly, that hopefully leads people to update in a way that produces better decision-making.
I support this. I agree that it’s important to have some kind of channel for addressing the kinds of concerns you raised, and I probably would have seen your post as a positive contribution (had I read it and been a part of EA back then; I’m not aware of the full context). At the same time, I’m saying that posts like yours could have even better outcomes with a little additional effort and adjustment in the writing.
I encourage you to think of my proposed alternatives not as blockers to this kind of positive contribution; that is not their intended purpose. As an example, if a DTHP rule allows DTHPs but requires a compulsory disclosure at the top addressing the relevant needs, feelings, and requests of the writer, I don’t think this particularly bars contributions from happening. I think it would also serve to 1) save time for the writer, by prompting reflection on their underlying purpose for writing, and 2) dampen certain harmful biases that a reader is likely to experience with a traditional hit piece.
If such a rule existed back then, presumably you would have taken it into account during writing. If you visualize what you would have done in that situation, do you think the rule would have negatively impacted 1) what you set out to express in your post and 2) the downstream effects of your post?
Whistleblower anonymity should remain protected in the vast majority of situations, including this one, imo
In these kinds of cases, the motive will often be psychological. People in this position could be motivated by altruistic motives (e.g., a desire for others not to experience the same things they believe they did) or non-altruistic motives (e.g., a hope that the community will roast people who the pseudonymous individuals believe did them wrong). In the former case, a default norm of respecting pseudonymity is important. Altruistic whistleblowers aren’t getting much out of it themselves (and are already devoting a lot of time and stress to the communal good).
-13 karma from 5 votes for a comment that doesn’t seem to break any Forum norms? Odd
Even if the whistleblowers seem to be making serial false allegations against former employers?
Does EA really want to be a community where people can make false allegations with total impunity and no accountability?
Doesn’t that incentivize false allegations?
Has there been a suggestion that Chloe has made serial false allegations against former employers? I thought that was only Alice.
There’s a unilateralist’s curse issue here—if there are (say) 100 people who know the identities of Alice and Chloe, does it take only one of them deciding that breaching the pseudonyms would be justified?
[Edit to add: I think the questions Geoffrey is asking are worthwhile ones to ask. I am just struggling to see how an appropriate decision to unmask could be made given the community’s structure without creating this problem. I don’t see a principled basis for declaring that, e.g., CHSP can legitimately decide to unmask but everyone else had better not.]
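To put rough numbers on the unilateralist’s curse worry, here is a purely illustrative back-of-envelope sketch, under my own simplifying assumption (not anything established in this thread) that each person who knows the identities misjudges the case independently with the same small probability:

```python
# Back-of-envelope illustration of the unilateralist's curse: if each of n
# people who know the identities would wrongly decide to unmask with
# independent probability p, the chance that at least one of them does so
# grows quickly with n. (Independence is a simplifying assumption.)

def p_anyone_unmasks(n: int, p: float) -> float:
    """Probability that at least one of n independent actors decides to act."""
    return 1 - (1 - p) ** n

for n in (1, 10, 100):
    print(n, round(p_anyone_unmasks(n, p=0.01), 3))
# Prints: 1 0.01, then 10 0.096, then 100 0.634. Even if each individual is
# 99% reliable, with 100 independent decision-makers an unmasking becomes
# more likely than not.
```

The exact numbers are beside the point; what matters is that the chance of at least one unilateral unmasking grows rapidly with the number of people empowered to decide on their own.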
I continue to think that something went wrong for people to come away with takes that lump together Alice and Chloe in these ways.
Not because I’m convinced that Alice is as bad as Nonlinear makes it sound, but because, even on Nonlinear’s own account, Chloe is portrayed as having had a poor reaction to a specific employment situation, and (unlike Alice) not as having a general pattern/history of making false/misleading claims. That difference matters immensely for whether it’s appropriate to warn future potential employers. (Besides, when I directly compare Chloe’s writings to Nonlinear’s, I find it more likely that they’re unfair towards her than vice versa.)
FWIW, I’m not saying that coming away with this interpretation is all your fault. If someone is only skim-reading Nonlinear’s post, then I can see why they might form similarly negative views about both Alice and Chloe (though, on close reading, it’s apparent that Nonlinear, too, would agree there’s a difference). My point is that this is more a feature of their black-and-white counterattack narrative than an accurate reflection of what I think most likely happened.
Lukas—I guess one disadvantage of pseudonyms like ‘Alice’ and ‘Chloe’ is that it’s quite difficult for outsiders who don’t know their real identities to distinguish between them very clearly—especially if their stories get very intertwined.
If we can’t attach real faces and names to the allegations, and we can’t connect their pseudonyms to any other real-world information about them, such as LinkedIn profiles, web pages, EA Forum posts, etc., then it’s much harder to remember who’s who, and to assess their relative degrees of reliability or culpability.
That’s just how the psychology of ‘person perception’ works. The richer the information we have about people (eg real names, faces, profiles, backgrounds), the easier it is to remember them accurately, distinguish between their actions, and differentiate their stories.
You’re right about the effort involved, but when these are real people who you are discussing deanonymizing in order to try to stop them from getting jobs, you should make the effort.
Well, all three key figures at Nonlinear are also real people, and they were deanonymized by Ben Pace’s highly critical post, which had the likely effect (unless challenged) of stopping Nonlinear from doing its work, and of stigmatizing its leaders.
So, I don’t understand the double standard, where those subject to false allegations don’t enjoy anonymity, and those making the false allegations do get to enjoy anonymity.
I don’t think everyone in the replies was arguing both that Ben’s initial post was okay and that deanonymizing Alice and/or Chloe would be bad (which I think you would call a double standard; I’m not commenting on that right now). Some probably were, but others probably think that Ben’s initial post was bad, that deanonymizing Alice and/or Chloe would also be bad, and that we shouldn’t try to correct one bad with another bad, which doesn’t look like a double standard to me.
A quick reminder that moderators have asked, at least for the time being, to please not post personal information that would deanonymize Alice or Chloe.
Lorenzo—yes, I’m complying with that request.
I’m just puzzled about the apparent double standard where the first people to make allegations enjoy privacy & anonymity (even if their allegations seem to be largely false or exaggerated), but the people they’re accusing don’t enjoy the same privilege.
I agree that the Forum’s rules and norms on privacy protection are confused. A few observations:
(1) Suppose a universe in which the first post on this topic had been from Nonlinear, and had accused Alice and Chloe (by their real names) of a pattern of mendaciously spreading lies about Nonlinear. Would that post have been allowed to stay up? If yes, it is hard to come up with a principled reason why Alice and Chloe can’t be named now.
If no, we would need to think about why this hypothetical post would have been disallowed. The best argument I came up with would be that Alice and Chloe are, as far as I know, people with no real prominence/influence/power (“PIP”) within or without EA. Under this argument, there is a greater public interest in the actions of those with PIP, and accepting a role of PIP necessarily means sacrificing some of the privacy rights that non-PIPs get.
(2) Another possibility involves the idea of standing. Under this theory, Alice and Chloe had standing to name people at Nonlinear because they were the ones who allegedly experienced harm. Ben had derivative standing because Alice and Chloe had given him permission to share their stories on the Forum. Under this theory, Nonlinear (and individuals named in Ben’s post) would have standing to name Alice and Chloe as the parties allegedly harmed by their conduct. The rest of us would not have standing. [Edit to add footnote here.[1]]
Maybe the aggrieved party isn’t the best imaginable arbiter of whether someone should be unmasked, but it’s probably better than concluding that each and every one of us gets to make that decision. That could more easily lead to a situation in which 99% of us agree that the names shouldn’t be disclosed, and the 1% gets to decide. Of course, that could happen under a standing theory as well, but the 1% minority opinion would have to coincide with the list of parties with standing. That at least narrows the problem.
(3) This situation involves a mix of Forum and non-Forum conduct. I sense that the norm against doxxing people is stronger for Forum conduct (or at least for conduct involving similar sorts of speech) than for non-Forum conduct. This may reflect a difference between the norms of Internet communities and physically-grounded communities, or a sense that Forum speech is often core expressive speech or otherwise entitled to extra solicitude.
In any event, based on the fact that a number of people seem to know their identities, it sounds like the tale of Alice and Chloe was not a closely-kept secret before Ben’s post went up. Presumably this information came, directly or indirectly, from Alice and Chloe. In other words, there has been potentially relevant non-Forum conduct that did not magically morph into Forum conduct merely because Ben decided to write a post sharing the same information.
That raises the following hypothetical. Let’s say that Dan allegedly made some homophobic, racist, sexist, and/or otherwise offensive comments.[2] Does the location in which the comments were made change whether he can be identified? For example, is it OK to write a Forum post identifying Dan by name if those comments were posted on the Forum under a pseudonym? What if they were said at an EAG afterparty? In a non-EA space on Reddit? With a mask on in public (by someone who recognized Dan’s voice)? If the answers are not the same, why is that the case? I don’t claim to know the answers, by the way; this isn’t a trick question.
(4) I do not at present have a clear opinion on whether naming Alice, Chloe, or both should be allowed on the Forum. I do have an opinion that there needs to be a clear, logical set of rules or at least principles—ideally, laid out in advance as far as that is practical. I don’t like the idea of apparently ad hoc decisions about who gets privacy protections and who does not.
[Added footnote: I should note that standing, which is used as a legal metaphor here, does not mean “is none of your business.” We often impose standing requirements to make sure the person making the decision has appropriate incentives, context, etc.
Here, the various people who A & C have allegedly spread lies about are in a much better position than I am to know the relevant facts. If they have not concluded that disclosure of A & C’s names is warranted, it’s not clear why I—or any other reader of the relevant posts—would do a better job as decisionmaker.
Another reason for limited standing is practicality. For instance, in US law, we don’t usually allow suits where the alleged harm is “I’m a taxpayer, and I paid two cents of tax toward this program I think is illegal.” You’d have hundreds of millions of people who could challenge any line item in the federal budget, and if you multiply that by how many line items someone might object to . . . Even though there is de minimis financial harm, and that’s usually enough, we’ve decided that isn’t enough where the taxpayer’s interest is basically of the same nature as every other taxpayer’s interest in the matter. The analogy here is that we don’t want 100 different people being able to unilaterally decide that A & C should be named. If 99 of them decide that maintaining anonymity is appropriate, and 1 disagrees, the odds of the 1 being correct are pretty low.]
This is not to equate Dan’s hypothetical conduct to what Alice and Chloe are alleged to have done.
I expect no one was interested in writing something about Alice and/or Chloe (A/C), by name or otherwise, before Ben’s post, and people only want to name them now because they think A/C should face consequences for falsely (they believe) alleging abuse. That is very close to retaliating against whistleblowers, so we should be very careful, which may include accepting a rule that will have some false positives.
To take a different example, my non-professional understanding is it would normally be legal for an MA employer to report their employee to immigration authorities, but if the employer did this right after the employee had filed a complaint with the attorney general’s office, even a false one, this is probably actually illegal retaliation. This will have the occasional false positive, where the employer really was going to report the employee anyway but can’t prove it, but we accept that because avoiding the harms of retaliation is more important.
Jeff—actual ‘whistleblowers’ make true and important allegations that withstand scrutiny and fact-checking. I agree that legit whistleblowers need the protection of anonymity.
But not all disgruntled ex-employees with a beef against their former bosses are whistleblowers in this sense. Many are pursuing their own retaliation strategies, often turning trivial or imagined slights into huge subjective moral outrages—and often getting credulous friends, family, journalists, or activists to support their cause and amplify their narrative.
It’s true that most EAs had never heard of ‘Alice’ or ‘Chloe’, and didn’t care about them, until they made public allegations against Nonlinear via Ben Pace’s post. And then, months later, many of us were dismayed and angry that many of their allegations turned out to be fabricated or exaggerated—harming Nonlinear, wasting thousands of hours of our time, and creating schisms within our community.
So, arguably, we have a case here of two disgruntled ex-employees retaliating against a former employer. Why should their retaliation be protected by anonymity?
Conversely, when Kat Woods debunked many of the claims of Ben Pace (someone with much more power and influence in the EA/Rationalist community), why was she not considered a ‘whistleblower’ calling out his bullying and slander?
Yet again, the gender bias in ‘moral typecasting’ becomes important, as I mentioned in a previous comment here.
This highlights an important crux (and one on which I have mixed feelings). Not all allegations of incorrect conduct rise to the level of “whistleblowing.” A whistleblower brings alleged misconduct on a matter of public importance to light. We grant lots of protections in furtherance of that public interest, not out of regard for the whistleblower’s private interests.
Is this a garden-variety dispute between an employer and two employees about terms of employment? Or is this a story about influential people allegedly using their power to mistreat two people who were in a vulnerable position which is of public import because it should update us on how much influence to allow those people?
In Australia, people can be, and have been, prosecuted for blowing the whistle on something commercially or otherwise sensitive (that was of major public importance!) by disclosing it publicly without completely exhausting internal whistleblowing processes. So even in cases of proper whistleblowing, countervailing factors can dominate in determining the consequences for the whistleblower.
I think that’s too strong? For example, under my amateur understanding of MA law, I don’t see anything about the anti-retaliation provisions being conditional on a complaint withstanding scrutiny and fact-checking. And if this were changed to allow employers to retaliate in cases where employees’ claims were not sustained, then I think we’d see, as a chilling effect, a decrease in employees raising true claims.
I agree that requiring that the claims be sustained would have a chilling effect. However, in many contexts, we don’t extend protections to claims submitted in bad faith. For instance, we grant people who file reports of child abuse immunity from legal retaliation . . . but that is usually conditioned on the allegations having been made in good faith. If a reported individual can prove that the report was fabricated out of whole cloth, we don’t shield the reporter from a defamation suit or other legal consequences.
Note that this is generally a subjective standard—if the reporter honestly believed the report was appropriate, we shield the reporter from liability. This doubtless allows some bad actors to slip through with immunity. However, we believe that is necessary to avoid reporters deciding not to report out of fear that someone will Monday-morning quarterback them and decide that reporting was objectively unreasonable.
In your example, I suspect that knowingly filing a false report with a state agency is a crime in MA (as it is with a federal agency at the federal level), so there is at least some potential enforcement mechanism for dealing with malicious lies.
(Correctly) surmising a lack of interest in writing a hypothetical expose about A/C isn’t quite the same thing as reaching a conclusion that the post shouldn’t have been allowed to remain. However, I think there is a lot of overlap between the two; the reasons for lack of interest seem similar to the arguments for why the post shouldn’t be allowed. So I think we are both somewhere vaguely near “there would be no legitimate/plausible reason for someone to write an expose about A/C, unless one accepted that their whistleblowing activity made it legitimate.”
One interesting thing about this framing is that it raises the possibility that the whistleblowers’ identities are relevant to the decision. If a major figure in EA were going around telling malicious lies about other EAs, that would be (the subject of an appropriate post / something people would be interested in writing about) independently of any specifically whistleblowing-retaliation angle.
One could stake out an anti-standing argument in which Nonlinear et al. would not be able to identify A/C because we would worry that vindictive or retaliatory desires were affecting their judgment, but a truly disinterested person (e.g., not friends of Nonlinear employees) could identify them—because they are likely to be acting from purer, less emotionally-invested motives (e.g., protection of the community from those they perceive as brazen liars). I’m not endorsing that view, but it is interesting to ponder.
Continuing with your analogy, if you were a random person who found out about the employee filing with the AG independently of the employer, and you somehow were able to determine that the employee knowingly filed a false report out of (e.g.) racial animus, would it be OK for you to report the employee to the immigration authorities? [Asking ethically, not under MA law.]
Relatedly, your example relates to a situation with a significant power imbalance (employer/employee). It probably isn’t illegal for me to report someone to the immigration authorities where my real motive is that they cheated on me, they cheated on my friend, etc. [I didn’t specifically check that example.] So it seems that we often protect individuals exercising socially-important functions like whistleblowing from retaliation by some actors but not others.
All good points! I’m quite conflicted here.
Could you explain what “core expressive speech” and “extra solicitude” are?
My use of core expressive speech was inspired by “core political speech” in U.S. First Amendment doctrine. E.g., this article describing a hierarchy of protected speech. I meant that Forum speech may be more likely to be speech about the stuff that matters, discouragement of which (including by denying psuedonymity) poses particularly great harms. Probably “high-value speech” would have been clearer here.
Solicitude is care or concern, so here I meant that we might particularly care about protecting Forum speech as opposed to other kinds of speech for some reason.
Writing in a personal capacity.
Hi Geoffrey, I think you raise a very reasonable point.
There’s some unfortunate timing at play here: 3⁄7 of the active mod team—Lizka, Toby, and JP—have been away at a CEA retreat for the past ~week, and have thus mostly been offline. In my view, we would have ideally issued a proper update by now on the earlier notice: “For the time being, please do not post personal information that would deanonymize Alice or Chloe.”
In lieu of that, I’ll instead publish one of my comments from the moderators’ Slack thread, along with some commentary. I’m hoping that this shows some of our thinking and adds to the ongoing discussion here.[1] I’m grateful to yourself, @Ivy Mazzola, @Jason, @Jeff Kaufman and others for helping push this conversation forward.
Starting context: Majority of moderators in agreement that our current policy on doxing, “Revealing someone’s real name if they are anonymous on the Forum or elsewhere on the internet is prohibited” (link), should apply in the Alice+Chloe case. (And that it should apply whether or not Alice and/or Chloe have exaggerated their allegations.)
There was then commentary from a couple of the other moderators acknowledging that:
Under our current policies, it is indeed fair game for Kat to write a post sharing information on those former employees using their real names, as long as that post doesn’t refer directly to Ben’s one. Which does seem like a weird technicality.
The three broad paths forward seem to be: 1) “give the first accuser the right to remain anonymous;” 2) “ban anonymous allegations altogether;” 3) “stick with our current policies, notwithstanding that there is that technicality.[3]”
There’s since been the beginnings of a discussion on whether certain categories of anonymous allegations—notably, employees sharing information on employers—could be exceptions to the rule if the second path is taken.[4]
Closing context: At the time of me posting this comment, inter-moderator discussion on the issue is inconclusive.
The thinking shown will of course be skewed towards my personal view, over the views of other moderators.
A later comment of mine, which ties in here:
And notwithstanding that this technicality may well put an accused party in an uncomfortable position, where they believe they have good reason for writing a response “sharing information” post, but they also know that doing so likely means receiving some backlash (for implicitly deanonymizing their accuser).
My personal view is that this rules out the third path: I don’t think a fair set of policies would force someone into this kind of a lose-lose position.
In other words, one consideration is whether the benefit of allowing anonymous whistleblowing on the Forum—where the purpose of whistleblowing is in part to compensate for employer-employee asymmetry—outweighs the cost of not having first-accuser-second-accuser symmetry.
I do not like the incentive structure that this would create if adopted. Kat did not get to look at this particular drama and decide whether she wanted it discussed under a real or pseudonymous username. Her decision point was when she created her forum account however many years ago, at a time when she had no idea that this kind of drama would erupt. If this position becomes policy, then it incentivizes every person, at the time that they create a forum account, to choose a pseudonym rather than use their real name, to avoid having any unforeseeable future drama publicly associated with their real name. I think this would be bad. People in a community can’t build trust if they don’t know the identities of the people they are building trust with.
A rule that you couldn’t directly name people of moderate or greater prominence wouldn’t work well anyway. People here are awfully clever, and I’m sure one could easily write a whistleblowing piece on such a person that left very little doubt about their identity without actually saying their name or other unique identifiers. In fact, I’m not sure if Ben’s piece could have been effectively written without most of the Forum readership knowing who Alice and Chloe had worked for.
Will—thanks very much for sharing your views, and some of the discussion amongst the EA Forum moderators.
These are tricky issues, and I’m glad to see that they’re getting some serious attention, in terms of the relative costs, benefits, and risks of different possible policies.
I’m also concerned about ‘setting a precedent of first-mover advantage’. A blanket policy of first-mover (or first-accuser) anonymity would incentivize EAs to make lots of allegations before the people they’re accusing could make counter-allegations. That seems likely to create massive problems, conflicts, toxicity, and schisms within EA.
Thanks for sharing this!
I had a bunch of thoughts on this situation, enough that I wrote them up as a post. Unfortunately, your response came out while I was writing and I didn’t see it, but I think it doesn’t change much?
In addition to your three paths forward, I see a fourth one: you extend the policy to have the moderators (or another widely-trusted entity) make decisions on when there should be exceptions in cases like this, and write a bit about how you’ll make those decisions.
There may be a fifth, which could be seen as a bit of a cop-out.
It’s not clear to me whether the mods claim jurisdiction over deanonymizing conduct that doesn’t happen on the Forum. I think the answer is that claiming such jurisdiction would be inappropriate.
As far as I know, it wouldn’t violate the rules of X, Facebook, or most other sites to post that “[Real Names] have been spreading malicious lies about things that happened when they were Nonlinear employees.” It certainly would not violate the rules of a state or federal court to do that in a court complaint. The alleged harm of Alice and Chloe spreading malicious lies about Kat, Emerson, and Nonlinear existed off-the-Forum prior to anything being published on the Forum. I don’t see why Ben’s act of including those allegations in a Forum post creates off-Forum obligations for Nonlinear et al. (or anyone else) that did not exist prior to Ben’s post. Alice and Chloe, and people in similar situations, have to accept that many fora exist that do not have norms against this kind of conduct.[1]
If there is no jurisdiction over off-Forum naming here, it seems that the people who want Alice and Chloe named can name them in other places, and everyone who wants to know will know soon enough. If that’s the case, I’m not sure whether, at least in these circumstances,[2] Jeff’s suggestion offers enough added value to justify the rather significant costs of the mods adjudicating this particular matter in a reasonably thorough manner. If the names can be plastered all over X and Facebook, and ~everyone who cares to know will find out that way, does it make a huge amount of difference whether or not the names are also on the Forum? Under those circumstances, declining to adjudicate because the question would be of limited practical importance would be justifiable.
I express no opinion as to whether the mods could legitimately exercise broader jurisdiction over attempting to disclose the identity of a Forum poster, where the only relevant conduct was on-Forum.
That is, enough people who know of their identity appear to be motivated to share it.
This isn’t about the Forum mods as representatives of the Forum, but instead as the most obvious trusted community members (possibly in consultation with CH) to make a decision.
What centralized adjudication avoids is each person having to make their own judgment about whether deanonymization is appropriate in a given circumstance. Let’s say NL starts posting the real names on Twitter: should I think poorly of them for breaking an important norm or is this an exception? Is that an unreasonable unilateral escalation of this dispute? Should I pressure them not to do this?
That approach certainly does offer some significant advantages, but I think it’s a lot harder to pull off. Will’s three options, the narrower version of mod discretion (which is limited to whether A/C can be named on the Forum), and my fifth option (declining to allow in this case because if people decide to name, everyone will find out whether it’s on the Forum or not) are all open to the mods because they are mods.
The possibility of a centralized adjudication that is recognized as binding in all places requires outside buy-in. I think it needs either (1) the consent of every party directly in interest or (2) the consent of Nonlinear, broad community support, and the centralized adjudicator’s willingness to either release the names themselves or allow widespread burner accounts naming them.
Option (1) is basically arbitration on the consent of the parties; they would be free to choose the mods, Qualy, a poll, or a coin flip. Alice and Chloe would consent to being named if the arbitrators chose, and Nonlinear would agree to not name if the arbitrators ruled against them.[1] If the arbitrators rule for naming, no one should judge Nonlinear because it would have named Alice and Chloe with their consent. If they rule against naming and Nonlinear did it anyway, everyone should judge them for breaking their agreement. And there’s a strong argument to me that we bystanders should honor the decision of those directly involved on a resolution.
But reaching an agreement to arbitrate may be challenging. A rational party would not consent to arbitrate unless it concluded its interests were expected to be better off under arbitration than the counterfactual. Settlements can be mutually beneficial, but I am not yet convinced arbitration would be in Alice and Chloe’s interests. So long as a substantial fraction of the community would judge Nonlinear for naming, it probably will not do so. So the status quo for Alice and Chloe would be a win vs. an uncertain future in arbitration.
The other, less certain option is that Nonlinear and the significant majority of the community consented to abide by the arbitration result. Even here, there is a risk that the arbitration process may become a no-win scenario for Nonlinear. If enough community members reserve their right to independently and adversely judge Nonlinear for naming, then it is in a pickle even if it “wins” the arbitration.
A possible workaround might be either that the arbitration panel itself will release the names if it rules against Alice and Chloe, or that it will allow untraceable anonymous posters to flood the Forum with their names. In other words, if Alice and Chloe do not consent, and there is a contingent of anti-namers in the community, then any blowback for releasing needs to fall on someone other than Nonlinear.
One complexity is that, to the extent that Alice and/or Chloe allegedly slandered other people, there are other potential parties in interest.
I think you might be thinking too formally? We sometimes have things that work because we decide to respect an authority that doesn’t have any formal power. If you make a film you don’t have to submit it to the MPAA to get a rating, and if you run a theater you don’t have to follow MPAA ratings in deciding whether someone is mature enough to be let into an R-rated movie, but everyone just goes along with the system.
I’m imagining that the Forum mods would make a decision for the Forum, and then we’d just go along with it voluntarily even off the Forum, as long as they kept making reasonable decisions.
I’m not seeing any real consensus on what standard to apply for deanonymizing someone. I think a voluntary deference model is much easier when such a consensus exists. If you’re on board with the basic decision standard, it’s easier to defer even when you disagree with the application in a specific case. In sports, the referees usually get the call right, and errors are evenly distributed between your team and your opponents. But if you fundamentally disagree with the decision standard, the calls will go systematically against your viewpoint. That’s much harder to defer to, and people obviously have very strong feelings on either side.
I don’t think the MPAA is a great analog here. I’d submit that the MPAA has designed its system carefully in light of the wholly advisory nature of its rulings. Placing things on a five-point continuum helps. I think only a small fraction of users would disagree more than one rating up/down from where the MPAA lands. So rarely would an end user completely disagree with the MPAA outcome. Where an end user knows that the MPAA grades more harshly/leniently than they do, the user can mentally adjust accordingly (as they might when they learn so many Harvard College students get 4.0s?)
And it’s easy for end users to practically opt out of the MPAA system without any real social sanction; if a theater owner decides to admit ten-year-olds to R-rated movies with signed parental consent, that is really none of my business. If a parent decides to take a seven-year-old to one, that is also none of my business as long as the child is non-disruptive. The MPAA system is resilient to 20-30% of the population opting out, if they so chose.[1]
So I don’t think the features that make the MPAA system workable as a voluntary-deference system are likely to transfer over well to this context.
It’s harder for filmmakers to opt out—but they also got a lot out of the system, too. Mild self-regulation is preferable to government regulation, especially back in days when the First Amendment was not enforced with the same rigor it is today.
Jeff—thanks very much for sharing the link to that post. I encourage others to read it—it’s fairly short. It nicely sets out some of the difficulties around anonymity, doxxing, accusations, counter-accusations, etc.
I can’t offer any brilliant solutions to these issues, but I am glad to see that the risks of false or exaggerated allegations are getting some serious attention.
I wouldn’t classify Ben’s post as containing fully anonymous allegations. There was a named community member who implicitly vouched for the allegations having enough substance to lay before the Forum community. That means there was someone in a position to accept social and legal fallout if the decision to post those allegations is proven to have been foolhardy. That seems to be a substantial safeguard against the posting of spurious nonsense.
Maybe having such a person identified didn’t work out here, but I think it’s worth distinguishing between this case and a truly anonymous situation (e.g., a burner account registered with a throwaway address, operated via Tor, with a low likelihood that even the legal system could identify the actual poster for imposition of consequences).
That could be a feature rather than a bug for reasons similar to those described above. Deanonymizing someone who claims to be a whistleblower is a big deal—and arguably we should require an identified poster to accept the potential social and legal fallout if that decision wasn’t warranted, as a way of discouraging inappropriate deanonymization.
PS For the people downvoting and disagree-voting on my comment here:
I raised some awkward questions, without offering any answers, conclusions, or recommendations.
Are you disagreeing that it’s even legitimate to raise any issues about the ethics of ‘whistleblower’ anonymity in cases of potential false allegations?
I’d really like to understand what you’re disagreeing about here.
I think the questions you’re raising are important. I got kind of triggered by the issue I pointed out (and the fact that it’s something that has already been discussed in the comments of the other post), so I downvoted the comment overall. (Also, just because Chloe is currently anonymous doesn’t mean it’s risk-free to imply misleading and damaging things about her – anonymity can be fragile.)
There were many parts of your comment that I agree with. I agree that we probably shouldn’t have a norm that guarantees anonymity unconditionally. (But the anonymity protection needs to be strong enough that, if someone temporarily riles up public sentiment against the whistleblowers, people won’t jump too quickly to de-anonymizing [or to other, perhaps more targeted and discreet measures, such as the one suggested by Ivy here]; instead, that process should be diligent and fair as well, just like an initial investigation prompted by the whistleblowers should be.) (Not saying that this contradicts any of what you were suggesting!)
When things get heated and people downvote each other’s comments, it might be good to focus on things we do (probably) agree on. As I said on the LessWrong version of this post:
I don’t feel like you raised the discussion with no preference for what the community decided. When I gave my answer, which many people seem to agree with, your response was to question whether that’s REALLY what the EA community wants. I think it’s a bit disingenuous to suggest that you’re just asking a question when you clearly have a preference for how people answer!
I disagree-voted because your first paragraph praised the OP.
Short answer: I think Ben should defer to the community health team on whether to reveal identities to them (I’m guessing they know). And probably the community health team should take their names and add them to the list that orgs can consult, asking CH about potential hires and learning of red flags in their past. I think Alice should def be included on that list, and Chloe should maybe be included (that’s the part I’d let the CH team decide, based on whether it was bad enough). It’s possible Alice should be revealed publicly, or maybe just revealed to community organizers in their locale, letting them decide how they want to handle Alice’s event attendance and use of community resources.
Extra answer: FWIW, I already have bad feelings about CEA’s hardcore commitment to anonymity. I do feel EA goes too far on that side: for example, people accused of things probably won’t even be told the names of their accusers, or any potentially-identifying details that would make the accusations less vague. The only reason NL knew in this situation is that the details made it unavoidable that they’d know. But otherwise, the standard across EA is that if you are accused of something via CEA’s community health team, you will never know who made the critiques, and therefore will never be able to properly rebut them. And if that is how this had gone down, it would have been Kat and Emerson getting their names on CEA’s blacklist. I think this community’s default justice system is messed up, and there should be some exceptions in theory.
That said, I’m not sure this is one of those exceptions. I feel kind of conflicted about revealing their names even in this extreme case, because I have also had a weird employment experience in EA with an atypical structure, after which I felt emotionally wounded. If someone had come to me and said, “We suspect your ex-employer may be unethical and causing repeated serious harms! So I am asking you and others for all the details you can remember that made you feel bad in that situation,” there is some likelihood I would share my experiences, rather than my actual thought-through POV, which is: “No, I actually don’t blame or hold ill will toward my ex-employers, and I won’t participate in any investigation against them. It was a complex situation and we all made mistakes.” In most cases I would not “sing” per se, but in some universes I surely would, just because I was asked. Incidentally, this parallels how I feel about the women in the sexual misconduct TIME piece too. It’s got to be hard not to add to a (maybe false, but also maybe true) narrative when you are literally asked to, and commended for doing so. Most of us trust people who put themselves in that sort of investigator position.
I’m sure many readers who are shocked at Alice or Chloe now would, if asked, have started singing in their position, thinking they were doing something useful, since they were being so encouraged, even if at first they felt conflicted. Even if they had felt conflicted about it for a year prior. Even if they were almost over it, and their mental health and humility were improving: wouldn’t being suddenly told “no, you really might be a victim, please share, in fact I’m talking to another victim too” sort of… undo that? Especially if they are working on mental health problems, I imagine it would feel so temptingly cathartic, and be so hard to say no to, a narrative that just might make you the hero, just might help someone else, for everything else you feel you lost. Does that make it their fault that they bit the bait? Or is it more the “fault” (or simply, the “result”) of how the investigation was done?
I do agree that orgs should be able to factor this whole debacle into their decision if they end up about to hire Alice, or even Chloe. CEA’s private list should mostly solve that problem. But my opinion on revealing names publicly beyond that hinges a lot on how the investigation kicked off. How much did Ben have to search and push to get them to say all that? What framing did he use? How much were they slandering NL before Ben even reached out, and was it about egregious or very important things? Was NL’s reputation unfairly damaged by Alice or Chloe before this investigation even kicked off? I would like to hear more of his thoughts on that, with no judgement toward him for the past, because it is so critical that the community come together and discuss honestly and collaboratively what happened here.
Ivy—I really appreciate your long, thoughtful comment here. It’s exactly the sort of discussion I was hoping to spark.
I resonate with many of your conflicted feelings about these ethically complicated situations, given the many ‘stakeholders’ involved, and the many ways we could get our policies wrong.
Thanks for your kind comment :)
My understanding is that Kat and Emerson did in fact get their names on CEA’s blacklist to some extent.
Here is the bigger problem I see with your proposed solution. If an employer reviewing an application from Alice or Chloe believes their side of this, then the employer would not factor in their presence on CEA’s blacklist, since the employer, by hypothesis, thinks CEA was mistaken to put them there. If, on the other hand, an employer reviewing an application from Alice or Chloe believes Nonlinear’s side of this, then the employer may justifiably look at the fact that CEA erred by having blacklisted Kat and Emerson, choose not to consult CEA in their hiring decisions at all, and therefore not discover that their applicant was Alice or Chloe. Either way, CEA blacklisting Alice and Chloe seems ineffective.
There are some references here to the community health team’s practices that we think aren’t fully accurate. You can see more here about how we typically handle situations where we hear an accusation (or multiple accusations) and don’t have permission to discuss it with the accused.
Sorry, but I have (re)read that link, and I don’t see how anything we said is in conflict. Perhaps I didn’t word it well. Or am I misunderstanding you? If you could give some hard numbers (e.g., only X% of complaints end up being handled anonymously; of those, Z% end up being unactionable and we just give a listening ear; and only in Y% are anonymous complaints held against the person in ways that meaningfully affect their lives), then maybe I could agree that I made the extent of the dilemma sound overblown. I’m also aware that other tactics come with their own dilemmas. I just wanted to acknowledge that there is a dilemma, and that I am not a “never deanonymize” type of person, before I made some other points.
Reading your link, I felt it was not in conflict because: in the case where many people give complaints about Steve, not a single person was willing to have their concerns discussed in detail with him (out of fear that details would reveal them, I suppose), let alone be deanonymized by name. So it does sound like EAs like to make complaints in (what I’d call) “extreme anonymity” by “default”, and tbh that matches my social and cultural model of EAs. And in the next section you say that your policy is to be even more protective of confidentiality than some communities, like universities, are. And you do make some decisions based on things you might never fully discuss with the other party. You call them “compromises”, but some are major reactions which could be EA-career-ending. Actually, I find it hard to think of what worse actions remain, other than calling the police, writing a public exposé, or messaging the person’s employer out of the blue. So I don’t think it is going too far to say that maybe CEA could be too protective of anonymity, since you acknowledge your behavior can, at least sometimes, be abnormal or counter to what people would expect from other institutions.
In my view, this might be one of those cases where broader society landed on the right institutional practices, and we EAs are being naive in our tradeoff considerations by trying to use different systems, or by drawing the line on deanonymizing at a different point. I don’t think this is a bold possibility. I expect you disagree with the idea that CH could be too protective of anonymity. Maybe most EAs would. But it’s a natural possibility we can look at rather than avert our eyes from. That’s all I wanted to say.
I’d also like to clarify that I was not trying to be harsh on CH and drag you all in with what I wrote. These are hard problems. I was merely trying to write an introduction that took seriously, and related to, the feelings of people who do want Alice and Chloe doxxed, to show that I understand and sympathize with that perspective, and then to go from there to discuss why I wouldn’t be in favor of doxxing even in this case that so many are shocked by. I am mostly bullish on the CH team, which is why, in my “short answer” section, I claimed that EAs should mostly defer to the CH team on this issue.
You make a lot of fair points here, and we’ve grappled with these questions a lot.
Well, the first thing that stands out to me is that you don’t specify that the anonymity occurs only if the complainant requests it.
Hm, I guess that’s true. I thought it went without saying that this would be when people want anonymity; I didn’t imagine an alternative where CH removes names even if the complainant doesn’t request it. That would indeed be worse, and a true “default”, and I hope no one took that as what I meant.
But I think CH asks complainants by default what degree of anonymity and detail-sharing they are comfortable with. And I think a lot of people ask them not to give details, and by default CH does defer to that preference, to what might be an abnormal extent, such that anonymity may be functionally the default in our culture and their dealings. But yeah, I guess I wonder about hard numbers. It is striking to me that not one person was willing to have the details of the incident shared with Steve, though.
I assumed the mock incident was just meant to illustrate how it might arise that someone doesn’t get full information, and it’s easier to get that point across if everyone in the example requests anonymity.
On the real world point, I do agree that if what happens is something like ‘CEA: do you want anonymity? Complainant: uh sure, might as well’, then that seems suboptimal. Though I’m not sure I could come up with any system that’s better overall.
Fair, that is a mock incident, but I don’t see that aspect as being dramatized or anything. Fwiw I have known multiple people whose experiences basically matched Steve’s.
I just think that if we are going to talk about doxxing Alice and Chloe, we might want to think about what it might have looked like if they had gone elsewhere, or what it might look like in the future if they unduly report others. And as a community, I think we must reckon with why some people feel upset right now at the protection that reporters receive when the accused get so few protections, not even the protection of knowing the details of the claims against them. A cultural standard where the names of people who make provably false accusations are revealed could protect all of us. So I think it is worth reckoning with, even though I came out supporting non-doxxing in this case.
I think it’s important to separate out how CH handled the allegations vs how Ben did. IMO CH’s actions (banning presenting at EAG but not attending, recommending a contract be used) were quite measured, and of a completely different magnitude than making anonymous allegations public. And I think this whole situation would have been significantly improved if Ben had adopted CEA’s policy of not taking further action when complainants request restrictions.
I’ll respond to one aspect you raised that I think might be more significant than you realize. I’ll paint a black and white picture just for brevity.
If you run organizations for several years with dozens of employees over that time, you will make poor hiring decisions at one point or another. While making a bad hire seems bad, avoiding this risk at all costs is probably a far inferior strategy. If making a bad hire doesn’t get in the way of success and doing good, does it even make sense to fixate on it?
Also, if you’re blind to the signs before it happens, then you reap the consequences, learn an expensive lesson, and are less likely to repeat the mistake, at least for that type of deficit in judgment. Sometimes the signs are obvious after having made an error, though occasionally the signs are so well hidden that someone with better judgment than you could still have made the same mistake.
The underlying theme I’m getting at is that embracing mistakes and imperfection is instrumental. Although many EAs might wish that we could all just get hard things right the first time, every time, that’s not realistic. We’re flawed human beings, and respecting the fact of our limitations is far more practical than giving in to fear and anxiety about not having ultimate control and predictability. If anything, being willing to make mistakes is both rational and productive compared to the alternatives.
Victor—this is total victim-blaming. Good people trying to hire good workers for their organizations can be exploited and ruined by bad employees, just as much as good employees can be exploited and ruined by bad employers.
You said ‘If making a bad hire doesn’t get in the way of success and doing good, does it even make sense to fixate on it?’
Well, we’ve just seen an example of two very bad hires (‘Alice’ and ‘Chloe’) almost ruin an organization permanently. They very much got in the way of success and doing good. I would not wish their personalities on any other employers. Why would you?
We shouldn’t ‘embrace mistakes’ if we can avoid them. And keeping bad workers anonymous is a way of passing along those hiring mistakes to other future employers without any consideration for the suffering and chaos that those bad workers are likely to impose, yet again.
What I think I’m hearing from you (and please correct me if I’m not hearing you) is that you feel conflicted by the thought that the efforts of good people with good intentions can so easily be undone, and that you wish there were some concrete ways to prevent this happening to organizations, both individually and systemically. I hear you on thinking about how things could work better as a system/process/community in this context. (My response won’t go into this systems level, not because it’s not important, but because I don’t have anything useful to offer you right now.)
I acknowledge your two examples (“Alice and Chloe almost ruined an organization” and “keeping bad workers anonymous has negative consequences”). I’m not trying to dispute these or convince you that you’re wrong. What I am trying to highlight is that there is a way to think about them that doesn’t require us to never make small mistakes with big consequences. I’m talking about a mindset, which isn’t a matter of right or wrong, but simply a mental model one can choose to apply.
I’m asking you to set aside being right, and whatever perspective you think I hold, for a moment, and do a thought experiment for 60 seconds.
At t=0, it looks like ex-employee A, with some influential help, managed to inspire significant online backlash against organization X led by well-intentioned employer Z.
It could easily look like Z’s project is done, their reputation is forever tarnished, their options have been severely constrained. Z might well feel that way themselves.
Z is a person with good intentions, conviction, strong ambitions, interpersonal skills, and a good work ethic.
Suppose that organization X got dismantled at t=1 year. Imagine Z’s “default trajectory” extending into t=2 years. What is Z up to now? Do you think they still feel exactly the way they did at t=0?
At t=10, is Z successful? Did the events of t=0 really ruin their potential at the time?
At t=40, what might Z say recalling the events of t=0 and how much they impacted their overall life? Did t=0 define their whole life? Did it definitely lead to a worse career path, or did adaptation lead to something unexpectedly better? Could they say definitively that their overall life satisfaction would have been better if t=0 had never played out that way?
In the grand scheme of things, how much did t=0 feeling like “Z’s life is almost ruined” translate into reality?
If you entertained this thought experiment, thank you for being open to doing so.
To express my opinion plainly: good and bad events are inevitable, and it is inevitable that Z will make mistakes with negative consequences as part of their ambitious journey through life. Is it in Z’s best interests to avoid making obvious mistakes? Yes. Is it in their best interests to adopt a strategy so robust that they would never have fallen victim to the t=0 events, or similarly “bad” events at any other point? I don’t think so, necessarily, because: we don’t know without long-term hindsight whether “traumatic” events like t=0 lead to net positive changes or not; even if Z somehow became mistake-proof without being perfect, that doesn’t mean something as significant as t=0 couldn’t still happen to them without them making a mistake; and, lastly, being that robust is practically impossible for most people.
All this to say, without knowing whether “things like t=0” are “unequivocally bad to ever let happen”, I think it’s more empowering to be curious about what we can learn from t=0 than to arrive at the conclusion at t<1 that preventing it is both necessary and good.
Victor—thanks for elaborating on your views, and developing this sort of ‘career longtermist’ thought experiment. I did it, and did take it seriously.
However.
I’ve known many, many academics, researchers, writers, etc. who have been ‘cancelled’ by online mobs that made mountains out of molehills. In many cases, the reputations, careers, and prospects of the cancelled people were ruined. Which is, of course, the whole point of cancelling them—to silence them, to ostracize them, and to keep them from having any public influence.
In some cases, the cancelled people bounce back, or pivot, or pursue other interests. But in most cases, the cancellation is simply a tragedy, a huge setback, a ruinous misfortune, and a serious waste of their talents and potential.
Sometimes there’s a silver lining to their being cancelled, bullied, and ostracized, but mostly not. Bad things can happen to good people, and the good people do not always recover.
So, I think it’s very important for EA to consider the many serious costs and risks we would face if we don’t take seriously the challenge of minimizing false allegations against EA organizations and EA people.
Thanks for entertaining my thought experiment. I’m glad you did, because I now better understand your perspective too, and I think I’m in full agreement with your response.
A shift of topic here; feel free not to engage if this doesn’t interest you.
To share some vague thoughts about how things could be different: I think that posts which are structurally equivalent to a hit piece could be considered against the forum rules, either implicitly already or explicitly. Moderators could intervene before most of the damage is done. I think that policing this isn’t as subjective as one might fear, and that certain criteria can be checked without any assumptions about truthfulness or intentions. Maybe an LLM could work for flagging high-risk posts for moderators to review; I’ve sketched what I mean below.
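To make this concrete, here is a minimal sketch of what such flagging might look like, assuming the OpenAI Python SDK and an API key. The model name, the criteria list, and the `flag_for_review` function are all hypothetical illustrations on my part, not a proposal for the Forum’s actual tooling. The key design choice is that the model only checks structural properties and is explicitly told not to judge whether the claims are true; a human moderator makes any actual call.

```python
# Hypothetical sketch: LLM-assisted flagging of posts that structurally
# resemble hit pieces, for human moderator review only.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Illustrative structural criteria; not a settled definition of a hit piece.
CRITERIA = """\
- Names or unambiguously identifies a specific person or organization
- Consists mostly of negative claims about that subject
- Relies on anonymous or unverifiable allegations
- Calls for concrete sanctions (defunding, firing, ostracism)
- Includes no response or right of reply from the subject
"""

def flag_for_review(post_text: str) -> str:
    """Ask the model which structural criteria the post matches.

    Returns free-text reasoning for a moderator to read; the model is
    instructed not to assess truthfulness or intentions.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model
        messages=[
            {
                "role": "system",
                "content": (
                    "You check forum posts against structural criteria. "
                    "Do not judge whether any claim is true or what the "
                    "author intended."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Criteria:\n{CRITERIA}\n"
                    "Which criteria does this post match, and why? "
                    "If it matches several, say it should be flagged "
                    "for moderator review.\n\n"
                    f"{post_text[:8000]}"  # truncate very long posts
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

A false positive here only costs a moderator a few minutes of reading, which is part of why I don’t think the subjectivity worry bites very hard.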
Another angle would be to try to shape discussion norms or attitudes. There might not be a reliable way to influence this space, but one could try, for example, by providing the right material to better equip readers to have better online discussions in general and to recognize unhelpful or manipulative writing. It could become a popular staple, much as I think “Replacing Guilt” is very well regarded. Funnily enough, I have been collating a list of green/orange/red flags in online discussions for other educational reasons.
“Attitudes” might be way too subjective and varied to shape, whereas I believe “good discussion norms” can be presented in a concrete way that isn’t inflexibly limiting. NVC (Nonviolent Communication) comes to mind as a concrete framework, and I am of the opinion that the original “sharing information” post can be considered violent communication.
What does this mean?
A piece of writing with most of the stereotypical properties of a hit piece, regardless of the intention behind it.
Do you think Concerns with Intentional Insights should have been ineligible for the Forum under this standard?
I’ve just partly read and partly skimmed that post for the first time. I do suspect that post would be ineligible under a hypothetical “no hit pieces under duck typing” rule. I’ll refer to posts like this as DTHPs (duck-typed hit pieces) to express my view more generally. (I have no comment on whether it “should” have been allowed in the past, or on what the past or current Forum standards are.)
I’ve not thought much about this, but the direction of my current view is that there are more constructive modes of expression than DTHPs, and here I’ll vaguely describe three alternatives that I suspect would be more useful. By useful I mean that these alternatives potentially promote better social outcomes within the community, while hopefully not significantly undermining desirable practical outcomes such as a shift in funding or priorities.
1. If nothing else, add emotional honesty to the framing of a DTHP. A DTHP becomes more constructive, and less prone to inspire reader bias, when it is introduced with a clear and honest statement of the needs, feelings, and requests of the main author. Maybe two out of three is a good enough bar. I’m inclined to think the NL DTHP failed spectacularly at this.
2. Post a personal invitation for relevant individuals to learn more. Something like: “I believe org X is operating in an undesirable way and would urge funders who might otherwise consider donating to X to consider carefully. If you’re in this category, I’m happy to have a one-on-one call and to share my reasons why I don’t encourage donating to X.” (And during the one-on-one you can allude to the mountain of evidence you’ve gathered, and let them decide whether they want to see it or not.)
3. Find ways to skirt around what makes a DTHP a DTHP. A simple alternative, such as posting a DTHP verbatim to one’s personal blog and then sharing or linking to it only with people on a personal level, is already incrementally less socially harmful than posting it to the Forum.
Option 4 is that we find some wonderful non-DTHP framework/template for expressing these types of concerns. I don’t know what that would look like.
These are suggestions for a potential writer. I haven’t attempted to provide community-level suggestions here, though that could be its own discussion.
I’m biased since I worked on that post, but I think of it as very carefully done and strongly beneficial in its effect, and I think it would be quite bad if similar ones were not allowed on the forum. So I see your proposed DTHP rule as not really capturing what we care about: if a post shares a lot of negative information, as long as it is appropriately fair and careful I think it can be quite a positive contribution here.
I appreciate your perspective, and FWIW I have no immediate concerns about the accuracy of your investigation or the wording of your post.
Correct me if I’m wrong: you would like any proposed change in rules or norms to still support what you tried to achieve in that post, which is to provide accurate information, presented fairly, hopefully leading people to update in a way that leads to better decision making.
I support this. I agree that it’s important to have some kind of channel for addressing the kinds of concerns you raised, and I probably would have seen your post as a positive contribution (had I read it and been a part of EA back then; I’m not aware of the full context). At the same time, I’m saying that posts like yours could have even better outcomes with a little additional effort and adjustment in the writing.
I encourage you to think of my proposed alternatives not as blockers to this kind of positive contribution; that is not their intended purpose. As an example, if a DTHP rule allows DTHPs but requires a compulsory disclosure at the top addressing the relevant needs, feelings, and requests of the writer, I don’t think this particularly bars contributions from happening, and I think it would also serve to 1) save the writer time by prompting reflection on their underlying purpose for writing, and 2) dampen certain harmful biases that a reader is likely to experience from a traditional hit piece.
If such a rule existed back then, presumably you would have taken it into account during writing. If you visualize what you would have done in that situation, do you think the rule would have negatively impacted 1) what you set out to express in your post and 2) the downstream effects of your post?