Closing Notes on Nonlinear Investigation
Over the past seven months, I’ve been working part-time on an investigation of Nonlinear, culminating in last week’s post. As I wrap up this project, I want to share my personal perspective and some final thoughts.
This post mostly contains thoughts and context that didn’t fit into the previous one. I also want to set expectations clearly: I’m not working on this investigation any more.
Why I Got Into Doing an Investigation
From literally the very first day, my goal has been to openly share some credible allegations I had heard, so as to contribute to a communal epistemic accounting.
On the Tuesday of the week Kat Woods first visited (March 7th), someone in the office contacted me with concerns about their presence (the second person in good standing to do so). I replied proposing to post the following one-paragraph draft in a public Lightcone Offices slack channel.
I have heard anonymized reports from prior employees that they felt very much taken advantage of while working at Nonlinear under Kat. I can’t vouch for them personally, I don’t know the people, but I take them pretty seriously and think it’s more likely than not that something seriously bad happened. I don’t think uncheckable anonymized reports should be sufficient to boot someone from community spaces, especially when they’ve invested a bunch into this ecosystem and seems to me to plausibly be doing pretty good work, so I’m still inviting them here, but I would feel bad not warning people that working with them might go pretty badly.
(Note that I don’t think the above is a great message, nonetheless I’m sharing it here as info about my thinking at the time.)
That would not have represented any particular vendetta against Nonlinear. It would not have been an especially unusual act, or even much of a call out. Rather it was intended as the kind of normal sharing of information that I would expect from any member of an epistemic community that is trying to collectively figure out what’s true.
But the person who shared the concerns with me recommended that I not post that, because it could trigger severe repercussions for Alice and Chloe. They responded as follows.
Person A: I’m trying to formulate my thoughts on this, but something about this makes me very uncomfortable.
...
Person A: In the time that I have been involved in EA spaces I have gotten the sense that unless abuse is extremely public and well documented nothing much gets done about it. I understand the “innocent until proven guilty” mentality, and I’m not disagreeing with that, but the result of this is a strong bias toward letting the perpetrators of abuse off the hook, and continue to take advantage of what should be safe spaces. I don’t think that we should condemn people on the basis of hearsay, but I think we have a responsibility to counteract this bias in every other way possible. It is very scary to be a victim, when the perpetrator has status and influence and can so easily destroy your career and reputation (especially given that they have directly threatened one of my friends with this).
Could you please not speak to Kat directly? One of my friends is very worried about direct reprisal.
BP: I’m afraid I can’t do that, insofar as I’m considering uninviting her, I want to talk to her and give her a space to say her piece to me. Also I already brought up these concerns with her when I told her she was invited.
I am not going to name you or anyone else who raised concerns to me, and I don’t plan to give any info that isn’t essentially already in the EA Forum thread. I don’t know who the people are who are starting this info.
This first instance is an example of a generalized dynamic. At virtually every step of this process, I wanted to share, publicly, what information I had, but there kept being (in my opinion, legitimate) reasons why I couldn’t.
(I’ve added a few more example chat logs in the footnotes here[1][2][3][4][5][6][7][8][9].)
Eventually, after getting to talk with Alice and Chloe, it seemed to me that they would be satisfied to share a post containing accusations that were received as credible. They expected that the default trajectory, if someone wrote up a post, was that the community wouldn’t take any serious action, and that Nonlinear would be angry at them for “bad-mouthing” it and would quietly retaliate (by, for instance, reaching out to their employers and recommending firing them, and confidentially sharing very negative stories). They wanted to be confident that any accusations made would be strong enough that people wouldn’t just shrug and move on with their lives; if that happened, the main effect would be to hurt them further and drive them out of the ecosystem.
It seemed to me that I could not personally vouch for any of the claims (at the time), but also that if I did vouch for them, then people would take them seriously. I didn’t know either Alice or Chloe before, and I didn’t know Nonlinear, so I needed to do a relatively effortful investigation to get a better picture of what Nonlinear was like, in order to share the accusations that I had heard.
I did not work on this post because it was easy. I worked on it because I thought it would be easy. I kept wanting to just share what I’d learned. I ended up spending ~320 hours (two months of work), over the span of six calendar months, to get to a place where I was personally confident of the basic dynamics (even though I expect I have some of the details wrong) and where Alice and Chloe felt comfortable with my publishing.
On June 15th I completed the first draft of the post, which I’d roughly say had ~40% overlap in terms of content with the final post. On Wednesday August 30th, after several more edits, I received private written consent from both Alice and Chloe to publish. A week later I published.
I worked on this for far too long. Had I been correctly calibrated about how much work this was at the beginning, I likely wouldn’t have pursued it. But once I got started I couldn’t see a way to share what I knew without finishing, and I didn’t want to let down Alice and Chloe.
My goal here was not to punish Nonlinear, per se. My goal was to get to the point where the accusations I’d found credible could be discussed, openly, at all.
When I saw on Monday that Chloe had decided to write a comment on the post, I felt a sense of “Ah, the job is done.” That’s all I wanted. For both sides to be able to share their perspective openly without getting dismissed, and for others to be able to come to their own conclusions.
I have no plans to do more investigations of this sort. I am not investigating Nonlinear further. If someone else wants to pick it up, well, now you know a lot of what I know!
Please don’t think that, because I took the time to follow up on these accusations on this occasion, there is “a lifeguard on duty”, or that either bad behavior or info suppression will be reliably noticed or called out. We’ve shut down the Lightcone Offices, I have no plans to do this again, and I don’t particularly want to.
My sense is that there are a good number more injustices and predators in the EA ecosystem, most of which do not look exactly like this case. But it is not my job to uncover them and I am not making it my job. If you want to have an immune system that ferrets out bad behavior, you’ll have to take responsibility for building that.
Assorted Closing Thoughts
Some final thoughts about Nonlinear
I’ve still got a lot of genuine uncertainty about who did what and how responsible the core Nonlinear team are for all the horrible experiences Alice and Chloe had. I just wanted to get it out into a state where Nonlinear weren’t in a position to simply attack their former employees’ characters and push the post away. I hope for Nonlinear’s sake that they are able to show they’re not as culpable for the harms as it seems. I’ve had to work pretty hard to be confident that the harms won’t be inappropriately pushed under the rug.
For the record, a bunch of the stuff that Nonlinear tried seems forgivable to me if they were to apologize for it, and not obviously norm-violating ex ante. Traveling around the world in a small group sounds fun (though after seeing how it went down here I’d now be much more worried about it). I have been very financially dependent on my cofounder in the past, and worked without a legal structure. I think it’s generally quite hard to have a personal assistant who actually solves your personal problems and stays out of your way without a bunch of friction and a bit of a strange power dynamic. I think all of these things went quite badly wrong here, and they should’ve tried to make that up to the ex-employees, but I don’t think these things should never be tried again (though not all at once), and if they had made it up to them that would’ve been okay.
The primary thing that really isn’t okay according to my ethical norms is silencing and intimidating people who were harmed and who disagree with you about why. That’s why I tried so hard to communicate Alice and Chloe’s perspective here: so that won’t happen.
In general, I think it’s fine for teams to try really weird things. But I think Nonlinear in particular needs to credibly signal that, if someone works with them and feels burned afterward, or gets into some other conflict, they will be free to share openly that they feel that way and why, without fearing retaliation professionally or otherwise.
(Also everyone involved should write things down more! I think things go better when people jot down verbal agreements in writing. Makes it much easier months later to check in on what expectations were set.)
To be clear I think there’s a good chance that Kat and Emerson are very straightforwardly responsible for basically all the messed up things that happened here, and their best response is to stop trying to manage people, admit to themselves that they have major character flaws that are not easily patched, and focus on projects that don’t involve having much power over other people or paying people tiny-or-no-salaries. And most people’s best response is to keep a safe distance from them.
Kat and Emerson seem to me to be in denial. Most of their comments seem to me to have been sustaining a narrative that this is all just malicious lies from Alice and Chloe. At no point in either conversation that I had with them did I feel that they could see the harms I was worried about. I hope they can see now. Then they can actually respond to that, and grow/change.
By default, when ex-employees criticize an organization, I don’t think the ex-employees have a right to anonymity. However, in this instance my opinion is that Kat and Emerson have erred way too far on the side of signaling that they will be retributive, and if they want to be trusted around this ecosystem in future years, right now they should clearly avoid actions that seem obviously retributive. As I said, the personal costs of working at Nonlinear have haunted Alice and Chloe for 1.5 years, and I would consider it an exceedingly inappropriate escalation for Nonlinear to dox them in response to my post, even if they have valid criticisms.
Sometimes I’m concerned that I portrayed Nonlinear in an overly unpleasant light, given that I don’t know a lot of details and am painting a broad picture. Sometimes I re-read my many interview notes, remember pretty concerning things I didn’t include (for reasons like privacy on all sides, or because they came from third-hand reports), and start forming a hunch that if all were revealed their actions would turn out to be much worse than what I depict in the post. (What I’m saying is that I still have a lot of uncertainty in both directions.)
Some final thoughts about this investigation
One of the hard things for me was being respectful of Alice and Chloe while also trying to work with them on something I knew was painful for them. My relationship to them in this whole thing has felt pretty confusing to me. From one perspective I’m just a stranger showing up in their lives, repeatedly interviewing them about terrible things that happened to them, and saying I’m going to try to do something about it. I was generally pretty confused about the boundaries of what sort of input it made sense to ask of them. Is it appropriate to ask them to spend much time searching through texts and emails to answer questions about what happened? I’ll admit to also having had some concerns about them not being the best at asserting their boundaries, so I moved more slowly and carefully on that account. My guess is that had I gotten it all done much faster, the process could have been more painful, but they’d have gotten past it faster overall, and that would’ve been better for them. It would also have increased the risk of them regretting ever talking to me, which I was pretty worried about. I’m pretty sure I made some notable mistake here, but I still don’t know precisely what I wish I’d done differently.
One guess is that I should’ve said something like “I am willing to spend N hours working with you to make a serious case here, and if I believe it, then I’ll publish it, and if I don’t believe it at that point I’m going to move on” and then have them decide how much effort they wanted to put into that, and if it wasn’t worth it, move on. But man, it felt wrong to have serious and credible accusations and not be able to let other people know. I didn’t really feel I could let it go.
New people have started giving me more surprising information about Kat/Emerson that suggests other bad situations have occurred, but I’m not doing this job any more. And anyway, I think my last post gives people most of what they need to know.
Another sign to me that it was right to do this was that many of the people I interviewed said things like “I have felt ethical concerns about Nonlinear but I didn’t know what to do about them”, and reported feeling relieved that they could share their thoughts with someone (me).
Generally, everyone who I spoke with, or got references from, seemed honest and open. But, of course, it may eventually come out that there is someone who I was mistaken to put my trust in.
In my last post, I advised people not to bother Alice and Chloe about this situation. I would like to revise that: while I wouldn’t want people to bother them about their experiences with Nonlinear, I think it’d be pretty nice for people who are friendly with them to send them messages of warmth, friendship, and support. I got a fair few such messages when I wrote the post, and that was helpful for me (sorry I didn’t reply to most of them).
On the CEA Community Health Team (and the EA ecosystem in general)
[Edit: Oops, I’ve edited the first few bullets out, I’ll check some things privately and come back to edit this in the next couple days. I think it’ll probably be fine, but worth checking. Sorry for the confusion, I’ll leave a comment saying so when I’ve returned them.]
I think the CEA Community Health team is much more like an institutionalized whisper network than it is like the police, where lots of people will quietly give it sensitive information, but it mostly isn’t in a position to use it, and on the rare occasions that it does it’s not via an accountable and inspectable procedure. I think that everyone should be very clear that CEA Community Health basically doesn’t police the EA ecosystem, in the sense of reliably investigating and prosecuting credible accusations of wrongdoing or injustice. There are a swath of well-intentioned people in the EA ecosystem, but I think it’s pretty clear there is no reliable justice system for when things go wrong.
Relatedly, four interviewees who gave me some pretty helpful info would only talk to me on the condition that I not share my info with the CEA Community Health team. They didn’t trust (what I’m calling) the “institutionalized whisper network” to respect them, and some expected that it would hurt their ability to get funding to share any info.
My current impression is that many people in the EA ecosystem feel a false sense of safety from the existence of CEA Community Health, hoping that it will pursue justice for them, when (to a first approximation) it will not. While I respect many people on the team and consider some of them friends, my current sense is that the world would probably be better if the CEA Community Health team were disbanded and it were transparent that there is little-to-no institutional protection from bullies in the EA ecosystem, so that fewer people get burned by assuming or hoping that it will play that role.
Going forward, for me, personally
I’m basically finished winding down my investigator sub-process, and plan to get back to other work starting Monday.
As I mentioned above, I have had a few calls with other people about some strongly negative experiences with some of the relevant Nonlinear team. I don’t plan to investigate those stories or any of the other people in them, though they did give me some more Bayesian evidence that some of the dynamics I’d written about were accurate.
Perhaps Kat and Emerson will be able to provide helpful evidence that changes how their time with Alice and Chloe reflects on them. I hope so. But either way it’s a part of their reputation now, and that seems right to me.
If Nonlinear writes up their account of things, or a critique of my post, I’ll probably read it, but I’m not committing to any substantial engagement.
I don’t really want to do more of this kind of work. Our civilization is hurtling toward extinction by building increasingly capable, general, and unalignable ML systems, and I hope to do something about that. Still, I’m open to trades, and my guess is that if you wanted to pay Lightcone around $800k/year, it would be worth it to continue having someone (e.g. me) do this kind of work full-time. I guess if anyone thinks that that’s a good trade, they should email me.
Right now, I’m getting back to working on LessWrong.com, after a long detour into office spaces in Berkeley, hotel renovations, and a little investigative work.
[1]
Meta: The footnote editor kept crashing due to length, so I’ve included 5 chat logs spread over 9 footnotes.
March 7th
Person A: I just wanted to flag a concern I have about some of guests currently at Lightcone. Yesterday and today I saw both Drew Spartz and Kat Woods using the Lightcone spaces, and this worries me a lot. Their company Nonlinear has a history of illegal and unethical behavior, where they will attract young and naive people to come work for them, and subject them to inhumane working conditions when they arrive, fail to pay them what was promised, and ask them to do illegal things as a part of their internship. I personally know two people who went through this, and they are scared to speak out due to the threat of reprisal, specifically by Kat Woods and Emerson Spartz. Someone took initiative and posted this comment to the EA Forum: https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/?commentId=5P75dFuKLo894MQFf
From my friends who worked there, I know that the abuse went far beyond what is detailed in this comment. I’m worried about them being here. I’m worried that more people will have the experiences that my friends had. I’m worried about not taking seriously the damage that bad actors can do (especially given everything that has happened in the last 6 months in EA). I know this is not a lot to go on, but I would not have been happy with myself if I didn’t say something.
Thanks, [name]
BP: Pretty reasonable! I was planning to post publicly about this in one of the slack channels that I’d heard this, to let other people know too.
My current plan is to say something like
> “I have heard anonymized reports from prior employees that they felt very much taken advantage of while working at Nonlinear under Kat. I can’t vouch for them personally, I don’t know the people, but I take them pretty seriously and think it’s more likely than not that something seriously bad happened. I don’t think uncheckable anonymized reports should be sufficient to boot someone from community spaces, especially when they’ve invested a bunch into this ecosystem and seems to me to plausibly be doing pretty good work, so I’m still inviting them here, but I would feel bad not warning people that working with them might go pretty badly.”
Person A: I’m trying to formulate my thoughts on this, but something about this makes me very uncomfortable.
[2]
BP: Yeah, interested in hearing more.
Can also hop on an audio call if that’s easier to talk on!
Am interested what to you seems bad about it, e.g.:
1) Giving up too much info about the people reporting on Kat
2) I’m making the wrong call given the info I have
3) I’m being overly aggressive to Kat by talking about this openly
(I think prolly I will/would actually chat with Kat first, to get her take, before posting.)
Person A: In the time that I have been involved in EA spaces I have gotten the sense that unless abuse is extremely public and well documented nothing much gets done about it. I understand the “innocent until proven guilty” mentality, and I’m not disagreeing with that, but the result of this is a strong bias toward letting the perpetrators of abuse off the hook, and continue to take advantage of what should be safe spaces. I don’t think that we should condemn people on the basis of hearsay, but I think we have a responsibility to counteract this bias in every other way possible. It is very scary to be a victim, when the perpetrator has status and influence and can so easily destroy your career and reputation (especially given that they have directly threatened one of my friends with this).
Could you please not speak to Kat directly? One of my friends is very worried about direct reprisal.
[3]
BP: I’m afraid I can’t do that, insofar as I’m considering uninviting her, I want to talk to her and give her a space to say her piece to me. Also I already brought up these concerns with her when I told her she was invited.
I am not going to name you or anyone else who raised concerns to me, and I don’t plan to give any info that isn’t essentially already in the EA Forum thread. I don’t know who the people are who are starting this info.
[4]
March 10th
BP: Babble of next steps:
Post in the announcements channel that I’m disinviting Non-Linear from Lightcone and other spaces that we’ll be hosting, and that I’m happy to chat about why, and give some basic reasoning in the slack.
[redacted]
Mention that there’s confidential info here but that I’m happy to be pinged about this to give more specific takes if someone needs to make a decision.
Maybe share some probabilities of mine on certain statements, to give a shape of my views.
Chat with Emerson to hear his side of the story.
Honestly confused about what questions to ask given confidentiality, that could give them a fair shake.
Maybe later see if any of the employees are open to me saying certain things with slightly more info, such as there being multiple employees who are no longer willing to speak with Nonlinear and who consider their time there to be quite traumatic, and also to explain the compensation setup and general working dynamics.
[5]
Person B: Please don’t do anything without consulting me / the people who’s experiences reported
[One of them] tells me that writing and sharing that causes her to relive it all, feel paralyzed, and unable to sleep. [The other of them] reported worse
BP: Not planning to do anything right now.
Person B: I think having read their docs, it’d be good for you to chat before making any public statement and before offering to share info downstream of the docs with other people
BP: Definitely down to chat with either of them (or indeed any former employees).
Person B: [Chloe] and [Alice] are at the stage of having Lightcone/CEA health do investigation but not necessarily want all the details spread widely publicly (might eventually be okay with that, but I think they need to prepare themselves)
Person C: a not-great-but-okay option is to just have a call with Emerson similar to with Kat, i.e. “I’ve heard some concerning things [cite public comments], do you want to talk about Nonlinear’s employment practices from your perspective.”
[6]
BP: Yeah, I guess that’s the default.
Person B: I think this situation is a case where we ought to figure out how to work with victims/survivors who are kind of traumatized
And figure out how to get justice in a way that doesn’t punish (cause harm to them) them for speaking up in a way that just makes other people wary of speaking up
I think part of that is being careful with how you use information they provide, not sharing it in ways the victims might feel really uncomfortable with. Yes, hella annoying. But they’ve already been so reluctant and scared.
BP: I think the problem is that the thing has gotten sufficiently bad that the former employees are both (a) very hurt and (b) want to not have the bad things that happened to them widely known or discussed.
Person B: I think they’re open it to eventually. They considered just making a public post. It’s more that I think we ought to check with them on how the info gets used
I think what they really don’t want is to be taken by surprise.
BP: When you f*ck up hard enough that the other party won’t openly talk about what happened, it gets much harder to sort things out.
[7]
April 3rd
BP: Current plan I’m thinking about:
—Talk with both about a whistleblower payout of [redacted range]
—Then do some standard investigating, talk to both sides, check the facts, talk to more interns / former employees, etc
—Then publish my takeaways along with statements from all involved
[8]
April 12th
On this day there was a thread on LW about Nonlinear.
BP: I was thinking of writing this:
> It is not clear to me that Nonlinear’s work has been executed especially poorly; the audio library seems worthwhile, and I would be quite interested to know how many and which projects were funded through the Emergency Fund project.
> That said, I’ve chatted with a number of former staff/interns about their experiences for about 10 hours, and I would strongly advise future employees to agree on salary and financial agreements ahead of time, in written contract, and not do any work for free. It also seems to me that Nonlinear hasn’t been very competent at managing the legal details of running a non-profit (and indeed lost their non-profit status at some point in the last few years due to not filing basic paperwork), and I would be concerned about them managing the finances of other prizes if the money was actually handed to Nonlinear at any point.
Person B: I think that comment is tantamount to publishing conclusions of your investigation before you actually publish your conclusions. Also I predict that it will attract a lot more attention than you are maybe thinking. I’d hold off, but perhaps try to be quick about, getting your actually verdict.
[9]
April 16th
BP: My guess is that this isn’t as hard as I’m making it out to be. I think the single goal is to make it so that
1) [Chloe] and [Alice] are open that they had a strongly negative experience with Nonlinear and are critical of it, with a bunch of details public
2) Nonlinear is not in a position to retaliate in an underhanded way
I think that’s my main proposal, is that I get a basic public statement from them that addresses the overall details, and that I can check-in with Nonlinear about. Just get it to be out in the open.
The closing remarks about CH seem off to me.
Justice is incredibly hard; doing justice while also being part of a community, while trying to filter false accusations and thereby not let the community turn on itself, is one of the hardest tasks I can think of.
So I don’t expect disbanding CH to improve justice, particularly since you yourself have shown the job to be exhausting and ambiguous at best.
You have, though, rightly received gratitude and praise—which they don’t often, maybe just because we don’t often praise people for doing their jobs. I hope the net effect of your work is to inspire people to speak up.
The data on their performance is profoundly censored. You simply will not hear about all the times CH satisfied a complainant, judged risk correctly, detected a confabulator, or pre-empted a scandal through warnings or bans. What denominator are you using? What standard should we hold them to? You seem to have chosen “being above suspicion” and “catching all bullies”.
It makes sense for people who have been hurt to be distrustful of nearby authorities, and obviously a CH team which isn’t trusted can’t do its job. But just to generate some further common knowledge and meliorate a distrust cascade: I trust CH quite a lot. Every time I’ve reported something to them they’ve surprised me with the amount of skill they put in, hours per case. (EDIT: Clarified that I’ve seen them work actual cases.)
Yeah, I think it is actually incredibly easy to undervalue CH, particularly for people who don’t regularly interact with or make use of the team, and instead just have a single anecdote to go off of. So much of what I do in the community (everything from therapy to mediation to teaching at the camps) is made easier by Community Health, and no one knows about any of it because why would they? I guess I should make a post to highlight this.
Some brief reactions:
I mostly don’t like the ‘justice’ process involved in other cases insofar as it is primarily secret and hidden. I don’t think it’s much of a justice system where you often don’t know the accusations against you or why you’re being punished.
The data on negative performance is also profoundly censored! I am not sure why you think this should make me update positively on the process involved.
I am pro having some surveys of people’s general attitudes toward CEA Community Health. Questions like “Have you ever reported an issue to them” and “To your knowledge have you been investigated by the CEA Community Health team” and “How much do you trust CEA Community Health team to protect the EA ecosystem from bad behavior” and “How much do you trust CEA Community Health team to respect you if you go to them” and “For how many years have you been involved in the EA ecosystem”. I think that would clear up this question substantially.
Fwiw, seems like the positive performance is more censored in expectation than the negative performance: while a case that CH handled poorly could either be widely discussed or never heard about again, I’m struggling to think of how we’d all hear about a case that they handled well, since part of handling it well likely involves the thing not escalating into a big deal and respecting people’s requests for anonymity and privacy.
It does seem like a big drawback that the accused don’t know the details of the accusations, but it also seems like there are obvious tradeoffs here, and it would make sense for this to be very different from the criminal justice system given the difference in punishments (loss of professional and financial opportunities and social status vs. actual prison time).
Agreed that a survey seems really good.
With regards to 2: There is some information CH has made public about how many cases they handle and what actions they take. In a 12 month period around 2021, they handled 19 cases of interpersonal harm. Anonymized summaries of the cases and actions taken are available in the appendix of that post; they ranged from serious to out of scope.
Oh great, thanks. I would guess that these discrete cases form a minority of their work, but hopefully someone with actual knowledge can confirm.
I did some more research and 20 complaints a year of varying severity is typical, according to what Julia Wise told TIME magazine for their article:
I’m Chana, a manager on the Community Health team. This comment is meant to address some of the things Ben says in the post above as well as things other commenters have mentioned, though very likely I won’t have answered all the questions or concerns.
High level
I agree with some of those commenters that our role is not always clear, and I’m sorry for the difficulties that this causes. Some of this ambiguity is intrinsic to our work, but some is not, and I would like people to have a better sense of what to expect from us, especially as our strategy develops. I’d like to give some thoughts here that hopefully give some clarity, and we might communicate more about how we see our role in the future.
For a high level description of our work: We aim to address problems that could prevent the effective altruism community from fulfilling its potential for impact. That looks like: taking seriously problems with the culture, and problems from individuals or organizations; hearing and addressing concerns about interpersonal or organizational issues (primarily done by our community liaisons); thinking about community-wide problems and gaps and occasionally trying to fill those; and advising various actors in the EA space based on the information and expertise we have. This work allows us to address specific problems, be aware of concerning actors, and give advice to help the community do its best work.
Context on our responses
Sometimes we have significant constraints on what we can do and say that result in us being unable to share our complete perspective (or any perspective at all). Sometimes that is because people have requested that we keep some or all information about them confidential, including what actions our team has taken. Sometimes it is because us weighing in will increase public discussion that could be harmful to some or all of the people involved. This information asymmetry can be particularly tricky when someone else in the community shares some information about a situation that we think is inaccurate or is only a small part of the picture, but we’re not in a position to correct it. I’m sorry for how frustrating this can be.
I imagine this might end up being relevant to responses to this comment (and which and how and when we respond to them), so I think it’s useful to highlight.
I’ll also flag that many of our staff are at events for the next two weeks, so it might be an especially slow time for Community Health responses.
About what to expect
I think some of the disagreements here come from different understanding of what the Community Health team’s mission is or should be. We want to hear and (where possible) address problems in the community, at the interpersonal, organizational, and community levels. But we often won’t resolve a situation to the satisfaction of everyone involved, or do everything that would be helpful for individuals who were harmed. Ben mentions people “hoping that it [Community Health] will pursue justice for them.” I want to be totally upfront that we don’t see pursuing justice as our mission (and I don’t think we’ve claimed to). In the same vein, protecting people from bullies is sometimes a part of our work, and something we’d always like to be able to do, but it’s not our primary goal and sadly, we won’t always be able to do it.
We don’t want people to have a false impression of what they can expect from talking to us.
Sometimes people come to us with a picture of what they’d like to happen, but we won’t always take the steps they hope we’ll take, either because 1) we don’t agree that those steps are the right call, 2) we’re not willing to take the steps based on the information we have (for example if we don’t have their permission to ask for the other person’s side of the story), or 3) the costs (time, legal risk, etc.) are too great. We generally explain our considerations to the people involved, but could probably communicate better about this publicly, and as we continue thinking about strategic changes, we’ll want to give people an accurate picture of what to expect.
(At other times people come to us without specific steps they’d like us to take. Sometimes they think something should be done, but don’t know what is feasible, other times they share information as “I don’t think this is very bad and don’t want much to be done, but I thought you should know and be able to look for patterns”, which can be quite helpful.)
We talk about confidentiality and what actions we might be able to take by default in calls. Typically this results in people deciding to go forward with working with us, but some people might decide that what we’re likely to be able to provide isn’t a good match for their situation.
I don’t think the downside of a false sense of security people might get from our team’s existence is strong enough to counteract the benefits.
It’s true that we rarely write up our findings publicly. I don’t take that as damning since I don’t think that is or should be the default expectation. I think public writeups can be a valuable tool in some cases, but often there are good reasons to use other tools instead.
One main reason is the large amount of time they take — Ben pointed out that he didn’t necessarily endorse how much time this project took him, but that it was really hard to do less.
I agree with Ben that we aren’t the EA police. We have some levers we can pull related to advising on a number of decisions, and we do our best to use these to address problems and concerns. I think describing the occasions on which we use the information we have as “rare” is very much not reflective of the reality of our day-to-day work.
I’m sad to read in some comments that we didn’t satisfy people’s needs or wants in those situations. I’m very open to receiving feedback, concerns or complaints in my capacity as a manager on the team—feel free to message me on the forum or email me (including anonymously). I recognize someone not wanting to talk to the Community Health team might not want to share feedback with that same team, but I want the offer available for anyone who might. You can also send feedback to CEA interim CEO Ben West here.
I also think not feeling satisfied with our actions is plausibly a normal outcome even if everything is going well—sometimes the best available choice won’t make everyone (or anyone) happy. I definitely want people to come in expecting that they might not end up happy with our choices (though I think in many cases they are).
Again, if people think we’re making wrong calls, I’m interested to hear about it. Under some circumstances we can also re-review cases.
Regarding trust
We’re aware that some people might feel hesitant to talk to us (and of course, it’s entirely up to them). There are many understandable reasons for this (even if our team was flawless). Our team isn’t flawless, though, which means there are likely additional cases where people don’t want to talk to us, which I’m sad about. I don’t know how much of a problem this is.
In particular, we are worried to hear that some people didn’t feel that they’d be treated with respect (I can’t tell if they mean by our team or the general institutional network we’re a part of, or something else). In this case, it sounds like potentially they aren’t confident we’d handle their information well or treat them respectfully. If that is what they meant, that sounds like a bad (and potentially stressful) situation and I’m really sorry to hear about it. I could imagine there being a concerning pattern around this that we should prioritize learning about and working on. If at any point people wanted to share information on the reasons they wouldn’t talk to us, I’m interested (including anonymously—here for the community liaisons, here for me personally and here for Ben West, interim CEO of CEA).
People might also worry that we’d negatively update our perception of them if they were implicated in something. (This is one of the reasons people might not want to speak to us that might be implied by this post, though I am not at all sure this is what was meant). I don’t currently think we should have a strict policy of amnesty for any concerning information people provide about themselves, though we in fact try hard to not make people regret talking to us. (Strict amnesty of that kind would probably result in less of us doing things about issues we hear about and make Ben’s concerns worse rather than better, though I haven’t gone and researched this question.)
In general, we care a lot about not making people regret speaking to us and not pressuring people to do or share more than they’re comfortable with. These are big elements of why we sometimes do less than we’d like, since we don’t want to take actions they’re not comfortable with, or push them to stay involved in a situation they’d like to be done with, or to do anything that would cause them to be worried we might inadvertently deanonymize them.
My general sense (though of course there are selection effects here) is that people who talk to our team in person or on calls about our decision making often end up happier and finding us largely reasonable. I haven’t figured out how to do that at scale e.g. in public writing.
Thanks all for your thoughts and feedback.
A design decision to not have “justice” or “countering bullies” seems sort of big and touches on deep subjects.
I guess this viewpoint above could be valid and deep (but I’m slightly skeptical the comm. health team has that depth).
It seems possible that, basically, just pursuing justice or countering bullies in a straightforward way might be robustly good and support other objectives. Honestly, it doesn’t seem that complicated, and it’s slightly a yellow flag if it is hard in EA.
I think writing on this (like Julia W’s writing on her considerations; not going to search it up, but it was good) would be valuable. Such a piece would (ideally) show wisdom and considerations that are illuminating.
I’ll try to produce something, maybe not under this name or an obvious form.
This seems extremely uncharitable. It’s impossible for every good thing to be the top priority, and I really dislike the rhetorical move of criticising someone who says their top priority is X for not caring at all about Y.
In the post you’re replying to Chana makes the (in my view) virtuous move of actually being transparent about what CH’s top priorities are, a move which I think is unfortunately rare because of dynamics like this. You’ve chosen to interpret this as ‘a decision not to have’ [other nice things that you want], apparently realised that it’s possible the thinking here isn’t actually extremely shallow, but then dismissed the possibility of anyone on the team being capable of non-shallow thinking anyway for currently unspecified reasons.
editing this in rather than continuing a thread as I don’t feel able to do protracted discussion at the moment:
Chana is a friend. We haven’t talked about this post, but that’s going to be affecting my thinking.
She’s also, in my view (which you can discount if you like), unusually capable of deep thinking about difficult tradeoffs, which made the comment expressing skepticism about CH’s depth particularly grating.
More generally, I’ve seen several people I consider friends recently put substantial effort into publicly communicating their reasoning about difficult decisions, and be rewarded for this effort with unhelpful criticism.
All that is to say that I’m probably not best placed to impartially evaluate comments like this, but at the end of the day I re-read it and it still feels like what happened is someone responded to Chana saying “our top priority is X” with “it seems possible that Y might be good”, and I called that uncharitable because I’m really, really sure that that possibility has not escaped her notice.
Your reply contains a very strong and, in my view, highly incorrect read, and says I am far too judgemental and critical.
Please review my comment again.
I’m simply pointing to a practice or principle common in many orgs, companies, startups and teams: to have principles and act from them, in addition to “maximizing EV” or “maximizing profits”. This may be wrong or right.
I’m genuinely not judging but keeping it open, like, I literally said this. I specifically suggest writing.
While this wasn’t the focus, I haven’t thought about it, but I probably do think Chana’s writing is virtuous. I actually have very specific reasons to think why the work is shallow, but this is a distinct thing from the principle or choice I’ve talked about. Community health is hard and the team is sort of given an awkward ball to catch.
An actual uncharitable opinion: I understand this is the EA forum, so as one of the challenges of true communication, critiques and devastating things written by “critics” are often masked or couched as insinuations, but I don’t feel like this happened and I kind of resent having to put my comments through these lenses.
BTW, I kind of see Alex L as one of the “best EAs”, and I sort of attribute this issue to the forum; it now sort of reinforces my distrust of EA discourse (like, I think there’s an ongoing 50-comment thread or something because a grantmaker asked someone if English was their second language, come on).
The community health team’s work on interpersonal harm in the community
Julia here from the community health team. As you might guess, we have a pretty different view on some of Ben’s takes about our team. There are a variety of things that make this difficult to discuss publicly; we’ll see if we can say more at some point. For now, we wanted to say that we’re following the conversation and thinking a lot about these questions.
I really appreciate what you said about the community health team. I reported what I viewed as serious misconduct that they had jurisdiction over, and very little was done about it.
I trusted them based on what I understood to be their reputation. So it really broke me when, despite them saying they took my allegation seriously, they did not treat it as if they took it seriously. Their non-response was a massive contributor to what were some of the worst months of my life. I am still recovering.
That’s not to say anyone should completely write them off based on my account. I’m posting this anonymously with no details, and so there is absolutely no reason to trust my account of things. There is no reason to believe that I am perceiving the situation accurately/reasonably.
One piece of detail I’ll provide is that the nature of the misconduct involved repeated threats.
But I do wish people trusted them less. I think it could have saved me a lot of pain if I knew going in that they had a mixed reputation. If I had gone in with some uncertainty about how much I should have relied on them. I could have set better expectations.
Hi KnitKnack—I’m really sorry to hear you had a bad experience with the CH team, and that it contributed to some especially bad moments in your life. I totally endorse that people should have accurate expectations, which means that they should not expect we’ll always be able to resolve each issue to everyone’s satisfaction. I think that even in worlds where we did everything “right” (in terms of fair treatment of each of the people involved, and the overall safety and functioning of the community), some people would be disappointed in how much we acted or what we did, and all the more so in worlds where we made mistakes. If you’d like to talk about the situation you were in, feel free to contact me as the manager of the team members handling situations like this; I’d be interested to hear your feedback (happy to do this anonymously, such as through the forum). Entirely understandable if you’d rather not, though, and I wish you all the best.
Pre-committing to not elaborating further, but I wanted to echo what is said in this comment and give a non-anonymous account as someone who (due to personal experience with reporting misconduct) also has similar feelings as KnitKnack.
Edit: I think Chana’s comment is helpful context, i.e. it seems good if people’s expectations going in are calibrated to CEA CH’s stated position: that it is there to “address problems that could prevent the effective altruism community from fulfilling its potential for impact”.
In particular, they “don’t see pursuing justice as [their] mission” and “protecting people from bullies is sometimes a part of [their] work, and something [they’d] always like to be able to do, but it’s not [their] primary goal”.
On a personal note, my advice to people who are considering going to CEA CH is to keep this in mind. To the extent that there is a trade-off between impact and justice, it may not resolve in a way that is “just” from your POV, and their work on interpersonal harm does take the talent bottleneck seriously, e.g. you should probably think about what they perceive the potential impact of the perpetrator to be.
Thank you for all your efforts in this endeavor Ben, you’ve performed a very valuable service to the community.
Your comments about CEA’s Community Health team in this post seem particularly important to me. If CEA’s CH team had in depth knowledge of how Alice and Chloe described their experiences, had found no reason to doubt those accounts, and still declined to make any kind of public statement, that’s incredibly damning. I’m open to hearing CH’s take on things, but if that’s actually the case I agree with your view that “the world would probably be better if the CEA Community Health team was disbanded and it was transparent that there is little-to-no institutional protection from bullies in the EA ecosystem.” That’s definitely a new position for me; while I’ve criticized CH work before my prior assumption was that the team could be fixed.
While I think I disagree pretty strongly with the idea CEA CH should be disbanded, I would like to see an updated post from the team on what the community should and should not expect from them, with the caveat that they may be somewhat limited in what they can say legally about their scope.
Correct me if I’m wrong, but I believe CEA was operating without in-house legal counsel until about a year ago. This was while engaging in many situations that could easily have led to a defamation suit had they investigated someone sufficiently resourced and litigious. I think it makes sense that their risk tolerance will have shifted while EVF is under Charity Commission investigation post-FTX, and with the hiring of attorneys who are making risk assessments and recommendations across programs.
The issue for me is less “are they doing everything I’d like them to do” and more “does the community have appropriate expectations for them,” which is in keeping with the general idea EA projects should make their scopes transparent.
Whether or not CEA/EV had in-house counsel, I’d like to think they had an ability to access legal advice. If not, that seems like a poorly thought out setup.
I agree it makes sense for EV to have a lower risk tolerance in light of the Charity Commission investigation. However, I’m making the following assumptions (it would be great if a lawyer could opine on whether they are accurate):
There are simple steps CH could have taken that would carry very little risk of a defamation suit. For instance, I find it hard to believe CH would be liable if they’d issued a public statement along the lines of “Alice and Chloe report XYZ about Nonlinear; Nonlinear disputes these claims. CH is not publicly picking a side but wants to make people aware of the dispute.” Maybe Alice and Chloe would have objected to that kind of statement, but it seems like it wouldn’t have material defamation risk (though again, I’m not a lawyer).
Inaction by CH could also carry legal risk. For example, if CH hears credible complaints against an org that is still allowed to come to EAG (an event run by CH’s CEA colleagues), and then someone joins that org at EAG and subsequently suffers the same treatment that CH was aware of, I imagine CEA/EV could in some cases have liability if that person wanted to sue.
First, CEA definitely have access to legal counsel.
Second, I don’t think these issues are that relevant, after reading Ben’s posts.
Regardless of legal risk, the reasons for not making claims public are clear:
(A) It took Ben hundreds of hours to feel confident and clear enough to make a useful public statement while also balancing the potential harms to Alice and Chloe. This is not uncommon in such situations and I think people should not expect CH to be able to do this in most cases.
(B) CEA is not in charge of Nonlinear or most other EA orgs. Just like Ben tried to be responsible for behavior in his offices and ended up down a rabbit hole, CEA tries to be responsible for behavior at their events, and has to choose which rabbit holes to go down. As Ben has said, they are not the EA police.
I agree it would be good to be clear about what jobs they are not doing, but think it would absolutely be worse to have no people in EA paid to work on similar issues rather than 2-3 people who still cannot do all of the work people might ask of them.
In that example, what would CH be adding relative to Alice and Chloe making the public statement themselves? For example, if the idea is that people will give the report greater weight because it comes via CH and people know that CH wouldn’t host a report like this if they didn’t give it some credence, then that sounds (not a lawyer) potentially libelous, especially with how strict the UK is in this area.
(Disclosure: married to a CH team member)
If initial due diligence conducted by an independent third party didn’t uncover obvious evidence about which side is correct, IMO that’s very helpful info for the broader community and it really seems like there should be a way of expressing that in a way that doesn’t introduce legal liability.
Agree this would be helpful. In addition to clarifying what community expectations should be, I’d like to know whether the Nonlinear affair will be included in either or both of the internal and external reviews that are (were?) being conducted. And if so, would that inclusion have taken place if Ben hadn’t published his post?
Appreciate the comment. I sadly decided to edit out a few bullets on that to check in on what’s okay to share. That’s my fault, I will make sure to leave a new comment when I am able to add them back in, probably in a day or two (but might be longer).
Let me know if you’d like me to remove my comment while this gets sorted out.
Thanks for saying that, but no request from me. (And my guess is it’ll be fine and I’ll add my bullets back in a day or so.)
Thanks, this reflection was really useful and demonstrated the extent to which this kind of investigative work is complicated and messy, practically and emotionally.
It might just be a personal thing, but this line near the end jarred with me a bit.
“i’m open to trades, and my guess is that if you wanted to pay Lightcone around $800k/year, it would be worth it to continue having someone (e.g. me) do this kind of work full-time.”
I’m not completely sure why this jarred, but maybe it’s that bringing in even tangentially an odd fundraising option for your org on the back of a messy and hard situation didn’t seem right. I’m also not sure trading a lot of money to do a job which you “don’t really want to do” is the best idea either.
Noted. FYI in my culture it’s considered pro-social to let people know what trades you’d be up for and what price.
Also, and there’s a good chance that this isn’t the main thing you’re responding to, but FWIW we’re not doing active fundraising any more (as we were successful at getting our basic needs met for continuing), so this isn’t like me trying to get my salary fundraised or anything like that.
Thanks, appreciate the response and that makes sense.
$800k per year? For one person to do investigative journalism? What would all that money be spent on?
Compensating the person sufficiently that they’re willing to do the work (because, e.g., they don’t enjoy it, or it displaces other work they see as much more valuable).
I read this as coming from a culture of listing “happy/cheerful prices”.
In case you missed it, and you’re interested: I’ve put some updates relating to the Community Health and Special Projects Team’s thinking and actions about concerns about Nonlinear on Ben’s initial post.
General statement (10th Sept)
An incomplete list of actions we’ve taken to reduce risk of other people ending up in similarly bad situations (11th Sept)
Rough timeline and thinking (New – 20th Sept)
The comments/arguments about the community health team mostly make me think something more like “it should change its name” than be disbanded. I think it’s good to have a default whisper network to report things to and surreptitiously check in with, even if they don’t really enforce/police things. If the problem is that people have a false sense of security, I think there are better ways to avoid that problem.
Just maintaining the network is probably a fair chunk of work.
That said – I think one problem is that the comm-health team has multiple roles. I’m honestly not sure I understand all the roles they consider themselves to have taken on. But it seems likely to me that at least some of those roles are “try to help individuals” and at least some of those roles are more like “protect the ecosystem as a whole” and “protect the interests of CEA in particular”, and those might come into conflict with the “help individuals” one. And it’s hard to tell from the outside how those tradeoffs get made.
I know a person who maintained a whisper network in a local community, who I’d overall trust more than CEA in that role, because basically their only motivation was “I want to help my friends and have my community locally be safe.” And in some sense this is more trustworthy than “also, I want to help the world as a whole flourish”, because there’s fewer ways for them to end up conflicted or weighing multiple tradeoffs.
But, I don’t think the solution can necessarily be “well, give the Whisper Network Maintenance role to less ambitious people, so that their motives are pure”, because, well, less ambitious people don’t have as high a profile and a newcomer won’t know where to find them.
In my mind this adds up to “it makes sense for CEA to keep a public node of a whisper network running, but it should be clearer about its limitations, and they should be upfront that there are some limits as to what people can/should trust/expect from it.” (And, ideally, there should maybe be a couple of different overlapping networks, so in situations where people don’t trust CEA, they have alternatives. I.e. Healthy Competition is good, etc.)
I appreciate you doing this Ben. I had some very vague concerns, as did several acquaintances I spoke to over the last couple years. My default move would be to share concerns with a charity’s trustees, so I tried to look up Nonlinear’s in case I or someone else would need to contact them, but I found out they weren’t registered as a charity and I didn’t have any other way of finding out if there was anyone overseeing Kat and Emerson in any meaningful sense.
I do think that the majority of EA organisations have either a non-profit or for-profit board, or are being incubated inside a larger org with one of those things, so that should give most organisations somewhat clearer accountability. I think Nonlinear was uniquely difficult.
Does the Community Health team have any concerns about this? Do they have a plan to regain trust?
Because otherwise I don’t see what the point of a Community Health team that inspires so much mistrust is. Even if the team was 100% competent, they could not do their jobs effectively without people trusting them.
Ex remote Nonlinear intern here: I wasn’t interviewed by Ben, but if I had been, then there’s information I would have shared with Ben, but not Community Health.
(Though I have less faith in Ben than before after seeing him publish without waiting a week)
(I don’t have any direct knowledge of the claims in the post as I was remote and had already finished my internship)
It seems to me like by publishing it when he did, he acted according to Alice and Chloe’s interests, who were protected by an earlier publication, at a cost to other parties.
If I were in the position of someone like Alice or Chloe and think about whether or not to talk to Ben, that would make me more likely to talk to Ben not less.
I guess there’s a difference between being the person who was hurt vs. someone on the sidelines who has general information about what someone is like as a boss.
If you’ve been hurt, then you would probably want someone to fight for your side. If you’re on the sidelines, you might want someone who’s trying their best to form a fair picture overall. You might not want to share anything that could be used to paint an unfairly negative picture.
So would you say that although you have less faith in Ben than before, Alice and Chloe should have more faith in him? That seems wrong to me; I feel like “faith” in context should cash out as something less interpersonal than that? Like it should be a prediction about how Ben will act in future situations. Then “Alice should have more faith in Ben than me” sounds like a prediction that in future Ben will favor team Alice over team Chris; but that’s not a prediction I’d make and I don’t think it’s a prediction you’d make.
(It does seem reasonable to predict something like “in future, Ben will favor team person-who-was-hurt over team person-on-sidelines-who...”. But I don’t think that’s where you’re going with this either?)
I assume this trust difference is due to perceived or real value differences among different EAs, not rampant mistrust of CH among all EAs. Trust would only be shifted around rather than “solved” by having different people in CH roles.
I was not interviewed or involved in this situation but I have asked Julia and Catherine for support on other issues and felt supported. While Chris would share more things with Ben than he would share with CH, I would share more things with the current CH team than I would share with Ben. Chris trusts Ben more; I trust CH more.
I respect many things about Ben based on his writing (and I would be more willing to talk to him now after reading about his experiences with Alice and Chloe), but I would still reach out to CH team members first. It’s not a critique of Ben, it’s just a fact based on our different views and experiences. I assume there are plenty of people like me and plenty of people like Chris, though I don’t know the distribution.
Because EA includes people with a variety of values, uniformity of trust in just 2-3 individuals should not be expected. That said, if someone could hire Ben to be a more trusted CH rep to the people who do trust him more, I’m sure they would. If we could do that for all the sub-communities in EA, we likely would. But Ben (and most others!) doesn’t want that job, as he’s said.
Maybe Chris also prefers Ben’s independence from CEA. I do wonder if there’s an argument for making a CH investigative team more independent from CEA but that has pros and cons.
I don’t see a reason to think that dismantling a team that others do indeed trust will make things better.
It wouldn’t surprise me if active LessWrong members were more favourably disposed towards Ben than other people.
Thank you for doing this Ben. Thank you for navigating a way through all of these issues that have prevented others from doing this.
Thank you for your incredibly hard work here. I know I argued it was a mistake not to wait an additional week, but I want to emphasise that I’m nonetheless really glad that you published this post.
In particular, there are certain claims that I think it was very important for the EA community to have been made aware of and to have the opportunity to deal with. I also hope that the publishing of these claims allows Alice and Chloe to move forward without fear.
Disclaimer: Previously interned at Nonlinear.
There was a recent post about how, depending on the country you live in, working in animal advocacy could be your comparative advantage as an EA. I wonder what country has creating a competitor to CEA Community Health as a comparative advantage?
Is Community Health UK- or US-based? The UK has laws that make winning libel cases easy (at least historically; I think there’s been some movement), but the US has very strong protections for speech. I doubt there are many countries that would be much better than the US, unless the comparative advantage is a non-functional legal system. Perhaps a state with stronger anti-SLAPP laws could work, though?
It seems that a major problem for a competitor to CEA Community Health is that it’s harder for someone outside of the US to have the connections needed to get the necessary information.
The Wikileaks strategy against defamation suits was to have the spokesperson of the organization be a digital nomad, so there’s no address to which you can easily serve papers for lawsuits.
Otherwise, maybe Scandinavian countries or some Eastern European ones could have a good combination of low legal costs of lawsuits and strong free speech laws.
Maybe you can do all the money movement for the org in crypto and have no clear country to which the org belongs.
My assumption was that Community Health work can be done remotely. Indeed, I suspect the current Community Health team works remotely?
Living in a convenient time zone (or having a weird sleep schedule) could be important.
Lack of connections could be a good thing if it helps objectivity. (Recalling the controversy over the connection between Owen C-B and Julia Wise)
I assume most people will volunteer information if someone contacts them and says “we are concerned about your behavior and we are thinking of writing a post about it, but we want to hear from you first”.
I imagine the best way to gain the community’s trust would be to produce detailed writeups like the one Ben produced. Once the new org has gained trust from producing good writeups, perhaps it could maintain a list of people whom it suggests banning from conferences, etc.
Also having minimal assets seems potentially good, so the incentive for a lawsuit is low.
(Side note: It seems fairly likely to me that the current Community Health team actually does a lot of great work and we just don’t hear about it.)
Another thought regarding building trust for a new org. (This is a bit of a tangent.)
In a different subthread, Chris Leong said “there’s information I would have shared with Ben, but not community health.”
Community health work involves both making factual judgments about what occurred, and also value judgments about what constitutes a misdeed or what constitutes an appropriate punishment.
Insofar as people mistrust CEA Community Health, my guess would be that it’s because of disagreements re: the value judgments that CEA Community Health has made. That’s hard to avoid, because value judgments will often be controversial.
If I were to start an org competing with CEA Community Health, I think I would make it a priority to firewall factual judgments from value judgments. Focus first on gaining a reputation for making accurate factual judgments; then, if you do make value judgments, keep them separate from the factual judgments. (E.g. “Our opinion is that...” or “Our random polling procedure suggests the median EA thinks that...” or even “If you have X values, you are likely to feel Y about this situation” for various different values of X.)
I’d like to see more of an analysis of where we are now with what people want from CH, what they think it does, and what it actually does, and to what extent and why gaps exist between these things, before we go too deep into what alternative structures could exist. Currently I don’t feel like we really understand the problem well enough to solve it.
Thanks for all your work Ben.
But a glum aphorism comes to mind: the frame control you can expose is not the true frame control.
I think it’s true that frame control (or, manipulation in general) tends to be designed to make it hard to expose, but, I think the actual issue here is more like “manipulation is generally harder to expose than it is to execute, so, people trying to expose manipulation have to do a lot of disproportionate work.”