Sharing Information About Nonlinear
Added (11th Sept): Nonlinear have commented that they intend to write a response, have written a short follow-up, and claim that they dispute 85 claims in this post. I’ll link to it here if and when it’s published.
Added (11th Sept): One of the former employees, Chloe, has written a lengthy comment personally detailing some of her experiences working at Nonlinear and the aftermath.
Added (12th Sept): I’ve made 3 relatively minor edits to the post. I’m keeping a list of all edits at the bottom of the post, so if you’ve read the post already, you can just go to the end to see the edits.
Added (15th Sept): I’ve written a follow-up post saying that I’ve finished working on this investigation and do not intend to work more on it in the future. The follow-up also has a bunch of reflections on what led up to this post.
Added (12th Dec): Nonlinear has written a lengthy reply, which you can read here.
Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of the org’s overall costs and benefits. I think standard update rules suggest not that you ignore the information, but that you think about how bad you would expect the information to be if I selected for the worst, credible info I could share, and then update based on how much worse (or better) it is than what you expected I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think I have a relatively low bar for publicly sharing critical info about folks in the EA/x-risk/rationalist/etc ecosystem.)
tl;dr: If you want my important updates quickly summarized in four claims-plus-probabilities, jump to the section near the bottom titled “Summary of My Epistemic State”.
When I used to manage the Lightcone Offices, I spent a fair amount of time and effort on gatekeeping — processing applications from people in the EA/x-risk/rationalist ecosystem to visit and work from the offices, and making decisions. Typically this would involve reading some of their public writings, and reaching out to a couple of their references that I trusted and asking for information about them. A lot of the people I reached out to were surprisingly great at giving honest references about their experiences with someone and sharing what they thought about someone.
One time, Kat Woods and Drew Spartz from Nonlinear applied to visit. I didn’t know them or their work well, except that from a few brief interactions Kat Woods seemed high-energy, with a more optimistic outlook on life and work than most people I encounter.
I reached out to some references Kat listed, which were positive to strongly positive. However, I also got a strongly negative reference: someone else whom I informed about the decision told me they knew former employees who felt taken advantage of around things like salary. The former employees reportedly didn’t want to come forward, due to fear of retaliation and a general desire to get away from the whole thing, and the reports felt very vague and hard for me to concretely visualize; nonetheless the person strongly recommended against inviting Kat and Drew.
I didn’t feel like this was a strong enough reason to bar someone from a space — or rather, I did, but vague anonymous descriptions of very bad behavior being sufficient to ban someone is a system that can be straightforwardly abused, so I don’t want to use such a system. Furthermore, I was interested in getting my own read on Kat Woods from a short visit — she had only asked to visit for a week. So I accepted, though I informed her that this weighed on my mind. (This is a link to the decision email I sent to her.)
(After making that decision I was also linked to this ominous yet still vague EA Forum thread, which includes a former coworker of Kat Woods saying they did not like working with her, more comments like the one I received above, and links to a lot of strongly negative Glassdoor reviews for Nonlinear Cofounder Emerson Spartz’s former company “Dose”. Note that more than half of the negative reviews are for the company after Emerson sold it, but this is a concerning one from 2015 (while Emerson Spartz was CEO/Cofounder): “All of these super positive reviews are being commissioned by upper management. That is the first thing you should know about Spartz, and I think that gives a pretty good idea of the company’s priorities… care more about the people who are working for you and less about your public image”. A 2017 review says “The culture is toxic with a lot of cliques, internal conflict, and finger pointing.” There are also far worse reviews about a hellish workplace which are very worrying, but they’re from the period after Emerson’s LinkedIn says he left, so I’m not sure to what extent he is responsible for them.)
On the first day of her visit, another person in the office privately reached out to me saying they were extremely concerned about having Kat and Drew in the office, and that they knew two employees who had had terrible experiences working with them. They wrote (and we later discussed it more):
Their company Nonlinear has a history of illegal and unethical behavior, where they will attract young and naive people to come work for them, and subject them to inhumane working conditions when they arrive, fail to pay them what was promised, and ask them to do illegal things as a part of their internship. I personally know two people who went through this, and they are scared to speak out due to the threat of reprisal, specifically by Kat Woods and Emerson Spartz.
This sparked (for me) a 100-200 hour investigation in which I interviewed 10-15 people who had interacted with or worked for Nonlinear, read many written documents, and tried to piece together some of what had happened.
My takeaway is that indeed their two in-person employees had quite horrendous experiences working with Nonlinear, and that Emerson Spartz and Kat Woods are significantly responsible both for the harmful dynamics and for the employees’ silence afterwards. Over the course of investigating Nonlinear I came to believe that the former employees there had no legal employment, tiny pay, a lot of isolation due to travel, had implicit and explicit threats of retaliation made if they quit or spoke out negatively about Nonlinear, simultaneously received a lot of (in my opinion often hollow) words of affection and claims of familial and romantic love, experienced many further unpleasant or dangerous experiences that they wouldn’t have if they hadn’t worked for Nonlinear, and needed several months to recover with friends and family afterwards before they felt able to return to work.
(Note that I don’t think the pay situation as described in the above quoted text was entirely accurate: the pay was very small — $1k/month — and employees implicitly expected they would get more than they did, but there was mostly not salary ‘promised’ that didn’t get given out.)
After first hearing from them about their experiences, I still felt unsure about what was true — I didn’t know much about the Nonlinear cofounders, and I didn’t know which claims about the social dynamics I could be confident of. To get more context, I spent 30+ hours on calls with 10-15 different people who had some professional dealings with at least one of Kat, Emerson and Drew, trying to build up a picture of the people and the org, and this helped me a lot in building my own sense of them by seeing what was common to many people’s experiences. I talked to many people who interacted with Emerson and Kat who had many active ethical concerns about them and strongly negative opinions, and I also had a 3-hour conversation with the Nonlinear cofounders about these concerns, and I now feel a lot more confident about a number of dynamics that the employees reported.
For most of these conversations I offered strict confidentiality, but (with the ex-employees’ consent) I’ve here written down some of the things I learned.
In this post I do not plan to name most of the people I talked to, but two former employees I will call “Alice” and “Chloe”. I think the people involved mostly want to put this time in their life behind them and I would encourage folks to respect their privacy, not name them online, and not talk to them about it unless you’re already good friends with them.
Conversation with Kat on March 7th, 2023
Returning to my initial experience: on the Tuesday of their visit, I still wasn’t informed about who the people were or any details of what happened, but I found an opportunity to chat with Kat over lunch.
After catching up for ~15 mins, I indicated that I’d be interested in talking about the concerns I raised in my email, and we talked in a private room for 30-40 mins. As soon as we sat down, Kat launched straight into stories about two former employees of hers, telling me repeatedly not to trust one of the employees (“Alice”), that she has a terrible relationship with truth, that she’s dangerous, and that she’s a reputational risk to the community. She said the other employee (“Chloe”) was “fine”.
Kat Woods also told me that she expected to have a policy with her employees of “I don’t say bad things about you, you don’t say bad things about me”. I am strongly against this kind of policy on principle (as I told her then). This and other details (e.g. the salary policy) raised further red flags for me, and I wanted to understand what happened.
Here’s an overview of what she told me:
When they worked at Nonlinear, Alice and Chloe had expenses covered (room, board, food) and Chloe also got a monthly bonus of $1k/month.
Alice and Chloe lived in the same house as Kat, Emerson and Drew. Kat said that she has decided to not live with her employees going forward.
She said that Alice, who incubated her own project (here is a description of the incubation program on Nonlinear’s site), was able to set her own salary, and that Alice almost never talked to her (Kat) or her other boss (Emerson) about her salary.
Kat said she doesn’t trust Alice to tell the truth, and that Alice has a history of “catastrophic misunderstandings”.
Kat told me that Alice was unclear about the terms of the incubation, and said that Alice should have checked in with Kat in order to avoid this miscommunication.
Kat suggested that Alice may have quit in substantial part due to Kat missing a check-in call over Zoom toward the end.
Kat said that she hoped Alice would go by the principle of “I don’t say bad things about you, you don’t say bad things about me” but that the employee wasn’t holding up her end and was spreading negative things about Kat/Nonlinear.
Kat said she gives negative references for Alice, advises people “don’t hire her” and not to fund her, and “she’s really dangerous for the community”.
She said she didn’t have these issues with her other employee Chloe, who she said was “fine, just miscast” for her role of “assistant / operations manager”, which is what led to her quitting. Kat said Chloe was pretty skilled but did a lot of menial labor tasks for Kat that she didn’t enjoy.
The one negative thing she said about Chloe was that she was being paid the equivalent of $75k[1] per year (only $1k/month, the rest via room and board), but that at one point she asked for $75k on top of all expenses being paid and that was out of the question.[2]
A High-Level Overview of The Employees’ Experience with Nonlinear
Background
The core Nonlinear staff are Emerson Spartz, Kat Woods, and Drew Spartz.
Kat Woods has been in the EA ecosystem for at least 10 years, cofounding Charity Science in 2013 and working there until 2019. After a year at Charity Entrepreneurship, in 2021 she cofounded Nonlinear with Emerson Spartz, where she has worked for 2.5 years.
Nonlinear has received $599,000 from the Survival and Flourishing Fund in the first half of 2022, and $15,000 from Open Philanthropy in January 2022.
Emerson primarily funds the project through his personal wealth from his previous company Dose and from selling Mugglenet.com (which he founded). Emerson and Kat are romantic partners, and Emerson and Drew are brothers. They all live in the same house and travel across the world together, jumping from AirBnb to AirBnb once or twice per month. The staff they hire are either remote, or live in the house with them.
My current understanding is that they’ve had ~4 remote interns, 1 remote employee, and 2 in-person employees (Alice and Chloe). Alice was the only person to go through their incubator program.
Nonlinear tried to have a fairly high-commitment culture where the long-term staff are involved very closely with the core family unit, both personally and professionally. However they were given exceedingly little financial independence, and a number of the social dynamics involved seem really risky to me.
Alice and Chloe
Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org around February 2022, and Chloe worked there from January 2022 to July 2022. After talking with them both, I learned the following:
Neither were legally employed by the non-profit at any point.
Chloe’s and Alice’s finances (along with Kat’s and Drew’s) all came directly from Emerson’s personal funds (not from the non-profit). This left them having to get permission for their personal purchases; they were not able to live apart from the family unit while they worked there, and they report feeling very socially and financially dependent on the family during that time.
Chloe’s salary was verbally agreed to come out to around $75k/year. However, she was only paid $1k/month, and otherwise had many basic things compensated, e.g. rent, groceries, travel. This was supposed to make traveling together easier, and supposed to come out to the same salary level. While Emerson did compensate Alice and Chloe with food and board and travel, Chloe does not believe that she was compensated to an amount equivalent to the salary discussed, and I believe no accounting was done for either Alice or Chloe to ensure that any salary matched up. (I’ve done some spot-checks of the costs of their AirBnbs and travel, and Alice/Chloe’s epistemic state seems pretty reasonable to me.)
Alice joined as the sole person in their incubation program. She moved in with them after meeting Nonlinear at EAG and having a ~4 hour conversation there with Emerson, plus a second Zoom call with Kat. Initially while traveling with them she continued her previous job remotely, but was encouraged to quit and work on an incubated org, and after 2 months she quit her job and started working on projects with Nonlinear. Over the 8 months she was there Alice claims she received no salary for the first 5 months, then (roughly) $1k/month salary for 2 months, and then after she quit she received a ~$6k one-off salary payment (from the funds allocated for her incubated organization). She also had a substantial number of emergency health issues covered.[3]
Salary negotiations were a major stressor for Alice’s entire time at Nonlinear. Over her time there she burned through all of her financial runway, and spent a significant portion of her last few months there financially in the red (having more bills and medical expenses than the money in her bank account), in part due to waiting on salary payments from Nonlinear. She eventually quit due to a combination of running exceedingly low on personal funds and wanting financial independence from Nonlinear, and as she quit she gave Nonlinear (at their request) full ownership of the organization that she had otherwise finished incubating.
From talking with both Alice and Nonlinear, it turned out that, since the end of February, Kat Woods had thought of Alice as an employee that she managed, while Emerson had not thought of Alice as an employee at all, but primarily as someone who was traveling with them and collaborating because she wanted to, with the $1k/month plus other compensation being a generous gift.
Alice and Chloe reported that Kat, Emerson, and Drew created an environment in which being a valuable member of Nonlinear included being entrepreneurial and creative in problem-solving — in practice this often meant that getting around standard social rules to get what you wanted was strongly encouraged, including getting someone’s favorite table at a restaurant by pressuring the staff, and finding loopholes in laws pertaining to their work. This also applied internally to the organization. Alice and Chloe report being pressured into or convinced to take multiple actions that they seriously regretted whilst working for Nonlinear, such as becoming very financially dependent on Emerson, quitting being vegan, and driving without a license in a foreign country for many months. (To be clear, I’m not saying that these laws are good and that breaking them is bad; I’m saying that it sounds to me from their reports like they were convinced to take actions that could have had severe personal downsides such as jail time in a foreign country, and that these are actions that they confidently believe they would not have taken had it not been for the strong pressures they felt from the Nonlinear cofounders and the adversarial social environment internal to the company.) I’ll describe these events in more detail below.
They both report taking multiple months to recover after ending ties with Nonlinear, before they felt able to work again, and both describe working there as one of the worst experiences of their lives.
They both report being actively concerned about professional and personal retaliation from Nonlinear for speaking to me, and told me stories and showed me some texts that led me to believe that was a very credible concern.
An assortment of reported experiences
There are a lot of parts of their experiences at Nonlinear that these two staff found deeply unpleasant and hurtful. I will summarize a number of them below.
I think many of the things that happened are warning flags, and some are red lines; I’ll discuss which I consider red lines in my takeaways at the bottom of this post.
My Level of Trust in These Reports
Most of the dynamics were described to me as accurate by multiple different people (low pay, no legal structure, isolation, some elements of social manipulation, intimidation), leading me to have high confidence in them, and Nonlinear themselves confirmed various parts of these accounts.
People whose word I would meaningfully update on about this sort of thing have vouched for Chloe’s word as reliable.
The Nonlinear staff and a small number of other people who visited during Alice and Chloe’s employment have strongly questioned Alice’s trustworthiness and suggested she has told outright lies. Nonlinear showed me texts where people who had spoken with Alice came away with the impression that she was paid $0 or $500, which is inaccurate (she was paid ~$8k on net, as she told me).
That said, I personally found Alice very willing and ready to share primary sources with me upon request (texts, bank info, etc), so I don’t believe her to be acting in bad faith.
In my first conversation with her, Kat claimed that Alice had many catastrophic miscommunications, but that Chloe was (quote) “fine”. In general nobody questioned Chloe’s word and broadly the people who told me they questioned Alice’s word said they trusted Chloe’s.
Personally I found all of their fears of retaliation to be genuine and earnest, and in my opinion justified.
Why I’m sharing these
I do have a strong heuristic that says consenting adults can agree to all sorts of things that eventually hurt them (e.g. in accepting these jobs), even if I paternalistically might think I could have prevented them from hurting themselves. That said, I see clear reasons to think that Kat and Emerson intimidated these people into accepting some of the actions or dynamics that hurt them, so some parts do not seem obviously consensual to me.
Separate from that, I think it’s good for other people to know what they’re getting into, so I think sharing this info is good because it is relevant for many people who have any likelihood of working with Nonlinear. And most importantly to me, I especially want to do it because it seems to me that Nonlinear has tried to prevent this negative information from being shared, so I am erring strongly on the side of sharing things.
(One of the employees also wanted to say something about why she contributed to this post, and I’ve put it in a footnote here.[4])
Highly dependent finances and social environment
Everyone lived in the same house. Emerson and Kat would share a room, and the others would make do with what else was available, often sharing bedrooms.
Nonlinear primarily moved around countries where they typically knew no locals and the employees regularly had nobody to interact with other than the cofounders, and employees report that they were denied requests to live in a separate AirBnb from the cofounders.
Alice and Chloe report that they were advised not to spend time with ‘low value people’, including their families, romantic partners, and anyone local to where they were staying, with the exception of guests/visitors that Nonlinear invited. Alice and Chloe report this made them very socially dependent on Kat/Emerson/Drew and otherwise very isolated.
The employees were very unclear on the boundaries of what would and wouldn’t be paid for by Nonlinear. For instance, Alice and Chloe report that they once spent several days driving around Puerto Rico looking for cheaper medical care for one of them before presenting it to senior staff, as they didn’t know whether medical care would be covered, so they wanted to make sure that it was as cheap as possible to increase the chance of senior staff saying yes.
The financial situation is complicated and messy. This is in large part due to them doing very little accounting. In summary, Alice spent a lot of her last 2 months with less than €1000 in her bank account, sometimes having to phone Emerson for immediate transfers to be able to cover medical costs when she was visiting doctors. At the time of her quitting she had €700 in her account, which was not enough to cover her bills at the end of the month, and left her quite scared. Though to be clear, she was paid back ~€2900 of her outstanding salary by Nonlinear within a week, in part due to her strongly requesting it. (The relevant thing here is the extremely high level of financial dependence and wealth disparity, but Alice does not claim that Nonlinear failed to pay her.)
One of the central reasons Alice says that she stayed on this long was because she was expecting financial independence with the launch of her incubated project, which had $100k allocated to it (fundraised from FTX). In her final month there Kat informed her that, while she would work quite independently, they would keep the money in the Nonlinear bank account and she would have to ask for it, meaning she wouldn’t have the financial independence from them that she had been expecting, and learning this was what caused Alice to quit.
One of the employees interviewed Kat about her productivity advice, and shared notes from this interview with me. The employee writes:
During the interview, Kat openly admitted to not being productive but shared that she still appeared to be productive because she gets others to do work for her. She relies on volunteers who are willing to do free work for her, which is her top productivity advice.
The employees report that some interns later gave strongly negative feedback on working unpaid, and so Kat decided that she would no longer have interns at all.
Severe downsides threatened if the working relationship didn’t work out
In a conversation between Emerson Spartz and one of the employees, the employee asked for advice for a friend who wanted to find another job while employed, without letting their current employer know about their decision to leave yet. Emerson reportedly immediately stated that he now had to update toward thinking the employee herself was considering leaving Nonlinear. He went on to tell her that he gets mad at employees who leave his company for other jobs that are equally good or less good; he said he understands if employees leave for clearly better opportunities. The employee reports that this made her very afraid of leaving the job, both because of the way Emerson updated toward thinking she was trying to leave, and because of the notion of Emerson retaliating against employees who leave for “bad reasons”.
For background context on Emerson’s business philosophy: Alice quotes Emerson advising the following indicator of work progress: “How much value are you able to extract from others in a short amount of time?”[5] Another person who visited described Emerson to me as “always trying to use all of his bargaining power”. Chloe told me that, when she was negotiating salaries with external partners on behalf of Nonlinear, Emerson advised her to offer “the lowest number you can get away with”.
Many different people reported that Emerson Spartz would boast about his business negotiation tactics to employees and visitors. He would encourage his employees to read many books on strategy and influence. When they read the book The 48 Laws of Power he would give examples of him following the “laws” in his past business practices.
One story that he told to both employees and visitors was about his intimidation tactics when involved in a conflict with a former teenage mentee of his, Adorian Deck.
(For context on the conflict, here are links to articles written about it at the time: Hollywood Reporter, Jacksonville, Technology & Marketing Law Blog, and Emerson Spartz’s Tumblr. Plus here is the Legal Contract they signed that Deck later sued to undo.)
In brief, Adorian Deck was a 16 year-old who (in 2009) made a Twitter account called “OMGFacts” that quickly grew to having 300,000+ followers. Emerson reached out to build companies under the brand, and agreed to a deal with Adorian. Less than a year later Adorian wanted out of the deal, claiming that Emerson had made over $100k of profits and he’d only seen $100, and sued to end the deal.
According to Emerson, it turned out that there is a clause unique to California (due to the acting profession in Los Angeles) whereby even if a minor and their parent sign a contract, it isn’t valid unless the signing is overseen by a judge, and so Deck’s side was able to simply pull out of the deal.
But to this day Emerson’s company still owns the OMGfacts brand and companies and Youtube channels.
(Sidenote: I am not trying to make claims about who was “in the right” in these conflicts, I am reporting these as examples of Emerson’s negotiation tactics that he reportedly engages in and actively endorses during conflicts.)
Emerson told versions of this story to different people who I spoke to (people reported him as ‘bragging’).
In one version, he claimed that he strong-armed Adorian and his mother with endless legal threats and they backed down and left him with full control of the brand. This person I spoke to couldn’t recall the details but said that Emerson tried to frighten Deck and his mother, and that they (the person Emerson was bragging to) found it “frightening” and thought the behavior was “behavior that’s like 7 standard deviations away from usual norms in this area.”
Another person was told the story in the context of the 2nd Law from “48 Laws of Power”, which is “Never put too much trust in friends, learn how to use enemies”. The summary of that law includes:
“Be wary of friends—they will betray you more quickly, for they are easily aroused to envy. They also become spoiled and tyrannical… you have more to fear from friends than from enemies.”
For this person who was told the Adorian story, the thing that resonated most was Emerson’s claim that he was in a close, mentoring relationship with Adorian, and leveraged knowing him so well that he would know “exactly where to go to hurt him the most” so that Deck would back off. In that version of the story, Emerson says that Deck’s life-goal was to be a YouTuber (which is indeed Deck’s profession to this day — he produces about 4 videos a month), and that Emerson strategically contacted the YouTubers that Deck most admired, and told them stories of Deck being lazy and trying to take credit for all of Emerson’s work. Emerson reportedly threatened to do more of this until Deck relented, and this is why Deck gave up the lawsuit. The person said to me “He loved him, knew him really well, and destroyed him with that knowledge.”[6]
I later spoke with Emerson about this. He does say that he was working with the top YouTubers to create videos exposing Deck, and this is what brought Deck back to the negotiating table. He says that he ended up renegotiating a contract where Deck receives $10k/month for 7 years. If true, I think this final deal reflects positively on Emerson, though I still believe the people he spoke to were actively scared by their conversations with Emerson on this subject. (I have neither confirmed the existence of the contract nor heard Deck’s side of the story.)
He reportedly told another negotiation story about his response to getting scammed in a business deal. I won’t go into the details, but reportedly he paid a high price for the rights to a logo/trademark, only to find that he had not read the fine print and had been sold something far less valuable. He gave it as an example of the “Keep others in suspended terror: cultivate an air of unpredictability” strategy from The 48 Laws of Power:
Be deliberately unpredictable. Behavior that seems to have no consistency or purpose will keep them off-balance, and they will wear themselves out trying to explain your moves. Taken to an extreme, this strategy can intimidate and terrorize.
In that business negotiation, he (reportedly) acted unhinged. According to the person I spoke with, he said he’d call the counterparty and say “batshit crazy things” and yell at them, with the purpose of making them think he’s capable of anything, including dangerous and unethical things, and eventually they relented and gave him the deal he wanted.
Someone else I spoke to reported him repeatedly saying that he would be “very antagonistic” toward people he was in conflict with. He reportedly gave the example that, if someone tried to sue him, he would be willing to go into legal gray areas in order to “crush his enemies” (a phrase he apparently used a lot), including hiring someone to stalk the person and their family in order to freak them out. (Emerson denies having said this, and suggests that he was probably describing this as a strategy that someone else might use in a conflict that one ought to be aware of.)
After Chloe eventually quit, Alice reports that Kat/Emerson would “trash talk” her, saying she was never an “A player”, criticizing her on lots of dimensions (competence, ethics, drama, etc) in spite of previously primarily giving Chloe high praise. This reportedly happened commonly toward other people who ended or turned down working together with Nonlinear.
Here are some texts between Kat Woods and Alice shortly after Alice had quit, before the final salary had been paid.
A few months later, some more texts from Kat Woods.
(I can corroborate that it was difficult to directly talk with the former employee and it took a fair bit of communication through indirect social channels before they were willing to identify themselves to me and talk about the details.)
Effusive positive emotion not backed up by reality, and other manipulative techniques
Multiple people who worked with Kat reported that Kat had a pattern of enforcing arbitrary short deadlines on people in order to get them to make the decision she wants e.g. “I need a decision by the end of this call”, or (in an email to Alice) “This is urgent and important. There are people working on saving the world and we can’t let our issues hold them back from doing their work.”
Alice reported feeling emotionally manipulated. She said she got constant compliments from the founders that ended up seeming fake.
At the time, Alice wrote down a string of the compliments from Kat Woods (said out loud, and transcribed by Alice). Here is a sampling of them that she shared with me:
“You’re the kind of person I bet on, you’re a beast, you’re an animal, I think you are extraordinary”
“You can be in the top 10, you really just have to think about where you want to be, you have to make sacrifices to be on the top, you can be the best, only if you sacrifice enough”
“You’re working more than 99% because you care more than 99% because you’re a leader and going to save the world”
“You can’t fail if you commit to [this project], you have what it takes, you get sh*t done and everyone will hail you in EA, finally an executor among us.”
Alice reported that she would get these compliments near-daily. She eventually had the sense that this was said in order to get something out of her. She reported that one time, after a series of such compliments, Kat Woods then turned and recorded a near-identical series of compliments into her phone for a different person.
Kat Woods reportedly cried several times while telling Alice that she wanted Alice in her life forever, and that she was worried Alice might one day not be in her life.
Other times, when Alice would come to Kat with money troubles and ask for a pay rise, Alice reports that Kat would tell her that this was a psychological issue and that she actually had safety — for instance, she could move back in with her parents — so she didn’t need to worry.
Alice also reports that she was explicitly advised by Kat Woods to cry and look cute when asking Emerson Spartz for a salary improvement, in order to get the salary improvement that she wanted, and was told this was a reliable way to get things from Emerson. (Alice reports that she did not follow this advice.)
Many other strong personal costs
Alice quit being vegan while working there. She was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days. Alice eventually gave in and ate non-vegan food in the house. She also said that the Nonlinear cofounders marked her quitting veganism as a ‘win’, as they had been arguing that she should not be vegan.
(Nonlinear disputes this, and says that they did go out and buy her some vegan burgers and had some vegan food in the house. They agree that she quit being vegan at this time, and say it was because being vegan was unusually hard due to being in Puerto Rico. Alice disputes that she received any vegan burgers.)
Alice said that this generally matched how she and Chloe were treated in the house, as people generally not worth spending time on, because they were ‘low value’ (i.e. in terms of their hourly wage), and that they were the people who had to do chores around the house (e.g. Alice was still asked to do house chores during the period where she was sick and not eating).
By the same reasoning, the employees reported that they were given 100% of the menial tasks around the house (cleaning, tidying, etc) due to their lower value of time to the company. For instance, if a cofounder spilled food in the kitchen, the employees would clean it up. This was generally reported as feeling very demeaning.
Alice and Chloe reported a substantial conflict within the household between Kat and Alice. Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn’t mind polyamory “on the other side of the world”, but couldn’t stand it right next to her, and probably either Alice would need to become monogamous or Alice should leave the organization. Alice didn’t become monogamous. Alice reports that Kat became increasingly cold over multiple months, and was very hard to work with.[7]
Alice reports then taking a vacation to visit her family, and trying to figure out how to repair the relationship with Kat. Before she went on vacation, Kat requested that Alice bring a variety of illegal drugs across the border for her (some recreational, some for productivity). Alice argued that this would be dangerous for her personally, but Emerson and Kat reportedly argued that it is not dangerous at all and was “absolutely risk-free”. Privately, Drew said that Kat would “love her forever” if she did this. I bring this up as an example of the sorts of requests that Kat/Emerson/Drew felt comfortable making during Alice’s time there.
Chloe was hired by Nonlinear with the intent to have her do executive assistant tasks for Nonlinear (this is the job ad she responded to). After being hired and flying out, Chloe was informed that on a daily basis her job would involve driving, e.g. to get groceries, as they traveled between countries. She explained that she didn’t have a driver’s license and didn’t know how to drive. Kat/Emerson proposed that Chloe learn to drive, and Drew gave her some driving lessons. When Chloe learned to drive well enough in parking lots, she said she was ready to get her license, but she discovered that she couldn’t get a license in a foreign country. Kat/Emerson/Drew reportedly didn’t seem to think that mattered or was even part of the plan, and strongly encouraged Chloe to just drive without a license to do their work, so she drove ~daily for 1-2 months without a license. (I think this involved physical risks for the employee and bystanders, and also substantial risk of going to jail in a foreign country. Also, Chloe basically never drove Emerson/Drew/Kat; this was primarily solo driving for daily errands.) Eventually Chloe had a minor collision with a street post, and was a bit freaked out because she had no idea what the correct protocols were. She reported that Kat/Emerson/Drew didn’t think that this was a big deal, but that Alice (who she was on her way to meet) could clearly see that Chloe was distressed by this, and Alice drove her home, and Chloe then decided to stop driving.
(Car accidents are the second most common cause of death for people in their age group. Insofar as they were pressured to do this and told that this was safe, I think this involved a pretty cavalier disregard for the safety of the person who worked for them.)
Chloe talked to a friend of hers (who is someone I know fairly well, and was the first person to give me a negative report about Nonlinear), reporting that she was very depressed. When Chloe described her working conditions, her friend was horrified, and said she had to get out immediately since, in the friend’s words, “this was clearly an abusive situation”. The friend offered to pay for flights out of the country, and tried to convince her to quit immediately. Eventually Chloe made a commitment to book a flight by a certain date and then followed through with that.
Lax on legalities and adversarial business practices
I did not find the time to write much here. For now I’ll simply pass on my impressions.
I generally got a sense from speaking with many parties that Emerson Spartz and Kat Woods have, respectively, very adversarial and very lax attitudes toward legalities and bureaucracies, with Emerson trying to do as little as possible of what is asked of him. If I asked them to fill out paperwork, I would expect it to be filled out reluctantly at best, and plausibly deceptively or adversarially in some way. In my current epistemic state, I would be actively concerned about any project in the EA or x-risk ecosystems that relied on Nonlinear doing any accounting or having a reliable legal structure that has had the basics checked.
Personally, if I were giving Nonlinear funds for any project whatsoever, including for regranting, I’d expect it’s quite plausible (>20%) that they didn’t spend the funds on what they told me, and instead will randomly spend it on some other project. If I had previously funded Nonlinear for any projects, I would be keen to ask Nonlinear for receipts to show whether they spent their funds in accordance with what they said they would.
This is not a complete list
I want to be clear that this is not a complete list of negative or concerning experiences; it is an illustrative list. There are many other things that I was told about that I am not including here due to factors like length and people’s privacy (on all sides). Also, I split them into categories as I see them; someone else might make a different split.
Perspectives From Others Who Have Worked or Otherwise Been Close With Nonlinear
I had hoped to work this into a longer section of quotes, but it seemed like too much back-and-forth with lots of different people. I encourage folks to leave comments with their relevant impressions.
For now I’ll summarize some of what I learned as follows:
Several people gave reports consistent with Alice and Chloe being very upset and distressed both during and after their time at Nonlinear, and reaching out for help, and seeming really strongly to want to get away from Nonlinear.
Some unpaid interns (who worked remotely for Nonlinear for 1-3 months) said that they regretted not getting paid, and that when they brought it up with Kat Woods she said some positive sounding things and they expected she would get back to them about it, but that never happened during the rest of their internships.
Many people who visited had fine experiences with Nonlinear, others felt much more troubled by the experience.
One person said to me about Emerson/Drew/Kat:
“My subjective feeling is like ‘they seemed to be really bad and toxic people’. And they at the same time have a decent amount of impact. After I interacted repeatedly with them I was highly confused about the dilemma of people who are mistreating other people, but are doing some good.”
Another person said about Emerson:
“He seems to think he’s extremely competent, a genius, and that everyone else is inferior to him. They should learn everything they can from him, he has nothing to learn from them. He said things close to this explicitly. Drew and (to a lesser extent) Kat really bought into him being the new messiah.”
One person who has worked for Kat Woods (not Alice or Chloe) said the following:
I love her as a person, hate her as a boss. She’s fun, has a lot of ideas, really good socialite, and I think that that speaks to how she’s able to get away with a lot of things. Able to wear different masks in different places. She’s someone who’s easy to trust, easy to build social relationships with. I’d be suspicious of anyone who gives a reference who’s never been below Kat in power.
Ben: Do you think Kat is emotionally manipulative?
I think she is. I think it’s a fine line about what makes an excellent entrepreneur. Do whatever it takes to get a deal signed. To get it across the line. Depends a lot on what the power dynamics are, whether it’s a problem or not. If people are in equal power structures it’s less of a problem.
There were other informative conversations that I won’t summarize. I encourage folks who have worked with or for Nonlinear to comment with their perspective.
Conversation with Nonlinear
After putting the above together, I got permission from Alice and Chloe to publish, and to share the information I had learned as I saw fit. So I booked a call with Nonlinear, sent them a long list of concerns, and talked with Emerson, Kat and Drew for ~3 hours to hear them out.
Paraphrasing Nonlinear
On the call, they said their primary intention in the call was to convince me that Alice is a bald-faced liar. They further said they’re terrified of Alice making false claims about them, and that she is in a powerful position to hurt them with false accusations.
Afterwards, I wrote up a paraphrase of their responses. I shared it with Emerson and he replied that it was a “Good summary!”. Below is the paraphrase of their perspective on things that I sent them, with one minor edit for privacy. (The below is written as though Nonlinear is speaking, but to be clear, this is 100% my writing.)
We hired one person, and kind-of-technically-hired a second person. In doing so, our intention wasn’t just to have employees, but also to have members of our family unit who we traveled with and worked closely together with in having a strong positive impact in the world, and were very personally close with.
We nomadically traveled the globe. This can be quite lonely so we put a lot of work into bringing people to us, often having visitors in our house who we supported with flights and accommodation. This probably wasn’t perfect but in general we’d describe the environment as “quite actively social”.
For the formal employee, she responded to a job ad, we interviewed her, and it all went the standard way. For the gradually-employed employee, we initially just invited her to travel with us and co-work, as she seemed like a successful entrepreneur and aligned in terms of our visions for improving the world. Over time she quit her existing job and we worked on projects together and were gradually bringing her into our organization.
We wanted to give these employees a pretty standard amount of compensation, but also mostly not worry about negotiating minor financial details as we traveled the world. So we covered basic rent/groceries/travel for these people. On top of that, to the formal employee we gave a $1k/month salary, and to the semi-formal employee we eventually did the same too. For the latter employee, we roughly paid her ~$8k over the time she worked with us.
From our perspective, the gradually-hired employees gave a falsely positive impression of their financial and professional situation, suggesting they’d accomplished more than they had and were earning more than they had. They ended up being fairly financially dependent on us and we didn’t expect that.
Eventually, after about 6-8 months each, both employees quit. Overall this experiment went poorly from our perspective and we’re not going to try it in future.
For the formal employee, we’re a bit unsure about why exactly she quit, even though we did do exit interviews with her. She said she didn’t like a lot of the menial work (which is what we hired her for), but didn’t say that money was the problem. We think it is probably related to everyone getting Covid and being kind of depressed around that time.
For the other employee, relations got bad for various reasons. She ended up wanting total control of the org she was incubating with us, rather than 95% control as we’d discussed, but that wasn’t on the table (the org had $250k dedicated to it that we’d raised!), and so she quit.
When she was leaving, we were financially supportive. On the day we flew back from the Bahamas to London, we paid all our outstanding reimbursements (~$2900). We also offered to pay for her to have a room in London for a week as she got herself sorted out. We also offered her rooms with our friends if she promised not to tell them lies about us behind our backs.
After she left, we believe she told a lot of lies and inaccurate stories about us. For instance, two people we talked to had the impression that we either paid her $0 or $500, which is demonstrably false. Right now we’re pretty actively concerned that she is telling lots of false stories in order to paint us in a negative light, because the relationship didn’t work out and she didn’t get control over her org (and because her general character seems drama-prone).
There were some points around the experiences of these employees that we want to respond to.
First; the formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You might think this is a substantial legal risk, but basically it isn’t, as you can see here, the general range of fines for issues around not-having-a-license in Puerto Rico is in the range of $25 to $500, which just isn’t that bad.
Second; the semi-employee said that she wasn’t supported in getting vegan food when she was sick with Covid, and this is why she stopped being vegan. This seems also straightforwardly inaccurate, we brought her potatoes, vegan burgers, and had vegan food in the house. We had been advising her to 80/20 being a vegan and this probably also weighed on her decision.
Third; the semi-employee was also asked to bring some productivity-related and recreational drugs over the border for us. In general we didn’t push hard on this. For one, this is an activity she already did (with other drugs). For two, we thought it didn’t need prescription in the country she was visiting, and when we found out otherwise, we dropped it. And for three, she used a bunch of our drugs herself, so it’s not fair to say that this request was made entirely selfishly. I think this just seems like an extension of the sorts of actions she’s generally open to.
Finally, multiple people (beyond our two in-person employees) told Ben they felt frightened or freaked out by some of the business tactics in the stories Emerson told them. To give context and respond to that:
I, Emerson, have had a lot of exceedingly harsh and cruel business experience, including getting tricked or stabbed-in-the-back. Nonetheless, I have often prevailed in these difficult situations, and learned a lot of hard lessons about how to act in the world.
The skills required to do so seem to me lacking in many of the earnest-but-naive EAs that I meet, and I would really like them to learn how to be strong in this way. As such, I often tell EAs these stories, selecting for the most cut-throat ones, and sometimes I try to play up the harshness of how you have to respond to the threats. I think of myself as playing the role of a wise old mentor who has had lots of experience, telling stories to the young adventurers, trying to toughen them up, somewhat similar to how Prof Quirrell[8] toughens up the students in HPMOR through teaching them Defense Against the Dark Arts, to deal with real monsters in the world.
For instance, I tell people about my negotiations with Adorian Deck about the OMGFacts brand and Twitter account. We signed a good deal, but a California technicality meant he could pull from it and take my whole company, which is a really illegitimate claim. They wouldn’t talk with me, so I was working with top YouTubers to make some videos publicizing and exposing his bad behavior. This got him back to the negotiation table and we worked out a deal where he got $10k/month for seven years, which is not a shabby deal, and meant that I got to keep my company!
It had been reported to Ben that Emerson said he would be willing to go into legal gray areas in order to “crush his enemies” (if they were acting in very reprehensible and norm-violating ways). Emerson thinks this has got to be a misunderstanding, that he was talking about what other people might do to you, which is a crucial thing to discuss and model.
(Here I cease pretending-to-be-Nonlinear and return to my own voice.)
My thoughts on the ethics and my takeaways
Summary of My Epistemic State
Here are my probabilities for a few high-level claims relating to Alice and Chloe’s experiences working at Nonlinear.
Emerson Spartz employs more vicious and adversarial tactics in conflicts than 99% of the people active in the EA/x-risk/AI Safety communities: 95%
Alice and Chloe were more dependent on their bosses (combining financial, social, and legal dependence) than employees are at literally every other organization I am aware of in the EA/x-risk/AI Safety ecosystem: 85%[9]
In working at Nonlinear, Alice and Chloe both took on physical and legal risks that they strongly regretted, were hurt emotionally, came away financially worse off, gained ~no professional advancement from their time at Nonlinear, and took several months after the experience to recover: 90%
Alice and Chloe both had credible reason to be very scared of retaliation for sharing negative information about their work experiences, far beyond that experienced at any other org in the EA/x-risk/AI Safety ecosystem: 85%[10]
General Comments From Me
Going forward, I think anyone who works with Kat Woods, Emerson Spartz, or Drew Spartz should sign legal employment contracts, and make sure all financial agreements are written down in emails and messages that the employee has possession of. I think all people considering employment by the above people at any non-profits they run should take salaries wired to their bank accounts, and not do unpaid work or work compensated primarily by means other than a salary wired to their bank accounts.
I expect that if Nonlinear does more hiring in the EA ecosystem it is more-likely-than-not to chew up and spit out other bright-eyed young EAs who want to do good in the world. I relatedly think that the EA ecosystem doesn’t have reliable defenses against such predators. These are not the first, nor sadly the last, bright-eyed well-intentioned people who I expect to be taken advantage of and hurt in the EA/x-risk/AI safety ecosystem, as a result of falsely trusting high-status people at EA events to be people who will treat them honorably.
(Personal aside: Regarding the texts from Kat Woods shown above — I have to say, if you want to be allies with me, you must not write texts like these. A lot of bad behavior can be learned from, fixed, and forgiven, but if you take actions to prevent me from being able to learn that the bad behavior is even going on, then I have to always be worried that something far worse is happening that I’m not aware of, and indeed I have been quite shocked to discover how bad people’s experiences were working for Nonlinear.)
My position is not greatly changed by the fact that Nonlinear is overwhelmingly confident that Alice is a “bald-faced liar”. From my current perspective, they probably have some legitimate grievances against her, but that in no way makes it less costly to our collective epistemology to incentivize her to not share her own substantial grievances. I think the magnitude of the costs they imposed on their employees-slash-new-family are far higher than I or anyone I know would have expected was happening, and they intimidated both Alice and Chloe into silence about those costs. If it were only Alice then I would give this perspective a lot more thought/weight, but Chloe reports a lot of the same dynamics and similar harms.
To my eyes, the people involved were genuinely concerned about retaliation for saying anything negative about Nonlinear, including the workplace/household dynamics and how painful their experiences had been for them. That’s a red line in my book, and I will not personally work with Nonlinear in the future because of it, and I recommend their exclusion from any professional communities that wish to keep up the standard of people not being silenced about extremely negative work experiences. “First they came for the epistemology. We don’t know what happened after that.”
Specifically, the things that cross my personal lines for working with someone or viewing them as an ally:
Kat Woods attempted to offer someone who was really hurting, and in a position of strong need, very basic resources with the requirement of not saying bad things about her.
Kat Woods’ texts that read to me as a veiled threat to destroy someone’s career for sharing negative information about her.
Emerson Spartz reportedly telling multiple people he will use questionably legal methods in order to crush his enemies (such as spurious lawsuits and that he would hire a stalker to freak someone out).
Both employees were actively afraid that Emerson Spartz would retaliate, potentially using tactics like spurious lawsuits and further things that are questionably legal, and generally try to destroy their careers and leave them with no resources. It seems to me (given the other reports I’ve heard from visitors) that Emerson behaved in a way that quite understandably led them to this epistemic state, and I consider it his responsibility not to give his employees this impression.
I think in almost any functioning professional ecosystem, there should be some general principles like:
If you employ someone, after they work for you, unless they’ve done something egregiously wrong or unethical, they should be comfortable continuing to work and participate in this professional ecosystem.
If you employ someone, after they work for you, they should feel comfortable talking openly about their experience working with you to others in this professional ecosystem.
Any breaking of the first rule is very costly, and any breaking of the second rule is by-default a red-line for me not being willing to work with you.
I do think that there was a nearby world where Alice, having run out of money, gave in and stayed at Nonlinear, begging them for money, and becoming a fully dependent and subservient house pet — a world where we would not have learned the majority of this information. I think we’re not that far from that world, I think a weaker person than Alice might have never quit, and it showed a lot of strength to quit at the point where you have ~no runway left and you have heard the above stories about the kinds of things Emerson Spartz considers doing to former business partners that he is angry with.
I’m very grateful to the two staff members involved for coming forward and eventually spending dozens of hours clarifying and explaining their experiences to me and others who were interested. To compensate them for their courage, the time and effort spent to talk with me and explain their experiences at some length, and their permission to allow me to publish a lot of this information, I (using personal funds) am going to pay them each $5,000 after publishing this post.
I think that whistleblowing is generally a difficult experience, with a lot riding on the fairly personal account from fallible human beings. It’s neither the case that everything reported should be accepted without question, nor that if some aspect is learned to be exaggerated or misreported that the whole case should be thrown out. I plan to reply to further questions here in the comments, I also encourage everyone involved to comment insofar as they wish to answer questions or give their own perspective on what happened.
Addendum
This is a list of edits made post-publication.
“Alice worked there from November 2021 to June 2022” became “Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org from around February”
“using Lightcone funds” became “using personal funds”
“I see clear reasons to think that Kat, Emerson and Drew intimidated these people” became “I see clear reasons to think that Kat and Emerson intimidated these people”.
- ^
In a later conversation, Kat clarified that the actual amount discussed was $70k.
- ^
Comment from Chloe:
In my resignation conversation with Kat, I was worried about getting into a negotiation conversation where I wouldn’t have strong enough reasons to leave. To avoid this, I started off by saying that my decision to quit is final, and not an ultimatum that warrants negotiation of what would make me want to stay. I did offer to elaborate on the reasons for why I was leaving. As I was explaining my reasons, she still insisted on offering me solutions to things I would say I wanted, to see if that would make me change my mind anyway. One of the reasons I listed was the lack of financial freedom in not having my salary be paid out as a salary which I could allocate towards decisions like choices in accommodation for myself, as well as meals and travel decisions. She wanted to know how much I wanted to be paid. I kept evading the question since it seemed to tackle the wrong part of the problem. Eventually I quoted back the number I had heard her reference to when she’d talk about what my salary is equivalent to, suggesting that if they’d pay out the 75k as a salary instead of the compensation package, then that would in theory solve the salary issue. There was a miscommunication around her believing that I wanted that to be paid out on top of the living expenses—I wanted financial freedom and a legal salary. I believe the miscommunication stems from me mentioning that salaries are more expensive for employers to pay out as they also have to pay tax on the salaries, e.g. social benefits, pension (depending on the country). Kat was surprised to hear that and understood it as me wanting a 75k salary before taxes. I do not remember that conversation concluding with her thinking I wanted everything paid for and also 75k.
- ^
Note that Nonlinear and Alice gave conflicting reports about which month she started getting paid, February vs April. It was hard for me to check as it’s not legally recorded and there’s lots of bits of monetary payments unclearly coded between them.
- ^
Comment from one of the employees:
I had largely moved on from the subject and left the past behind when Ben started researching it to write a piece with his thoughts on it. I was very reluctant at first (and frightened at the mere thought), and frankly, will probably continue to be. I did not agree to post this publicly with any kind of malice, rest assured. The guiding thought here is, as Ben asked, “What would you tell your friend if they wanted to start working for this organization?” I would want my friend to be able to make their own independent decision, having read about my experience and the experiences of others who have worked there. My main goal is to create a world where we can all work together towards a safe, long and prosperous future, and anything that takes away from that (like conflict and drama) is bad and I have generally avoided it. Even when I was working at Nonlinear, I remember saying several times that I just wanted to work on what was important and didn’t want to get involved in their interpersonal drama. But it’s hard for me to imagine a future where situations like that are just overlooked and other people get hurt when it could have been stopped or flagged before. I want to live in a world where everyone is safe and cared for. For most of my life I have avoided learning about anything to do with manipulation, power frameworks and even personality disorders. By avoiding them, I also missed the opportunity to protect myself and others from dangerous situations. Knowledge is the best defense against any kind of manipulation or abuse, so I strongly recommend informing yourself about it, and advising others to do so too.
- ^
This is something Alice showed me was written in her notes from the time.
- ^
I do not mean to make a claim here about who was in the right in that conflict. And somewhat in Emerson’s defense, I think some of people’s most aggressive behavior comes out when they themselves have just been wronged — I expect this is more extreme behavior than he would typically respond with. Nonetheless, it seems to me that there was reportedly a close, mentoring relationship — Emerson’s tumblr post on the situation says “I loved Adorian Deck” in the opening paragraph — but that later Emerson reportedly became bitter and nasty in order to win the conflict, involving threatening to overwhelm someone with lawsuits and legal costs, and figure out the best way to use their formerly close relationship to hurt them emotionally, and reportedly gave this as an example of good business strategy. I think this sort of story somewhat justifiably left people working closely with Emerson very worried about the sort of retaliation he might carry out if they were ever in a conflict, or he were to ever view them as an ‘enemy’.
- ^
After this, there were further reports of claims of Kat professing her romantic love for Alice, and also precisely opposite reports of Alice professing her romantic love for Kat. I am pretty confused about what happened.
- ^
Note that during our conversation, Emerson brought up HPMOR and the Quirrell similarity, not me.
- ^
With the exception of some FTX staff.
- ^
One of the factors lowering my number here is that I’m not quite sure what the dynamics are like at places like Anthropic and OpenAI — who have employees sign non-disparagement clauses, and are involved in geopolitics — or whether they would even be included. I also could imagine finding out that various senior people at CEA/EV are terrified of information coming out about them. Also note that I am not including Leverage Research in this assessment.
On behalf of Chloe and in her own words, here’s a response that might illuminate some pieces that are not obvious from Ben’s post — as his post relies more on factual and object-level evidence, rather than the whole narrative.
“Before Ben published, I found thinking about or discussing my experiences very painful, as well as scary—I was never sure whom it was safe to share any of this with. Now that it’s public, it feels like it’s in the past and I’m able to talk about it. Here are some of my experiences I think are relevant to understanding what went on. They’re harder to back up with chatlogs or other written evidence—take them as you want, knowing these are stories more than claims clearly backed up by evidence. I think people should be able to form their own opinion on this, and I believe they should have the appropriate information to do so.
I want to emphasize *just how much* the entire experience of working for Nonlinear was them creating all kinds of obstacles, and me being told that if I’m clever enough I can figure out how to do these tasks anyway. It’s not actually about whether I had a contract and a salary (even then, the issue wasn’t the amount or even the legality, it was that they’d be verbally unclear about what the compensation entailed, eg Emerson saying that since he bought me a laptop in January under the premise of “productivity tool”, that meant my January salary was actually higher than it would have been otherwise, even though it was never said that the laptop was considered part of the compensation when we discussed it, and I had not initiated the purchase of it), or whether I was asked to do illegal things and what constitutes okay-illegal vs not-okay-illegal—it’s the fact that they threw some impossibly complex setup at us, told us we can have whatever we want, if we are clever enough with negotiating (by us I mostly mean me and Alice). And boy did we have to negotiate. I needed to run a medical errand for myself in Puerto Rico, and the amount of negotiating I needed to do to get them to drive me to a different city that was a 30 min drive away was wild. I needed to go there three times, and I knew the answer to asking anyone to drive me would be that it’s not worth their time; at the same time, getting taxis was difficult while we were living in isolated mountain towns, and obviously it would have been easiest to have Drew or Emerson drive me. I looked up tourism things to do in that city, and tried to use things like “hey this city is the only one that has a store that sells Emerson’s favorite breakfast cereal and I could stock up for weeks if we could just get there somehow”. Also—this kind of going out of your way to get what you wanted or needed was rewarded with the Nonlinear team members giving you “points” or calling you a “negotiation genius”.
Of course I was excited to learn how to drive—I could finally get my tasks done and take care of myself, and have a means to get away from the team when it became too much to be around them. And this is negotiating for just going to a city that’s a 30 minute drive away—three times. Imagine how much I had to negotiate to get someone to drive me to a grocery store to do weekly groceries, and then add to that salary or compensation package negotiations and negotiate whether I could be relieved from having to learn how to buy weed for Kat in every country we went to. I’m still not sure how to concisely describe the frame they prescribed to us (here’s a great post on frame control by Aella that seems relevant https://aella.substack.com/p/frame-control ), but most saliently it included the heavy pep talk of how we could negotiate anything we wanted if we were clever enough, and if we failed—it was implied that we simply weren’t good enough. People get prescribed an hourly rate, based on how much their time is worth at Nonlinear. On the stack of who has most value, it goes Emerson, Kat, Drew, Alice, Chloe. All this in the context where we were isolated, and our finances mostly controlled by Emerson. I’ll add a few stories from my perspective, of how this plays out in practice.
Note: These stories are roughly 2 to 3 months into my job, this means 2 to 3 months of needing to find clever solutions to problems that ought to be simple, as well as ongoing negotiations with members of the Nonlinear team, to get the basics of my job done.
(⅙)”
…
“When we were flying to the Bahamas from St Martin, I was given the task of packing up all of Nonlinear’s things (mostly Kat & Emerson’s) into 5 suitcases. Emerson wanted the suitcases to be below the allowed weight limit if possible. I estimated that the physical weight of their items would exceed the limit of 5 suitcases. I packed and repacked the suitcases 5 to 6 times; after each time, Emerson would check my work, say the suitcases were too heavy, and teach me a new rule according to which to throw things out. Eventually I got it done to a level that Emerson was satisfied with. He and Kat had been working outside the entire time.
In a previous packing scenario I had packed some things like charging cables and similar daily-used items too fast, which Emerson did not appreciate, so this time I had left some everyday things around for him to use and grab as the last things. When I said we were packed and ready to go, he looked around the house and got angry at all the things that were lying around that he now had to pack himself—I remember him shouting in anger. I was packing up the cars and didn’t deal with him, just let him be mad in the house. This got Drew pretty frustrated as well; he had witnessed me repacking five bags 5-6 times and had also tried to negotiate with Emerson about ditching some things that he refused to leave behind (we carried around 2 mountain bikes, and Emerson tasked me with packing in a beach chair as well). When we got into the car, which was packed to the brim, Drew got to driving, and as we drove out, he shouted really loudly out of anger. The anger was so real that I parsed it as him making a joke because I could not fathom how angry he was—my immediate response was to laugh. I quickly realized he was serious, stopped, and apologized, to which he responded with something like “no I am actually mad, and you should be too!”—related to how much we had to pack up. (2/6)“
…
“Kat had asked me to buy her a specific blonde hair coloring; at the time she told me it was urgent since she had grown out her natural hair quite a lot. We were living in St Martin, where they simply do not sell extreme blonde coloring in the specific shade I needed to find, and Amazon does not deliver to St Martin. I also needed to grab this hair coloring while doing weekly groceries. One important guideline I needed to follow for groceries was that it had to be roughly a 10 min car trip, but they were frequently disappointed if I didn’t get all their necessities from local stores, so I naturally ventured further sometimes to make sure I got what they asked for.
I ended up spending hours looking for that blonde hair coloring in different stores, pharmacies, and beauty stores, across multiple weekly grocery trips. I kept Kat updated on this. Eventually I found the exact shade she asked for—Kat was happy to receive this but proceeded not to color her hair with it for another two weeks. Then we had to pack up to travel to the Bahamas. The packing was difficult (see previous paragraph); we were struggling to throw unnecessary things out. The hair color had seemed pretty important, and I thought the Bahamas would also be a tricky place to buy that hair color, so I had packed it in. We get to the airport, waiting in the queue to check in the suitcases. Kat decides to open up the suitcases to see which last minute things we can throw out to make the suitcases lighter. She reaches for the hair color and happily throws it out. My self-worth is in a place where I witness her doing this (she knows how much effort I put into finding this), and I don’t even think to say anything in protest—it just feels natural that my work hours are worth just this much. It’s depressing. (3/6)”
…
“There was a time during our stay at St Martin when I was overwhelmed from living with and seeing only the same people every single day and needed a day off. Sometimes I’d become so overwhelmed I became really bad at formulating sentences and being in social contexts, so I’d take a day off and go somewhere on the island where I could be on my own, away from the whole team—I’ve never before or after experienced an actual inability to formulate sentences just from being around the same people for too long. This was one of these times. We had guests over, and the team with the guests had decided in the morning that it was a good vacation day for going to St Barths. I lay low because I thought, since it was also a weekend day for me, the trip would not be mine to organize (me and Kat would take off Tuesdays and Saturdays; these were sometimes called weekend or vacation days).
Emerson approaches me to ask if I can set up the trip. I tell him I really need the vacation day for myself. He says something like “but organizing stuff is fun for you!”. I don’t know how to respond nor how to get out of it, I don’t feel like I have the energy to negotiate with him so I start work, hoping that if I get it done quickly, I can have the rest of the day for myself.
I didn’t have time to eat, had just woken up, and the actual task required rallying up 7 people and figuring out their passport situation as well as whether they wanted to join. Going to St Barths means entering a different country, which meant that I needed to check passport as well as covid requirements and whether all 7 people could actually join. I needed to quickly book some ferry tickets there and back for the day, rally the people to the cars and get to the ferry—all of this within less than an hour. We were late and annoyed the ferry employees—but this is one of the things generally ignored by the Nonlinear team; us being late but getting our way is a sign of our agency and how we aren’t NPCs that just follow the prescribed ferry times—they’re negotiable after all, if we can get away with getting to St Barths anyway.
I thought my work was done. We got to the island, my plan was to make the most of it and go on my own somewhere but Emerson says he wants an ATV to travel around with and without an ATV it’s a bit pointless. Everyone sits down at a lovely cafe to have coffee and chit chat, while I’m running around to car and ATV rentals to see what they have to offer. All ATVs have been rented out—it’s tourist season. I check back in, Emerson says I need to call all the places on the island and keep trying. I call all the places I can find, this is about 10 places (small island). No luck. Eventually Emerson agrees that using a moped will be okay, and that’s when I get relieved from my work tasks.
I did tell Kat in my next meeting with her that it’s not okay for me to have to do work tasks while I’m on my weekends, and she agreed, but we struggled to figure out a solution that would make sense. It remained more of a “let’s see how this plays out”. (4/6)”
…
“One of my tasks was to buy weed for Kat, in countries where weed is illegal. When I kept not doing it and saying that it was because I didn’t know how to buy weed, Kat wanted to sit me down and teach me how to do it. I refused and asked if I could just not do it. She kept insisting that I’m saying that because I’m being silly and worry too much and that buying weed is really easy, everybody does it. I wasn’t comfortable with it and insisted on not doing this task. She said we should talk about it when I’m feeling less emotional about it. We never got to that discussion because in the next meeting I had with her I quit my job. (⅚)”
…
“The aftermath of this experience lasted for several months. Working and living with Nonlinear had me forget who I was, and lose more self worth than I had ever lost in my life. I wasn’t able to read books anymore, nor keep my focus in meetings for longer than 2 minutes, I couldn’t process my own thoughts or anything that took more than a few minutes of paying attention. I was unable to work for a few months. I was scared to share my experiences, terrified that Emerson or Kat would retaliate. While working with them I had forgotten that I used to be excited for work, and getting new tasks would spark curiosity on how to solve them best, rather than feelings of overwhelm. I stopped going for runs and whenever I did exercise I wasn’t able to finish my routine—I thought it meant I was just weak. Emerson held such a strong grasp of financial control over me that I actually forgot that I had saved up money from my previous jobs, to the extent of not even checking my bank statements. I seriously considered leaving effective altruism, as well as AI safety, if it meant that I could get away from running into them, and get away from a tolerance towards such behavior towards people.
It’s really not about the actual contracts, salaries, illegal jobs. Even with these stories, I’m only able to tell some of them that I can wrap my head around. I spent months trying to figure out how to empathize with Kat and Emerson, how they’re able to do what they’ve done, to Alice, to others they claimed to care a lot about. How they can give so much love and support with one hand and say things that even if I’d try to model “what’s the worst possible thing someone could say”, I’d be surprised how far off my predictions would be. I think the reader should make up their own mind on this. Read what Nonlinear has to say. Read what Ben says, what these comments add to it.
People trying their best can sometimes look absolutely terrifying, but actions need to have consequences nonetheless. This isn’t an effect of weird living and working conditions either, I believe it goes deeper than that—I am still happy to hear that Nonlinear has since abandoned at least that part of their “experiment”. But Nonlinear also isn’t my idea of effective altruism or doing good better and I hope we can keep this community safer than it was for me and Alice, for all the current and new members to come along in the future. (6/6) ”
I confirm that this is Chloe, who contacted me through our standard communication channels to say she was posting a comment today.
Thank you very much for sharing, Chloe.
Ben, Kat, Emerson, and readers of the original post have all noticed that the nature of Ben’s process leads to selection against positive observations about Nonlinear. I encourage readers to notice that the reverse might also be true. Examples of selection against negative information include:
Ben has reason to exclude stories that are less objective or have a weaker evidence base. The above comment is a concrete example of this.
There’s also something related here about the supposed unreliability of Alice as a source: Ben needs to include this to give a complete picture/because other people (in particular the Nonlinear co-founders) have said this. I strongly concur with Ben when he writes that he “found Alice very willing and ready to share primary sources [...] so I don’t believe her to be acting in bad faith.” Personally, my impression is that people are making an incorrect inference about Alice from her characteristics (that are perhaps correlated with source-reliability in a large population, but aren’t logically related, and aren’t relevant in this case).
To the extent that you expect other people to have been silenced (e.g. via anticipated retaliation), you might expect not to hear relevant information from them.
To the extent that you expect Alice and Chloe to have had burnout-style experiences, you might expect not to read clarifications on or news about negative experiences.
Until this post came out, this was true of ~everything in the post.
There is a reason the post was published 1.5 years after the relevant events took place—people involved in the events really do not want to spend further mental effort on this.
😬 There’s a ton of awful stuff here, but these two parts really jumped out at me. Trying to push past someone’s boundaries by imposing a narrative about the type of person they are (‘but you’re the type of person who loves doing X!’ ‘you’re only saying no because you’re the type of person who worries too much’) is really unsettling behavior.
I’ll flag that this is an old remembered anecdote, and those can be unreliable, and I haven’t heard Emerson or Kat’s version of events. But it updates me, because Chloe seems like a pretty good source and this puzzle piece seems congruent with the other puzzle pieces.
E.g., the vibe here matches something that creeped me out a lot about Kat’s text message to Alice in the OP, which is the apparent attempt to corner/railroad Alice into agreement via a bunch of threats and strongly imposed frames, followed immediately by Kat repeatedly stating as fact that Alice will of course agree with Kat: “[we] expect you will do the same moving forward”, “Sounds like you’ve come to the same conclusion”, “It sounds like we’re now on the same page about this”.
😢 Jesus.
This sounds like a terribly traumatic experience. I’m so sorry you went through this, and I hope you are in a better place and feel safer now.
Your self-worth is so, so much more than how well you can navigate what sounds like a manipulative, controlling, and abusive work environment.
It sounds like despite all of this, you’ve tried to be charitable to people who have treated you unfairly and poorly—while this speaks to your compassion, I know this line of thought can often lead to things that feel like you are gaslighting yourself, and I hope this isn’t something that has caused you too much distress.
I also hope that Effective Altruism as a community becomes a safer space for people who join it aspiring to do good, and I’m grateful for your courage in sharing your experiences, despite it (very reasonably!) feeling painful and unsafe for you.[1] All the best for whatever is next, and I hope you have access to enough support around you to help with recovering what you’ve lost.
============
[Meta: I’m aware that there will likely be claims around the accuracy of these stories, but I think it’s important to acknowledge the potential difficulty of sharing experiences of this nature with a community that rates itself highly on truth-seeking, possibly acknowledging your own lived experience as “stories” accordingly; as well as the potential anguish it might be for these experiences to have been re-lived over the past year and possibly again in the near future, if/when these claims are dissected, questioned, and contested.]
That being said, your experience would be no less valid had you chosen not to share these. And even though I’m cautiously optimistic that the EA community will benefit from you sharing these experiences, your work here is supererogatory, and improving Nonlinear’s practices or the EA community’s safety is not your burden to bear alone. In a different world it would have been totally reasonable for you to not have shared this, if that was what you needed to do for your own wellbeing. I guess this comment is more for past Chloes or other people with similar experiences who may have struggled with these kinds of decisions than it is for Chloe today, but thought it was worth mentioning.
Wow. Sincere apologies you went through that. Even if Kat and Emerson thought they were being reasonable (no comment), and/or even if bad instances were few and far between (no comment), such instances would affect me and most people I know very deeply. Probably including the multi-month hangover and residual pain today. And that matters, and is something we need managers/bosses/colleagues to consider. Even if it was only painful at the time, that would matter. Really sorry.
P.S. I previously put a “changed my mind” react to this comment, but I really meant “brought new things to mind”. I’ve put them in other comments.
I hope this doesn’t seem heartless, but: Given the degree of contested narratives in this affair, can someone not-anonymous with access to Chloe confirm that this account speaks for her?
(i think it probably does, to be clear, but also think it’s worth checking)
Confirmed, this is Chloe.
@Ben Pace
Thanks for sharing your story.
I think it’s brave for you to be coming forward and sharing your experiences. I’m really sorry this happened to you, but hopefully, we can learn from this as a community so that no one ends up in a situation like this again.
Hello Chloe, I’m sorry to hear these stories. The world is way better and cooler after encountering such bad behaviors. Your courage to share these stories is part of the steps to becoming a better person. Healing is always in the horizon, trust the process.
Very interesting how people disliked/disagreed with my feedback after reading Chloe’s narration. I have been in her situation before (encountering malevolence in humans) and there is a process to recovery. I probably could have phrased my comment better, but I take the good and the bad of trying to do more good in this lifetime—even something as little as trying to give honest feedback.
Zooming out from this particular case, I’m concerned that our community is both (1) extremely encouraging and tolerant of experimentation and poor, undefined boundaries and (2) very quick to point the finger when any experiment goes wrong. If we don’t want to have strict professional norms, I think it’s unfair to put all the blame on failed experiments without updating the algorithm that allows people to embark on these experiments with community approval.
To be perfectly clear, I think this community has poor professional boundaries and a poor understanding of why normie boundaries exist. I would like better boundaries all around. I don’t think we get better boundaries by acting like a failure like this is due to character or lack of integrity instead of bad engineering. If you wouldn’t have looked at it before it imploded and thought the engineering was bad, I think that’s the biggest thing that needs to change. I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.
Yep, I think this is a big problem.
More generally, I think a lot of EAs give lip service to the value of people trying weird new ambitious things, “adopt a hits-based approach”, “if you’re never failing then you’re playing it too safe”, etc.; but then we harshly punish visible failures, especially ones that are the least bit weird. In cases like those, I think the main solution is to be more forgiving of failures, rather than to give up on ambitious projects.
From my perspective, none of this is particularly relevant to what bothers me about Ben’s post and Nonlinear’s response. My biggest concern about Nonlinear is their attempt to pressure people into silence (via lawsuits, bizarre veiled threats, etc.), and “I really wish EAs would experiment more with coercing and threatening each other” is not an example of the kind of experimentalism I’m talking about when I say that EAs should be willing to try and fail at more things (!).
“Keep EA weird” does not entail “have low ethical standards”. Weirdness is not an excuse for genuinely unethical conduct.
I think the failures that seem like the biggest deal to me (Nonlinear threatening people and trying to shut down criticism and frighten people) genuinely are matters of character and lack of integrity, not matters of bad engineering. I agree that not all of the failures in Ben’s OP are necessarily related to any character/integrity issues, and I generally like the lens you’re recommending for most cases; I just don’t think it’s the right lens here.
Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation. That’s a huge rationalist no-no, to try to protect a narrative, or to try to affect what another person says about you, but I see the text where Kat is saying she could ruin Alice’s reputation as just a response to Alice’s threat to ruin Nonlinear’s reputation. What would you have thought if Nonlinear just shared, without warning Alice, that Alice was a bad employee for everyone’s information? Would Alice be bad if she tried to get them to stop?
My read on Alice’s situation was that she got into this hellish set of poor boundaries and low autonomy where she felt like a dependent servant to these people while traveling from country to country. I would have hated it, I already know. I would have hated having to fight my employer about not having to drive illegally in a foreign country. I am sure she was not wrong to hate it, but I don’t know if that’s the fault of Nonlinear, except that maybe they should have predicted it was bad engineering that no one would like. Some people might have liked that situation, and it does seem valuable to be able to have unconventional arrangements.
EDIT: Sorry, it was Chloe with the driving thing.
Alice did not threaten to ruin Nonlinear’s reputation, she went ahead and shared her impressions of Nonlinear with people. If Nonlinear responded by sharing their honest opinions about Alice with people, that would be fine. In fact, they should have been doing this from the start, regardless of Alice’s actions. Instead they tried to suppress information by threatening to ruin her career. Notice how their threat reveals their dishonesty. Either Alice is a bad employee and they were painting her in a falsely positive light before, or she is a good employee and they threatened to paint her in a falsely negative light.
I think it’s totally normal and reasonable to care about your reputation, and there are tons of actions someone could take for reputational reasons (e.g., “I’ll wash the dishes so my roommate doesn’t think I’m a slob”, or “I’ll tweet about my latest paper because I’m proud of it and I want people to see what I accomplished”) that are just straightforwardly great.
I don’t think caring about your reputation is an inherently bad or corrupting thing. It can tempt you to do bad things, but lots of healthy and normal goals pose temptation risks (e.g., “I like food, so I’ll overeat” or “I like good TV shows, so I’ll stay up too late binging this one”); you can resist the temptation without stigmatizing the underlying human value.
In this case, I think the bad behavior by Nonlinear also would have been bad if it had nothing to do with “Nonlinear wants to protect its reputation”.
Like, suppose Alice honestly believed that malaria nets are useless for preventing malaria, and Alice was going around Berkeley spreading this (false) information. Kat sends Alice a text message saying, in effect, “I have lots of power over you, and dirt I could share to destroy you if you go against me. I demand that you stop telling others your beliefs about malaria nets, or I’ll leak true information that causes you great harm.”
On the face of it, this is more justifiable than “threatening Alice in order to protect my org’s reputation”. Hypothetical-Kat would be fighting for what’s true, on a topic of broad interest where she doesn’t stand to personally benefit. Yet I claim this would be a terrible text message to send, and a community where this was normalized would be enormously more toxic than the actual EA community is today.
Likewise, suppose Ben was planning to write a terrible, poorly-researched blog post called Malaria Nets Are Useless for Preventing Malaria. Out of pure altruistic compassion for the victims of malaria, and a concern for EA’s epistemics and understanding of reality, Hypothetical-Emerson digs up a law that superficially sounds like it forbids Ben writing the post, and he sends Ben an email threatening to take Ben to court and financially ruin him if he releases the post.
(We can further suppose that Hypothetical-Emerson lies in the email ‘this is a totally open-and-shut case, if this went to trial you would definitely lose’, in a further attempt to intimidate and pressure Ben. Because I’m pretty danged sure that’s what happened in real life; I would be amazed if Actual-Emerson actually believes the things he said about this being an open-and-shut libel case. I’m usually reluctant to accuse people of lying, but that just seems to be what happened here?)
Again, I’d say that this Hypothetical-Emerson (in spite of the “purer” motives) would be doing something thoroughly unethical by sending such an email, and a community where people routinely responded to good-faith factual disagreements with threatening emails, frivolous lawsuits, and lies, would be vastly more toxic and broken than the actual EA community is today.
Good points. I admit, I’m thinking more about whether it’s justifiable to punish that behavior than about whether it’s good or bad. It makes me super nervous to feel that the stakes are so high on what feels like it could be a mistake (or any given instance of which could be a mistake), which maybe makes me worse at looking at the object level offense.
I’d be happy to talk with you way more about rationalists’ integrity fastidiousness, since (a) I’d expect this to feel less scary if you have a clearer picture of rats’ norms, and (b) talking about it would give you a chance to talk me out of those norms (which I’d then want to try to transmit to the other rats), and (c) if you ended up liking some of the norms then that might address the problem from the other direction.
In your previous comment you said “it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation”, “That’s a huge rationalist no-no, to try to protect a narrative”, and “or to try to affect what another person says about you”. But none of those three things are actually rat norms AFAIK, so it’s possible you’re missing some model that would at least help it feel more predictable what rats will get mad about, even if you still disagree with their priorities.
Also, I’m opposed to cancel culture (as I understand the term). As far as I’m concerned, the worst person in the world deserves friends and happiness, and I’d consider it really creepy if someone said “you’re an EA, so you should stop being friends with Emerson and Kat, never invite them to parties you host or discussion groups you run, etc.” It should be possible to warn people about bad behavior without that level of overreach into people’s personal lives.
(I expect others to disagree with me about some of this, so I don’t want “I’d consider it really creepy if someone did X” to shut down discussion here; feel free to argue to the contrary if you disagree! But I’m guessing that a lot of what’s scary here is the cancel-culture / horns-effect / scapegoating social dynamic, rather than the specifics of “which thing can I get attacked for?”. So I wanted to speak to the general dynamic.)
Can you give examples of EAs harshly punishing visible failures that weren’t matters of genuine unethical conduct? I can think of some pretty big visible failures that didn’t lead to any significant backlash (and actually get held up as positive examples of orgs taking responsibility). For example, Evidence Action discovering that No Lean Season didn’t work and terminating it, or GiveDirectly’s recent fraud problems after suspending some of their standard processes to get out money in a war zone. Maybe people have different standards for failure in longtermist/meta EA stuff?
To add sources for recent examples that come to mind that broadly support MHR’s point above re: visible (ex post) failures that don’t seem to have been harshly punished (most responses seem somewhere between neutral and supportive, at least publicly):
Lightcone
Alvea
ALERT
AI Safety Support
EA hub
No Lean Season
Some failures that came with a larger proportion of critical feedback probably include the Carrick Flynn campaign (1, 2, 3), but even here “harshly punish” seems like an overstatement. HLI also comes to mind (and despite highly critical commentary in earlier posts, I think the highly positive response to this specific post is telling).
============
On the extent to which Nonlinear’s failures relate to integrity / engineering, I think I’m sympathetic to both Rob’s view:
As well as Holly’s:
but do not think these are necessarily mutually exclusive.
Specifically, it sounds like Rob is mainly thinking about the source of the concerns, and Holly is thinking about what to do going forwards. And it might be the case that the most helpful actionable steps going forward are things that look more like improving boundaries and systems, regardless of whether you believe failures specific to Nonlinear are caused by deficiencies in integrity or engineering.
That said, I agree with Rob’s point that the most significant allegations raised about Nonlinear quite clearly do not fit the category of ‘appropriate experimentation that the community would approve of’, under almost all reasonable perspectives.
I was thinking of murkier cases like the cancellation of Leverage and people taking small infractions on SBF’s part as foreshadowing of the fall of FTX (which I don’t think they were enough of an indication of), but admittedly those all involve parties that are guilty of something. Maybe I’m just trying too hard to be fair, or to treat people the way I want to be treated when I make a mistake.
“I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.”
I strongly agree with this.
I think EA fails to recognise that traditional professional boundaries are a safeguard against tail risks and that these tail risks still remain when people appear to be kind / altruistic / rational.
Even though I don’t think EA needs to totally replicate outside norms, I do agree that there are good reasons why quite a few norms exist.
I’d say the biggest norms from outside that EA needs to adopt are less porous boundaries on work/dating, and importantly actually having normalish pay structures/work environments.
I agree about the bad engineering. Apart from boundary norms, we might also want to consider making our organizations more democratic. This kind of power abuse is a lot harder when power is more equally distributed among the workers. Bosses making money while paying employees nothing or very little occurs everywhere, but co-ops tend to have a lot less inequality within firms. They also create higher job satisfaction, life satisfaction, and social trust. Furthermore, research has shown that employees getting more ownership of the company is associated with higher perceptions of fairness, information sharing, and cooperation. It’s no wonder, then, that co-ops have a lower turnover rate.
EDIT: After Ben’s comment I changed ‘raking in profits’ to ‘making money’. I do think this proposal is relevant for the conversation since the low pay, bad work environment and worsening mental health are a big part of the problem described in the post.
Concerns about “bosses raking in profits” seem pretty weird to raise in a thread about a nonprofit, in a community largely comprised of nonprofits. There might be something in your proposal in general, but it doesn’t seem relevant here.
Thanks for writing this post. It looks like it took a lot of effort that could have been spent on much more enjoyable activities, including your mainline work.
This isn’t a comment on the accuracy of the post (though it was a moderate update for me). I could imagine Nonlinear providing compelling counter-evidence over the next few days, and I’d of course try to correct my beliefs in light of new evidence.
Posts like this one are a public good. I don’t think anyone is particularly incentivised to write them, and they seem pretty uncomfortable and effortful, but I believe they serve an important function in the community by helping to root out harmful actors and disincentivising harmful acts in the first place.
This situation reminded me of this post, EA’s weirdness makes it unusually susceptible to bad behavior. Regardless of whether you believe Chloe and Alice’s allegations (which I do), it’s hard to imagine that most of these disputes would have arisen under more normal professional conditions (e.g., ones in which employees and employers don’t live together, travel the world together, and become romantically entangled). A lot of the things that (no one is disputing) happened here are professionally weird; for example, these anecdotes from Ben’s summary of Nonlinear’s response (also the linked job ad):
“Our intention wasn’t just to have employees, but also to have members of our family unit who we traveled with and worked closely together with in having a strong positive impact in the world, and were very personally close with.”
“We wanted to give these employees a pretty standard amount of compensation, but also mostly not worry about negotiating minor financial details as we traveled the world. So we covered basic rent/groceries/travel for these people.”
“The formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You might think this is a substantial legal risk, but basically it isn’t”
“The semi-employee was also asked to bring some productivity-related and recreational drugs over the border for us. In general we didn’t push hard on this.”
I am reminded again that, while many professional norms are stupid, a lot of them exist for good reasons. Further, I think it’s often pretty easy to disentangle the stupid professional norms from the reasonable professional norms by just thinking: “Are there good reasons this norm exists?” (E.g., “Is there a reason employees and employers shouldn’t live together?” Yes: the power dynamics inherent to the employer/employee dynamic are at odds with healthy roommate dynamics, in which people generally shouldn’t have lots of power over one another. “Is there a reason I should have to wear high heels to work in an office?” …. no.) Trying to make employees part of your family unit, not negotiating financial details with your employees, covering your employees’ rent and groceries, and being in any way involved in your employees breaking the law are all behaviors that are at odds with standard professional practices, and there are very obviously good reasons for this.
Vulnerable EAs also want to follow only good norms while disposing of the bad ones!
If you offer people the heuristic “figure out if it’s reasonable and only obey it if it is” then often they will fail.
You mention clear-cut examples, but oftentimes they will be very grey, or they will seem grey while being inside them. There may be several strong arguments why the norm isn’t a good one; the bad actor will be earnest, apologetic, and trying to let you have your norm even though they don’t believe in it. They may seem like a nice reasonable person trying to do the right thing in an awkward situation.
Following every norm would be quite bad. Socially enforced gendered cosmetics are disgusting and polyamory is pretty nifty.
Nonetheless, we must recognize that the same process that produces “polyamory is pretty nifty” will also produce in many people: “there’s no reason I can’t have a friendly relationship with my employer rather than an adversarial one” (these are the words they will use to describe the situation while living in their employer’s house) and “I can date my boss if we are both ethical about it.”
We must not look down on these people as though we’d never fall for it—everyone has things they’d fall for, no matter how smart they are.
My suggestion is to outsource. Google your situation. Read reddit threads. Talk to friends, DM people who have the same job as you (and who you are certain have zero connection to your boss) - chances are they’ll be happy to talk to someone in the same position.
A few asides, noting that these are basics and noncomplete.
If someone uses the phrase “saving the world” on any level approaching consistent, run. Legitimate people who are working on legitimate problems do not rely on this drama. The more exciting the narrative and the more prominent a role the leader plays in it, the more skeptical you should be.
(Ah, you might say, but facts can’t be too good to be true: they are simply true or false. My answer to that would be the optimizer’s curse.)
If someone compares themselves to Professor Quirrell, run. In a few years, we’ll have enough abusers who identified with him to fill a scrapbook.
If there’s a dumb enough schmuck in EA to compare themselves to Galileo/da Vinci, exit calmly while giggling.
If someone is willing to break a social contract for utilitarian benefit, assume they’ll break other social contracts for personal benefit, e.g. sex.
If you are a somewhat attractive woman with unusual epistemic rigor, assume people will try to take advantage of that.
If someone wants unusual investment from you in a relationship, outsource.
If they say they’re uncomfortable with how much you talk to other people, this must be treated as an attempt to subvert you.
Expect to hear “I have a principled objection to lying and am utterly scandalized whenever someone does it” many times, and be prepared to catch that person lying.
If someone pitches you on something that makes you uncomfortable, but for which you can’t figure out your exact objection—or if their argument seems wrong but you don’t see the precise hole in their logic—it is not abandoning your rationality to listen to your instinct.
If someone says “the reputational risks to EA of you publishing this outweigh the benefits of exposing x’s bad behavior. if there’s even a 1% chance that AI risk is real, then this could be a tremendously evil thing to do”, nod sagely then publish that they said that.
Those last two points need a full essay to be conveyed well but I strongly believe them and think they’re important.
I use this phrase a lot, so if you think this phrase is a red flag, well, include me on the list of people who have that flag.
Agreed (here, and with most of your other points). Instincts like those can be wrong, but they can also be right. “Rationality” requires taking all of the data into consideration, including illegible hunches and intuitions.
Agreed!
Yeah, a quick search finds 10,000+ hits for comments about “saving the world” on this forum, many of which are by me.
I do think the phrase is a bit childish and lacks some rigor, but I’m not sure what’s a good replacement. “This project can avert 10^-9 to 10^-5 dooms defined as unendorsed human extinction or worse at 80% resilience” just doesn’t quite have the same ring to it.
I think the phrase is imprecise, relative to phrases like “prevent human extinction” or “maximize the probability that the reachable universe ends up colonized by happy flourishing civilizations”. But most of those phrases are long-winded, and it often doesn’t matter in conversation exactly which version of “saving the world” you have in mind.
(Though it does matter, if you’re working on existential risk, that people know you’re being relatively literal and serious. A lot of people talk about “saving the planet” when the outcome they’re worried about is, e.g., a 10% loss in current biodiversity, rather than the destruction of all future value in the observable universe.)
If a phrase is useful and tracks reality well, then if it sounds “childish” that’s more a credit to children than a discredit to the phrase.
And I don’t know what “lacks some rigor” means here, unless it’s referring to the imprecision.
Mostly, I like “saves the world” because it owns my weird beliefs about the situation I think we’re in, and states it bluntly so others can easily understand my view and push back against it if they disagree.
Being in a situation where you think your professional network’s actions have a high chance of literally killing every human on the planet in the next 20 years, or of preventing this from happening, is a very unusual and fucked up situation to be in. I could use language that downplays how horrifying and absurd this all is, but that would be deceiving you about what I actually think. I’d rather be open about the belief, so it can actually be talked about.
I don’t think the problem stems from how important an organization thinks their work is. Emerson’s meme company had no pretense to be world-saving, and yet had toxic dynamics as well.
The problem is that high stakes are not a reason to suspend ethical injunctions or personal boundaries; those provide more protective value when applied to something with genuinely high stakes.
My impression is that it’s very normal for employees to expense food and living costs during business travel without any negotiation, and that there exist common jobs where free room and board are a part of the compensation (e.g. working at a resort or on an oil rig).
I think it’s fairly common for companies to ask their employees to break the law. (Often a bad thing, from society’s perspective. But common.) I was asked to do it multiple times a day at a previous job. (A good job, at a well-regarded company. I’m not sure they even knew they were breaking the law until I pointed it out. Eventually they changed their practices—possibly because it made very little difference to the bottom line.)
With regard to weirdness in general: The biggest mistakes I see the EA movement making—with harms I estimate as far larger than harms in the OP—are a result of insufficient weirdness, not excess weirdness. So I don’t like to discourage weirdness in a blanket sort of way.
It’s easy with the benefit of hindsight to point out a bunch of things which might have created a bad situation. What we really need is the ability to forecast the effects of individual norms in advance.
My thoughts, for those who want them:
I don’t have much sympathy for those demanding a good reason why the post wasn’t delayed. While I’m generally quite pro sharing posts with orgs, I think it’s quite important that this doesn’t give the org the right to delay or prevent the posting. This goes double given the belief of both the author and their witnesses that Nonlinear is not acting in good faith.
There seem to be enough uncontested/incontestable claims made in this post for me to feel comfortable recommending that junior folks in the community stay away from Nonlinear. These include asking employees to carry out illegal actions they’re not comfortable with, and fairly flagrantly threatening employees with retaliation for saying bad things about them (Kat’s text screenshotted above is pretty blatant here).
Less confidently, I would be fairly surprised if I come out of the other end of this, having seen Nonlinear’s defence/evidence, and don’t continue to see the expenses-plus-tiny-salary setup as manipulative and unhealthy.
More confidently than anything on this list, Nonlinear’s threatening to sue Lightcone for Ben’s post is completely unacceptable, decreases my sympathy for them by about 98%, and strongly updates me in the direction that refusing to give in to their requested delay was the right decision. In my view, it is quite a strong update that the negative portrayal of Emerson Spartz in the OP is broadly correct. I don’t think we as a community should tolerate this, and I applaud Lightcone for refusing to give in to such heavy-handed coercion.
The reason we urge everyone to withhold judgment is because even what currently look like “uncontested/incontestable claims” are, in fact, very much contestable.
For example: “(Kat’s text screenshotted above is pretty blatant here).”
I agree that it does indeed look blatant here. But when you see the full context—the parts Alice conspicuously did not include—the meaning will change radically, to the point where you will likely question Alice’s other claims and ‘evidence’.
The problem with Kat’s text is that it’s a very thinly veiled threat to end someone’s career in an attempt to control Nonlinear’s image. There is no context that justifies such a threat.
Just for the record, I think there are totally contexts that could justify that threat. I would be surprised if one of those had occurred here, but I can totally imagine scenarios where the behavior in the screenshot is totally appropriate (or at the very least really not that bad, given the circumstances).
I really respect that even in the middle of all this, you (and other members of the LW team) still leave comments like these.
I think serious mistakes were made in how this situation was handled but I have never doubted that you guys are trying your best to help the community, and comments like this are proof of that.
Could any realistic scenario justify both the threat and not dishing on the employee’s bad behavior when asked about them, in order to stop the employee bad-mouthing you, though? Presumably someone would have to have behaved incredibly badly before threatening their career could possibly be appropriate, but then you surely shouldn’t be prepared to tell friends and acquaintances at allied orgs that the person is fine, or keep silent about their misbehavior when asked about them.
I’m particularly interested in whether or not they were encouraged to break the law for people who had financial and professional power over them, which seems less nuanced than ‘how threatening is or isn’t this WhatsApp exchange’.
I’m not sure I’ve imagined a realistic justifying scenario yet, but in my experience it’s very easy to just fail to think of an example even though one exists. (Especially when I’m baking in some assumptions without realizing I’m baking them in.)
Could be! I might end up with egg on my face here, in which case I will do my best to admit it. That said, my most important claim is my last: if you wanted me and others to truly withhold judgement, you really shouldn’t have threatened to sue.
I appreciate your willingness to update if we provide sufficient evidence to do so!
I concur with David. Irrespective of the circumstances, the threat is unmistakably apparent. It appears that thus far, both of you have issued threats to individuals, either to tarnish their reputation or to initiate legal action against them. Regrettably, these actions are not enhancing your own reputation. In fact, they are casting a shadow of suspicion upon you.
I have no personal insight on Nonlinear, but I want to chime in to say that I’ve been in other communities/movements where I both witnessed and directly experienced the effects of defamation-focused civil litigation. It was devastating. And I think the majority of the plaintiffs, including those arguably in the right, ultimately regretted initiating litigation. I sincerely hope this does not occur in the EA community. And I hope that threats of litigation are also discontinued. There are alternatives that are dramatically less monetarily and time-intensive, and more likely to lead to productive outcomes. I think normalizing (threats of) defamation-focused civil litigation is extremely detrimental to community functioning and community health.
Can you say anything more about what the effects of this litigation were?
Could you say more about the alternatives approaches?
This comment on LessWrong strikes me as a compelling rebuttal here. I don’t deny that the effects of defamation-focused civil litigation can be devastating, but the effects of defamation itself are often at least as bad. Before making this post, which caused enormous reputational damage to Nonlinear (a group I heard about only through this drama), Ben spent a total of three hours hearing their responses and refused to give them the requested time to make more of their side clear in advance. Inasmuch as he got anything wrong in it, the errors are serious and have caused serious damage of precisely the sort that courts are a last-resort remedy for.
People should not feel like they have no recourse beyond submitting to the court of public opinion, with all its flaws and biases. I’m mostly an outsider to the EA community, and while I respect the people within it as clear thinkers, I don’t trust “keep it in the family” as the sole approach with EA more than I do in any other community. I believe that in a case like this, a threat of a defamation lawsuit should be seen not as a dramatic escalation, but as a predictable and proportionate response to a threat to destroy someone’s reputation within their own community, independent of the merits of either party’s claim. It’s not straightforwardly clear to me that one causes more devastation than the other.
Some thoughts on the general discussion:
(1) some people are vouching for Kat’s character. This is useful information, but it’s important to note that behaving badly is very compatible with having many strengths, treating one’s friends well, etc. Many people who have done terrible things are extremely charismatic and charming, and even well-meaning or altruistic. It’s hard to think bad things about one’s friends, but unfortunately it’s something we all need to be open to. (I’ve definitely in the past not taken negative allegations against someone as seriously as I should have, because they were my friend).
(2) I think something odd about the comments claiming that this post is full of misinformation, is that they don’t correct any of the misinformation. Like, I get that assembling receipts, evidence etc can take a while, and writing a full rebuttal of this would take a while. But if there are false claims in the post, pick one and say why it’s false.
This makes these interventions seem less sincere to me, because I think if someone posted a bunch of lies about me, in my first comments/reactions I would be less concerned about the meta appropriateness of the post having been posted, and more concerned to be like “this post says Basic Thing X but that’s completely false, actually it was Y, and A, B and C can corroborate”. On the earlier post where an anonymous account accused Nonlinear of bad behaviour, Kat’s responses actually made me update against her, because she immediately attacked the validity of even raising the critique and talked about the negative effects of gossip (on the meta level), rather than expressing concern about possible misunderstandings at NL (for example). For me, this is reminiscent of the abuse tactic DARVO (Deny, Attack, Reverse Victim and Offender): these early comments meant that much of the conversation on this post has been about the appropriateness of Ben publishing it now, or the appropriateness of Emerson threatening to sue him, rather than the object-level ‘hey apparently there are these people in our community who treated their employees really badly’.
Just to clarify, Nonlinear has now picked one claim and provided screenshots relevant to it; I’m not sure if you saw that.
I also want to clarify that I gave Ben a bunch of very specific examples of information in his post that I have evidence are false (responding to the version he sent me hours before publication). He hastily attempted to adjust his post to remove or tweak some of his claims right before publishing based on my discussing these errors with him. It’s a lot easier (and vastly less time consuming) to provide those examples in a private one-on-one with Ben than to provide them publicly (where, for instance, issues of confidentially become much more complicated, and where documentation and wording need to be handled with extreme care, quite different than the norms of conversation).
The easiest to explain example is that Ben claimed a bunch of very bad sounding quotes from Glassdoor were about Emerson that clearly weren’t (he hadn’t been at the company for years when those complaints were written). Ben acknowledged somewhere in the comments that those were indeed not about Emerson and so that was indeed false information in the original version of the post.
My understanding, trying to interpret Ben’s comments on this point (if I’m mistaken, please correct me of course), is that Ben thinks it’s not a big deal that he almost included these false claims about Emerson (and would have had I not pointed it out right before publication) because he doesn’t view these as cruxy for his own personal hypotheses.
On the other hand, I view it as a very big deal to make severely negative, public, false claims about another person, and to me this one example is indicative of the process used to generate the post—a process that, from my point of view based on the evidence I’ve seen, led the post to contain a bunch of false claims.
Of course Ben didn’t purposely say anything he knew to be false, but I think Ben and I have different opinions on how bad it is to make public, false, potentially damaging claims about people, and the standard of care/evidence required before making those claims.
Nonlinear says they will provide lots more specific examples in the coming days of what they see as misinformation in the post—of course it will be up to you to judge whether you find their evidence convincing.
From my point of view, it’s best to reserve judgment until the evidence is released, assuming they do it within a reasonable time frame (e.g., a week or two—if they failed to release the evidence promptly that would be another matter).
I see no reason to jump to conclusions or take sides before we’ve seen all the evidence since it sounds like we’ll have access to it very soon.
Spencer—good reply.
The crux here is about ‘how bad it is to make public, false, potentially damaging claims about people, and the standard of care/evidence required before making those claims’.
I suspect there are two kinds of people most passionately involved in this dialogue here on EA Forum:
(1) those who have personally experienced being harmed by false, damaging claims (e.g. libel, slander) in the past (which includes me, for example) -- who tend to focus on the brutal downsides of reckless accusations that aren’t properly researched, and
(2) those who have been harmed by people who should have been called out earlier, but where nobody had the guts to be a whistle-blower before—who tend to focus on the downsides of failing to report bad behavior in a quick and effective and public way.
I think if everybody does a little soul-searching about which camp they fall into, and is a little more upfront about their possible personal biases around these issues, the quality of discourse might be higher.
Glassdoor states that 14 of the reviews were about Emerson. I’m not able to view all the reviews to verify this myself. Are you able to confirm that none of those 14 reviews were about Emerson? If that’s the case, it seems like an error that Emerson would benefit from trying to get fixed.
Hi Rebecca. To clarify: that’s not what I’m saying. What I’m saying is that in the version Ben showed me hours before publication, none of the disparaging Glassdoor comments he used in the post (which he claimed were all about Emerson) were actually about Emerson. He has acknowledged this point. Because I pointed this out, he hastily fixed these mistakes before releasing the public version, hence you won’t find this error in the version of his post above. I use this as just one example of a number of what I see as important errors (based on the evidence I have access to) in the draft that was shared with me right before publishing, which made me fear his research was done in a biased, sloppy, and/or error-prone way, with (from my point of view) not enough care being taken to avoid making false, harmful claims.
I agree and disagree. I agree that making false claims is serious and people should take great care to avoid it. And your ultimate conclusion that we should reserve final judgment until we see counter evidence sounds right to me.
But I disagree with holding all misconduct reports to incredibly high standards, such that in a report with as many allegations as this, people feel the report is basically wrong if it includes a few misinterpretations.
In an ideal world, yes, all summaries of patterns of misconduct would not contain any errors. But in reality, I’ve found that almost all allegations of behaviors that turn out to be, for all intents and purposes, true contain some level of mistakes, misattributions, and specific allegations that are overstated.
People who allege misconduct are under intense scrutiny. And absolutely, scrutiny is warranted. But as someone who has reported misconduct and spoken to other people that report misconduct, the expectation of perfection is, to put it mildly, chilling. It means people do not come forward, it means people who do come forward are further traumatized, it means allegations that are 80% truthful are dismissed outright.
Does a third or more of what Ben wrote comport with your general understanding? If so, these allegations are still concerning to me.
And on the Kat screenshots/food question, I do not think they delegitimize what Ben wrote here. At worst, Ben somewhat overstated the food situation. But, my overall impression from those screenshots was what Alice said was basically true. Kat’s framing of what the screenshots say make me doubt Kat’s account more, not less.
I’ll also say as someone who has experienced harassment, that people really underestimate how much bias they have towards their friends accused of misconduct. Friends of the harasser would say things to defend their friend that to most people would seem pretty obviously wrong, like “he probably wasn’t going to follow through on the threat, so him making the threat is not really an issue.”
Thanks Tiresias for your thoughtful comments. I agree with much of what you say but I seemingly have a few important differences of opinion:
I agree. I don’t think I was holding the report to an incredibly high standard though. When I read it I was immediately chagrined by the amount and severity of false information (i.e., false as far as I can tell based on the evidence I have access to). I was also distressed that Ben was not seeking out evidence he could have easily gotten from nonlinear.
Good point. I would differentiate between the standard for people privately reporting bad behavior (where I think the bar should be way lower) and large scale investigations that are made public (where I think the bar should be much higher for the claims made—e.g., that the investigator should be very careful not to credulously include damaging false information).
I think this framing doesn’t quite work because the post contains some very minor concerns and some very major ones, and it’s much more important whether the major concerns are accurate than whether the minor concerns are, so counting up the number of inaccuracies doesn’t, I think, reflect what’s important. But based on the evidence I’ve seen, some of the damning claims in his original post seemed to me to be false, or to be missing critical context that makes them very misleading.
I think people should decide for themselves what they think is true about this after reviewing the evidence. Here is a side-by-side comparison of what Ben says and what Kat says:
Ben: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”
Kat: “1. There was vegan food in the house (oatmeal, quinoa, mixed nuts, prunes, peanuts, tomatoes, cereal, oranges) which we offered to cook for her. 2. We did pick up vegan food for her.”
And here are the screenshots Kat provided to back up her account: https://forum.effectivealtruism.org/posts/5pksH3SbQzaniX96b/a-quick-update-from-nonlinear
Absolutely agreed, this is a significant issue to watch out for.
Hi Amber. We were working as fast as we could on examples of the evidence. We have since posted this comment here, demonstrating that Alice claimed nobody in the house got her vegan food when we have evidence that we did.
The claim in the post was: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.” (Bolding added)
If you follow the link, you’ll see we have screenshots demonstrating that:
1. There was vegan food in the house, which we offered her.
2. I personally went out, while I was sick myself, to buy vegan food for her (mashed potatoes) and cooked it for her and brought it to her.
I have empathy for Alice. She was hungry (because of her fighting with a boyfriend [not Drew] in the morning and having a light breakfast) and sick. That sucks, and I feel for her. And that’s why I tried (and succeeded) in getting her food.
I would be fine if she told people that she was hungry when she was sick, and she felt sad and stressed. Or that she was hungry but wasn’t interested in any of the food we had in the house. But she told everybody that we didn’t get her food when we did. This made us look like uncaring people, which we are not. She even said in her texts that she felt loved and supported.
We chose this example not because it’s the most damning (although it certainly paints us in very negative and misleading light) but simply because it was the easiest claim to explain where we had extremely clear evidence without having to add a lot of context, explanation, find more evidence, etc.
Even so, it took us hours to put together and share. Both because we had to track down all of the old conversations, make sure we weren’t getting anything wrong, anonymize Alice, format the screenshots (they kept getting blurry), and importantly, write it up. Writing for the EA Forum/LessWrong is already quite a difficult thing, with it being a reasonable assumption that people will point out any little detail you get wrong. I ask to please empathize with how it must be for us now, given that so many people currently see everything we post through the lens of being unethical people.
If you’ve ever spent a long time trying to get a post just right for the EA Forum/LessWrong, I ask you to empathize with what we’re going through here.
We also had to spend time dealing with all of the other comments while trying to pull this together. My inbox is completely swamped. Not to mention trying to hold myself together when the worst thing that’s ever happened to me is happening.
We simply meant that original comment to be a placeholder while we spent time gathering and sharing the evidence. We simply wanted people to withhold judgment.
We continue to ask this. We are all working full-time on gathering, organizing, and explaining all of the evidence. We expect to put out multiple posts over the next few weeks showing more evidence like the above.
We think that if you see the evidence, a sizeable percentage of you will update and think that we are not “predators” to be avoided, but rather good but far from perfect people trying to do good. We made some mistakes, and we learned from them and set up ways to prevent them. There are also misrepresentations of the truth that are happening that we will provide evidence for. Please keep this hypothesis in mind and we’ll send you the evidence to support it as soon as we can.
Seconding this.
I would be pretty interested to read a comment from nonlinear folks listing out everything that they believe to be false in the narrative as stated, even if they can’t substantiate their counter-claims yet.
I agree that if there were just a few disputed claims, that would be a reasonable thing to do, but there are so many. And there is so much nuance.
Here is one example, however. This took us hours to prepare, just to rebut a single false claim:
https://forum.effectivealtruism.org/posts/5pksH3SbQzaniX96b/a-quick-update-from-nonlinear
Crossposted from LessWrong (link)
Maybe I’m missing something, but it seems like it should take less than an hour to read the post, make a note of every claim that’s not true, and then post that list of false claims, even if it would take many days to collect all the evidence that shows those points are false.
I imagine that would be helpful for you, because readers are much more likely to reserve judgement if you listed which specific things are false.
Personally, I could look over that list and say “oh yeah, number 8 [or whatever] is cruxy for me. If that turns out not to be true, I think that substantially changes my sense of the situation.”, and I would feel actively interested in what evidence you provide regarding that point later. And it would let you know which points to prioritize refuting, because you would know which things are cruxy for people reading.
In contrast, a generalized bid to reserve judgement because “many of the important claims were false or extremely misleading”...well, it just seems less credible, and so leaves me less willing to actually reserve judgement.
Indeed, deferring on producing such a list of claims-you-think-are-false suggests the possibility that you’re trying to “get your story straight”, i.e. that you’re taking the time now to hurriedly go through and check which facts you and others will be able to prove or disprove, so that you know which things you can safely lie or exaggerate about, or what narrative paints you in the best light while still being consistent with the legible facts.
I would think you could go through the post and list out 50 bullet points of what you plan to contest in a couple of hours.
Or if it’s majority false, pick out the things you think are actually true, implying everything else you contest!
Good insights Amber. It appears that their chosen approach for responding to the matter involves focusing on even the most minor references to food-related inconveniences in an effort to question the credibility of the individuals involved. Interesting strategy indeed.
I’ve confirmed with a commenter here, who left a comment positive of Nonlinear, that they were asked to leave that comment by Nonlinear. I think this is low-integrity behaviour on the part of Nonlinear, and an example of brigading. I would appreciate the forum team looking into this.
Edit: I have been asked to clarify that they were encouraged to comment by Nonlinear, rather than asked to comment positively (or anything in particular).
I think asking your friends to vouch for you is quite possibly okay, but that people should disclose there was a request.
It’s different evidence between “people who know you who saw this felt motivated to share their perspective” vs “people showed up because it was requested”.
Yeah, this seems right.
I’m not sure that should count as brigading or unethical in these circumstances as long as they didn’t ask people to vote a particular way.
Remember that even though Ben is only a single author, he spent a bunch of time gathering negative information from various sources[1]. I think that in order to be fair, we need to allow them to ask people to present the other side of the story. Also consider: if Kat or Emerson had posted a comment containing a bunch of positive comments from people, then I expect that everyone would be questioning why those people hadn’t made the comments themselves.
I think it might also be helpful to think about it from the opposite perspective. Would anyone accuse me of brigading if I theoretically knew other people who had negative experiences with Nonlinear and suggested that they might want to chime in?
If not, then we’ve created an asymmetry where people are allowed to do things in terms of criticism, but not in terms of defense, which seems like a mistake to me.
That said, it is useful for us to know that some of these comments were solicited.
Disclaimer: I formerly interned at Nonlinear. I don’t want my meta-level stance to be taken as support of the actions of Nonlinear leadership (I’m very disappointed by what they’ve admitted to in relation to these claims). Nor was I asked by them to leave any comments here. I just believe that they should be allowed to defend themselves, even though I’m not satisfied by the defense that they’ve given so far, nor do I expect their response to be satisfactory either.
I say negative information not to disparage it. I have negatively updated on this information as the information revealed is worse than the rumors I’d previously heard.
Said very quickly: I will defer to the EA Forum team on this. If anybody here was asked to comment by Nonlinear, please let the forum team know so they can make decisions and set norms around this. You can send a message to Lizka (I suggest Lizka because she’s the most well recognised/trusted, but you can also contact other members of the moderation team).
I am not sure if you’re arguing (1) that this is not brigading, or (2) that even if it is brigading, brigading is not detrimental. But I can go through both.
(1) There’s been limited discussion on the EA Forum about the concept of brigading, mostly focused on “vote brigading”. But if I point to Reddit, a website with more experience of brigading being used to distort discussion, they consider leaving comments to be brigading:
(Bold is mine) I suspect most with reddit mod or admin experience would consider what happened to be brigading, because: A) You’re over-representing a certain opinion. One might want to use the ratio of comments to quickly determine who is right/wrong, comment brigading distorts this metric. B) You’re increasing the flow of team A people, on a team B post, leading to distorted voting, distorted replies (and as discussed, distorted comments)
You discuss whether it’s acceptable for different actors to engage in coordinated comment posting. I’d argue, and wager that the EA forum team agrees, that it’s pretty much always unacceptable to engage in concealed coordinated forum engagement.
Yes. Please disclose if you ever do anything like this. It’s absolutely brigading.
(2) I have a few key areas of disagreement with this angle
A) The positive comments being left are largely irrelevant. If the claim is “Kat encouraged me to drive without a license”, then no amount of “I have had great experiences with Kat at EAGs” is relevant.
B) Early on, these comments had the potential to set the trajectory of discussion. If you have a model of the forum, where everybody shares perspectives unbiasedly and confidently, then you might be surprised to hear this. But most users try to “read the room” before commenting, and will be less likely to comment if they’re saying something controversial.
C) Most users try to determine what’s true and what’s false by reading the overall valence of the comments. I agree this is not a great way to determine what is and is not true, but it’s a reality. Manipulating the ratio of valenced comments is unhelpful for this reason.
Just to finish on a question: if Kat had asked N people to chime in, at what point would you think it excessive? I.e., I assume we’d agree that if she asked 400 people to chime in, that would be excessive. But what is the minimum number at which you would feel it was excessive?
I completely agree “I have had great experiences with Kat at EAGs” is irrelevant to the claim “Kat encouraged me to drive without a license”, which is why I’ve been pretty clear in my comments that I don’t see Nonlinear leadership coming out looking good after this.
At the same time, this post isn’t narrowly focused on this issue or just a few issues, but rather seems like a summary of the negative things that Ben found out when he started investigating Nonlinear.
So I think “it’s off-topic” could have been a valid position had Ben chosen a narrower focus, but I don’t think that applies in general given this particular case. On the other hand, if you think any specific comments are a distraction, I’d encourage you to (politely!) pick one or two of them (perhaps the top-voted) and explain why the comment is a distraction from the real issues here.
I think this is an excellent point and I don’t exactly know. I think it’s made trickier by the fact that someone might send out a bunch of requests and then it’s quite variable how many people reply. For example, if you message eight people and three comment, then that seems like a reasonable number of people sharing their positive impressions, but then if all eight add a comment then that could very well have a significant distortive effect.
I’m a friend of Kat’s and spoke to her about this situation. At the end of the conversation she asked me to post a comment (which I’d been meaning to do anyways), but made it clear that I should do it only if I wanted to and that there was no pressure at all for me to do so. The way she did this was quite wholesome, to the point that it stuck with me; I could feel that she really meant what she said (i.e. she really cared that I only post if I wanted to), and I did not in fact feel any pressure in the request.
Just a note to say that we—the moderators—began looking into this two days ago. The current status is that Elliot has let the user who was allegedly asked to comment by Nonlinear know to reach out to us, if they are comfortable, so that we might determine whether what has happened here is an instance of brigading.
At this point, it seems worth mentioning that it is not the opinion of the moderators that every activity that looks like this is brigading. See the linked post for our full take on brigading: the summary is “your vote (or comment) should be your own”.
Good work. It occurred to me that this might be happening but I didn’t do the sleuth work. Thanks.
Good work Elliot. It appears they are employing a similar tactic to what they did when concerns were previously raised on the EA forum. This strategy seems to be consistent with their approach. Notably, during the prior instance, one of their advocates was Kat Friedman, a long-standing acquaintance. It seems they are repeating this strategy in the current context, possibly due to its prior success.
Hi all, I wanted to chime in because I have had conversations relevant to this post with just about all involved parties at various points. I’ve spoken to “Alice” (both while she worked at nonlinear and afterward), Kat (throughout the period when the events in the post were alleged to have happened and afterward), Emerson, Drew, and (recently) the author Ben, as well as, to a much lesser extent, “Chloe” (when she worked at nonlinear). I am (to my knowledge) on friendly terms with everyone mentioned (by name or pseudonym) in this post. I wish well for everyone involved. I also want the truth to be known, whatever the truth is.
I was sent a nearly final draft of this post yesterday (Wednesday), once by Ben and once by another person mentioned in the post.
I want to say that I find this post extremely strange for the following reasons:
(1) The nearly final draft of this post that I was given yesterday had factual inaccuracies that (in my opinion and based on my understanding of the facts) are very serious despite ~150 hours being spent on this investigation. This makes it harder for me to take at face value the parts of the post that I have no knowledge of. Why am I, an outsider on this whole thing, finding serious errors in the final hours before publication? That’s not to say everything in the post is inaccurate, just that I was disturbed to see serious inaccuracies, and I have no idea why nobody caught these (I really don’t feel like I should be the one to correct mistakes, given my lack of involvement, but it feels important to me to comment here since I know there were inaccuracies in the piece, so here we are).
(2) Nonlinear reached out to me and told me they have proof that a bunch of claims in the post are completely false. They also said that in the past day or so (upon becoming aware of the contents of the post), they asked Ben to delay his publication of this post by one week so that they could gather their evidence and show it to Ben before he publishes it (to avoid having him publish false information). However, he refused to do so.
This really confuses me. Clearly, Ben spent a huge amount of time on this post (which has presumably involved weeks or months of research), so why not wait one additional week for Nonlinear to provide what they say is proof that his post contains substantial misinformation? Of course, if the evidence provided by nonlinear is weak, he should treat it as such, but if it is strong, it should also be treated as such. I struggle to wrap my head around the decision not to look at that evidence. I am also confused why Ben, despite spending a huge amount of time on this research, apparently didn’t seek out this evidence from Nonlinear long ago.
To clarify: I think it’s very important in situations like this not to let the group being criticized have a way to delay publication indefinitely. If I were in Ben’s shoes, I believe what I would have done is say something like, “You have exactly one week to provide proof of any false claims in this post (and I’ll remove any claim you can prove is false) then I’m publishing the post no matter what at that time.” This is very similar to the policy we use for our Transparent Replications project (where we replicate psychology results of publications in top journals), and we have found it to work well. We give the original authors a specific window of time during which they can point out any errors we may have made (which is at least a week). This helps make sure our replications are accurate, fair, and correct, and yet the teams being replicated have no say over whether the replications are released (they always are released regardless of whether we get a response).
It seems to me that basic norms of good epistemics require that, on important topics, you look at all the evidence that can be easily acquired.
I also think that if you publish misinformation, you can’t just undo it by updating the post later or issuing a correction. Sadly, that’s not the way human minds/social information works. In other words, misinformation can’t be jammed back into the bottle once it is released. I have seen numerous cases where misinformation is released only later to be retracted, in which the misinformation got way more attention than the retraction, and most people came away only with the misinformation. This seems to me to provide a strong additional reason why a small delay in the publication date appears well worth it (to me, as an outsider) to help avoid putting out a post with potentially substantial misinformation. I hope that the lesswrong/EA communities will look at all the evidence once it is released, which presumably will be in the next week or so, in order to come to a fair and accurate conclusion (based on all the evidence, whatever that accurate final conclusion turns out to be) and do better than these other cases I’ve witnessed where misinformation won the day.
Of course, I don’t know Ben’s reason for jumping to publish immediately, so I can’t evaluate his reasons directly.
Disclaimer: I am friends with multiple people connected to this post. As a reminder, I wish well for everyone involved, and I wish for the truth to be known, whatever that truth happens to be. I have acted (informally) as an advisor to nonlinear (without pay) - all that means, though, is that every so often, team members there will reach out to me to ask for my advice on things.
Note: I’ve updated this comment a few times to try to make my position clearer, to add some additional context, and to fix grammatical mistakes.
(Copying over the same response I posted over on LW)
I don’t have all the context of Ben’s investigation here, but as someone who has done investigations like this in the past, here are some thoughts on why I don’t feel super sympathetic to requests to delay publication:
In this case, it seems to me that there is a large and substantial threat of retaliation. My guess is Ben’s sources were worried about Emerson hiring stalkers, calling their family, trying to get them fired from their job, or threatening legal action. Having things be out in the public can provide a defense because it is much easier to ask for help if the conflict happens in the open.
As a concrete example, Emerson has just sent me an email saying:
For the record, the threat of libel suit and use of statements like “maximum damages permitted by law” seem to me to be attempts at intimidation. Also, as someone who has looked quite a lot into libel law (having been threatened with libel suits many times over the years), describing the legal case as “unambiguous” seems inaccurate and a further attempt at intimidation.
My guess is Ben’s sources have also received dozens of calls (as have I received many in the last few hours), and I wouldn’t be surprised to hear that Emerson called up my board, or would otherwise try to find some other piece of leverage against Lightcone, Ben, or Ben’s sources if he had more time.
While I am not that worried about Emerson, I think many other people are in a much more vulnerable position and I can really resonate with not wanting to give someone an opportunity to gather their forces (and in that case I think it’s reasonable to force the conflict out in the open, which is far from an ideal arena, but does provide protection against many types of threats and adversarial action).
Separately, the time investment for things like this is really quite enormous and I have found it extremely hard to do work of this type in parallel to other kinds of work, especially towards the end of a project like this, when the information is ready for sharing, and lots of people have strong opinions and try to pressure you in various ways. Delaying by “just a week” probably translates into roughly 40 hours of productive time lost, even if there isn’t much to do, because it’s so hard to focus on other things. That’s just a lot of additional time, and so it’s not actually a very cheap ask.
Lastly, I have also found that the standard way that abuse in the extended EA community has been successfully prevented from being discovered is by forcing everyone who wants to publicize or share any information about it to jump through a large number of hoops. Calls for “just wait a week” and “just run your posts by the party you are criticizing” might sound reasonable in isolation, but very quickly multiply the cost of any information sharing, and have huge chilling effects that prevent the publishing of most information and accusations. Asking the other party to just keep doing a lot of due diligence is easy and successful and keeps most people away from doing investigations like this.
As I have written about before, I myself ended up being intimidated by this for the case of FTX and chose not to share my concerns about FTX more widely, which I continue to consider one of the worst mistakes of my career.
My current guess is that if it is indeed the case that Emerson and Kat have clear proof that a lot of the information in this post is false, then I think they should share that information publicly. Maybe on their own blog, or maybe here on LessWrong or on the EA Forum. It is also the case that rumors about people having had very bad experiences working with Nonlinear are already circulating around the community and this is already having a large effect on Nonlinear, and as such, being able to have clear false accusations to respond against should help them clear their name, if they are indeed false.
I agree that this kind of post can be costly, and I don’t want to ignore the potential costs of false accusations, but at least to me it seems like I want an equilibrium of substantially more information sharing, and to put more trust in people’s ability to update their models of what is going on, and less paternalistic “people are incapable of updating if we present proof that the accusations are false”, especially given what happened with FTX and the costs we have observed from failing to share observations like this.
A final point that feels a bit harder to communicate is that in my experience, some people are just really good at manipulation, throwing you off-balance, and distorting your view of reality, and this is a strong reason to not commit to run everything by the people you are sharing information on. A common theme that I remember hearing from people who had concerns about SBF is that people intended to warn other people, or share information, then they talked to SBF, and somehow during that conversation he disarmed them, without really responding to the essence of their concerns. This can take the form of threats and intimidation, or the form of just being really charismatic and making you forget what your concerns were, or more deeply ripping away your grounding and making you think that your concerns aren’t real, and that actually everyone is doing the thing that seems wrong to you, and you are going to out yourself as naive and gullible by sharing your perspective.
[Edit: The closest post we have to setting norms on when to share information with orgs you are criticizing is Jeff Kaufman’s post on the matter. While I don’t fully agree with the reasoning within it, in there he says:
This case seems to me to be fairly clearly covered by the second paragraph, and also, Nonlinear’s response to “I am happy to discuss your concerns publicly in the comments” was to respond with “I will sue you if you publish these concerns”, to which IMO the reasonable response is to just go ahead and publish before things escalate further. Separately, my sense is Ben’s sources really didn’t want any further interaction and really preferred having this over with, which I resonate with, and is also explicitly covered by Jeff’s post.
So in as much as you are trying to enforce some kind of existing norm that demands running posts like this by the org, I don’t think that norm currently has widespread buy-in, as the most popular and widely-quoted post on the topic does not demand that standard (I separately think the post is still slightly too much in favor of running posts by the organizations they are criticizing, but that’s for a different debate).]
I think it is reasonable to share information without providing people the opportunity to respond if you’re worried they’ll abuse your generosity (which may indeed have been Ben’s reasoning here), but I’m generally against the norm of not waiting for a response if the main reason is “can’t be bothered” *.
I guess my reasoning here is the drama can consume a community if it let it, so I would prefer that we choose our norms to minimise unnecessary drama.
(I don’t want this comment to be taken as a defense of Nonlinear leadership in relation to these claims. I want to be clear that I am very disappointed with what they have admitted even if we were to imagine that nothing else claimed is true. I should mention that I interned at Nonlinear and was subsequently invited to collaborate with them on Superlinear, but I turned this down because of some of the rumours I was hearing).
I’ll add that I’m more sympathetic to the victim of something deciding to just post it than to a third party, because third parties are generally more able to bear that burden.
agreed
I don’t understand how people would be at greater risk of retaliation if the post was delayed by a week?
I also want to make sure people realise that there’s a huge difference between “I will stalk you / call your family / get you fired” and “I will sue you” in terms of what counts as threats/intimidation/retaliation [edit: e.g. if Alice had threatened to sue Nonlinear that wouldn’t be considered retaliation, whereas threatening to call their family would be worrying], so I don’t think Emerson’s email is a particularly strong confirmation that the “large and substantial threat of retaliation” is real.
If Ben is worried about losing 40 hours of productive time by responding to Nonlinear’s evidence in private, he doesn’t have to. He could just allow them to put together their side of the story, ready for publishing when he publishes his own post. Similarly, if he’s worried about them manipulating him with their charisma, he could just agree to delay and then stop reading/responding to their messages. This way readers can still read both sides of the story at once, rather than read the most juicy side, tell all their friends the latest gossip, and carry on with their lives as the post drops off the frontpage.
It is a lot easier to explain to your employer or your friends or your colleagues what is happening if you can just link them to a public post, if someone is trying to pressure you. That week in which the person you are scared of has access to the post, but the public does not, is a quite vulnerable week, in my experience.
I think threatening a libel lawsuit with the intensity that Emerson did strikes me as above “calling my family” in terms of what counts as threats/intimidation/retaliation, especially if you are someone who does not have the means for a legal defense (which would be true of Ben’s sources for this post). Libel suits are really costly, and a quite major escalation.
Emerson’s email says explicitly that if the post is published as is, that he would pursue a libel suit. This seems to rule out the option of just delaying and letting them prepare their response, and indeed demands that the original post gets changed.
I think there are different versions of “call your family” that suggest different levels of escalation:
1. Tell your family compromising facts about you (bad and scary, but probably less scary than having to mount a legal defense)
2. Threaten your family, implicitly or explicitly (the type of thing that predictably leads people to being terrified)
Thank you for taking the time to clearly and patiently explain these dynamics. They’re not obvious to me, as someone who’s never experienced anything similar.
Does Lightcone have liability insurance? Or any kind of legal insurance or something similar that covers the litigation costs involved in defamation lawsuits? I think posts like these are important, it would be sad if there wasn’t a way to easily protect whistleblowers from having to spend a lot of money fighting a defamation case.
My current model is that we have enough money to defend against a defamation lawsuit like this. The costs are high, but we also aren’t a super small organization (we have a budget of like $3M-$4M a year), so I think we could absorb it if it happened, and my guess is we could fundraise additionally if the costs somehow ballooned above that.
I looked a bit into liability insurance but it seemed like a large pain, and not worth it given that we are probably capable of self-insuring.
I’m pretty surprised it seemed like a large pain. In my experience it’s been easy to secure.
I might be confused here, but it sure seemed easy to hand over money, but hard to verify that the insurance would actually kick in in the relevant situation, and wouldn’t end up being voided for some random reason.
Yeah, that can be significant work.
God bless your clear thinking and strong stance-taking, Habryka.
There is a reason courtrooms give both sides equal chances to make their case before they ask the jury to decide.
It is very difficult for people to change their minds later, and most people assume that if you’re on trial, you must be guilty, which is why judges remind juries about “innocent before proven guilty”.
This is one of the foundations of our legal system, something we learned over thousands of years of trying to get better at justice. You’re just assuming I’m guilty and saying that justifies not giving me a chance to present my evidence.
Also, if we post another comment thread a week later, who will see it? EAF/LW don’t have sufficient ways to resurface old but important content.
Re: “my guess is Ben’s sources have received dozens of calls”—well, your guess is wrong, and you can ask them to confirm this.
You also took my email strategically out of context to fit the Emerson-is-a-horned-CEO-villain narrative. Here’s the full one:
Trials are public as well. Indeed our justice system generally doesn’t have secret courts, so I am not sure what argument you are trying to make here. In as much as you want to draw an analogy to the legal system, you now also have the ability to present your evidence in front of an audience of your peers. The “jury” has not decided anything (and neither have I, at least regarding the dynamics and accusations listed in the post).
I am not assuming you are guilty. My current best guess is that you have done some pretty bad things, but yeah, I haven’t heard your side very much, and I will update and signal boost your evidence if you provide me with evidence that the core points of this post are wrong.
You could make a new top-level post. I expect it would get plenty of engagement. The EAF/LW seems really quite capable of resurfacing various types of community conflict, and making it the center of attention for a long time.
I shared the part that seemed relevant to me, since sharing the whole email seemed excessive. I don’t think the rest of the email changes the context of the libel suit threat you made, though readers can decide that for themselves.
Just a quick note that my read of the full email does not really change the context of the libel suit threat, or Habryka’s claims, especially given much of the missing information from the email was already in a public comment by Kat.
I also agree that if Nonlinear provided substantive evidence in a top-level post that dramatically changed the narrative of this story, e.g. changed my personal credence in each of these claims to <15%, it would receive significant attention, such as staying on the front page of the community tab for at least 1 week, gaining >150 karma, etc.
If you can share (publicly or privately) strong evidence contradicting “claims [...] that wildly distort the true story” (emphasis mine), I pre-commit to signal boosting.
For what it’s worth, I wouldn’t be surprised if you do have strong counter-evidence to some claims (given the number of claims made, the ease with which things can be lost in translation, your writing this email, etc.). But, as of right now, I would be surprised if my understanding of the important stuff—roughly, the items covered in Ben’s epistemic state and the crossing of red lines—was wildly distorted. I hope that it is.
[EDIT, Nov 13: it sounds like the Nonlinear reply might be in the 100s of pages. This might be the right move from their point of view, but reading 3-figure pages stretches my pre-commitment above further than I would have intended at the time. I’d like to amend the commitment to “engaging with the >=20 pages-equivalent that seems most relevant to me or Nonlinear, or skimming >=50 pages-equivalent.” If people think this is breaking the spirit of my earlier commitment, I’ll seriously consider standing by the literal wording of that commitment (engaging with ~everything). Feel free to message about this.]
I’d also be pleased to find out that my understanding is wrong!
I don’t think they’re in a position to show that a lot of hurt didn’t accrue to the employees, but maybe they can show some ways in which they clearly signaled that they wouldn’t try to ruin their employees or intimidate them, such as
Texts where they told Alice/Chloe “I understand that you had a horrible experience here, and it’s totally fine for you to tell other people that this working/living environment was awful and you’ve been really hurt by it, and also here’s a way in which I’m going to make sure you’re better off for having interacted with us”
Any writing where Emerson says “I have used vicious and aggressive business tactics in the past, but to be clear if this work-slash-family situation really burns you, I will not come after you with these tactics even if I strongly disagree with your interpretations of what happened”
Or generally some “philosophy of how to behave with allies” doc written by Emerson that says something like “Here are the 48 Laws of Power, and here is my explanation why you should never use these tactics if you want to be trustworthy, and here’s why we would never use them in an EA/x-risk/etc context”
Then that could change my mind on the intimidation aspect a bunch. I think in general intimidation/fear is often indirect and implicit, so it’s going to be hard to disprove, but if there was clear evidence about why that wouldn’t happen here, then I could come to believe that Alice/Chloe/others had managed to trick themselves into being more worried than they needed to be.
For me personally, the email appears worse in full than summarized.
Can you give some examples of the serious errors you found?
Yes, here are two examples; sorry I can’t provide more detail:
- There were claims in the post made about Emerson that were not actually about Emerson at all (they were about his former company years after he left). I pointed this out to Ben hours before publication and he rushed to correct it (in my view it’s a pretty serious mistake to make false accusations about a person, and I see this as pretty significant)!
- There was also a very disparaging claim made in the piece (I unfortunately can’t share the details for privacy reasons, but I assume Nonlinear will later) that was quite strongly contradicted by a text message exchange I have.
To confirm: I had a quickly written bit about the glassdoor reviews. It was added in without much care because it wasn’t that cruxy to me about the whole situation, just a red flag that suggested further investigation was worth it, that someone else suggested I add for completeness. The reviews I included were from after the time that Emerson’s linkedin says he was CEO, and I’m glad that Spencer corrected me.
If I’m remembering the other one correctly, there was also a claim that I included not because it was itself obviously unethical, but because it seemed to indicate a really invasive social environment, and when I think information has been suppressed I have strong heuristics in favor of sharing worrying information even if it isn’t proven or necessarily bad. Anyway, Spencer said he was confident in a very different narrative of events, so I edited the comment to be more minor.
In general I think Spencer’s feedback on this and other points improved the post (though he also had some inaccurate information).
If the disparaging claim is in the piece, it makes no sense to me that you can’t specify which claim it is.
I think the idea is that it was in a draft but got edited out last-minute? That seems to be corroborated by Ben’s comment.
To me, this is the biggest red flag in this whole situation. My work has been written about by journalists, with both negative spins and actual factual inaccuracies and whenever this happens, my first response is: here is where you are wrong, and here is the truth. Which, I know, because it’s about me and I know what happened (at least according to me).
If someone doesn’t believe me and they want the “receipts”, I can provide these later, but I don’t need them to dispute the claim in the first place. I understand this piece has a lot of information and responding to everything can take time, but again, the broad strokes shouldn’t take this long.
In fact, it seems that Nonlinear already had a chance to dispute some of the claims when they had their lengthy interview with Ben, and it seems that they did, because the piece says, multiple times, that there are conflicting claims from both parties about what happened. I’m unclear whether Nonlinear want to clarify and prove these claims in their favor, or whether they want to dispute additional claims that they have not disputed before. Either way, the vagueness is concerning and, in my experience, it is a sign of possibly buying more time to figure out a spin.
Could this be an instance of the rationalist tendency to “decouple”?
From one perspective, Ben is simply “Sharing information about nonlinear.” What’s wrong with providing additional information? It’s even caveated with a description of one’s epistemic status and instruction on how to update accordingly! Why don’t we all have such a “low bar for publicly sharing critical info about folks in the EA/x-risk/rationalist/etc ecosystem”?
From another perspective, Ben has chosen to “search for negative information about the Nonlinear cofounders” and then—without inviting or even permitting the accused party to share their side of the story in advance—share it in a public space full of agents whose tendency to gossip is far stronger than their tendency to update in an appropriately Bayesian manner (i.e. human beings).
I suspect Ben does in fact have some understanding of the political dimension of his decision to share this post, but I think his behaviour is more understandable when you consider that he’s embedded in a culture that encourages people to ignore the political consequences of what they say.
You may have missed the section where I had a 3hr call with them and summarized what they told me? It’s not everything we’d want but I think this sentence is inaccurate.
Of course I do! I thought about it a bunch and came to the conclusion that it’s best to share serious and credible accusations early and fast.
I’m confused—wouldn’t you consider the “Conversation with Nonlinear” section to be letting the accused party share their side of the story in advance?
Possibly naive question: if Non-Linear have material that undeniably rebuts these accusations and they only need to sort it out/organize it for presentation, why not publish it in a disorganized/scrambled format, sort it out later, and then publish a clean/sorted-out version? In this way, they will at least show that they’re not scheming anything and are honest about why they asked Ben to delay the post.
What am I missing?
I don’t know, but I know that negotiating confidentiality is a major part that often takes a lot of calendar time. They might have emails or text chats from people that they would like to share, but they first need to get permission to share, or at least provide adequate warning. This can definitely take a few days in my experience.
Makes sense, thanks
It might simply take a lot of time to track it all down. If there was one big Google Doc with all of the relevant information in a disorganized/scrambled format (and it didn’t have other information, such as someone’s social security number), then it might make sense to share the messy information and organize it later. But what is more likely is that there are tidbits of information scattered across dozens of email threads, chat groups, slack channels, Google Calendars, etc., and that merely copying and pasting a bunch of stuff into a messy Google Doc would take many, many hours.
That’s an interesting idea, but presenting information badly is a good way of ensuring people tune out when you present your final version.
I wonder about versions of this scheme where someone holds an unorganised version in escrow to be released alongside the organised version?
However, I’m not really sure if that would work either. Suppose you might have relevant texts where it isn’t clear whether they contain confidential information or not. Like in some cases, you may actually need to have a discussion about what is okay to share and what is not. Just quickly dumping a bunch of information out there is an easy way to accidentally do further harm.
Disclaimer: Previously interned at Nonlinear. This comment previously said that I didn’t have knowledge of what information Nonlinear is yet to release, but then I just realised that I actually do know a few things.
This is really weird to me. These allegations have been circulating for over a year, and presumably Nonlinear has known about this piece for months now. Why do they still need to get their evidence together? And even if they do—just due to extraneous circumstances—why do they feel so entitled to the piece being held for a week, when they have had ample time to collect their side of the story?
To be clear I only informed them about my planned writeup on Friday.
(The rest of the time lots of other people involved were v afraid of retaliation and intimidation and I wanted to respect that while gathering evidence. I believe if I hadn’t made that commitment to people then I wouldn’t have gotten the evidence.)
Thanks—more sympathetic to the ask in that case, though I don’t think you were obliged to wait.
With regards to #2, I shared your concern, and I didn’t think Habryka’s response showed that the costs of a brief delay were high enough to refuse one, given a realistic chance of evidence being provided that contradicted the main point of this post.
However, upon reflection, I am skeptical that such evidence will be provided. Why did Nonlinear not provide at least some of the proof they claim to have, in order to justify time for a more comprehensive rebuttal? Or at least describe the form the proof will take? That should be possible, if they have specific evidence in mind. Also, a week seems like a longer time than should be needed to provide such proof, which increases my suspicion that they’re playing for time. What does delaying for a week do that a 48h delay would not?
Edit: Nonlinear has begun posting some evidence. I remain skeptical that the bulk of the evidence supports their side of the narrative, but I no longer find the lack of posting evidence as a reason for additional suspicion.
The vast majority of people should probably be withholding judgment and getting back to work for the next week until Nonlinear can respond.
I’m contributing to it now, but it’s a bit of a shame that this post has 183 comments at the time of writing, when it is not even a day old and isn’t even on the front page. EA seems drawn to drama and controversy, and it would accomplish its goals much better if it were more able to focus on more substantive posts.
For me, this is a substantive post. Nonlinear do a lot of “meta” EA work so their business practices and future matter to me
Fwiw “EA seems drawn to drama” is a take I’ve heard before and I feel like it’s kind of misleading. The truth is probably closer to “small communities are drawn to drama, EA is also drawn to drama and should (maybe) try to mitigate this”. It’s not super clear to me whether EA is worse or better than its reference class. Modelling the community as unusually bad is easy from the inside and could lead us to correct against drama in the wrong ways.
I’m one of the Community Liaisons for CEA’s Community Health and Special Projects team. The information shared in this post is very troubling. There is no room in our community for manipulative or intimidating behaviour.
We were familiar with many (but not all) of the concerns raised in Ben’s post based on our own investigation. We’re grateful to Ben for spending the time pursuing a more detailed picture, and grateful to those who supported Alice and Chloe during a very difficult time.
We talked to several people currently or formerly involved in Nonlinear about these issues, and took some actions as a result of what we heard. We plan to continue working on this situation.
From the comments on this post, I’m guessing that some readers are trying to work out whether Kat and Emerson’s intentions were bad. However, for some things, intentions might not be very decision-relevant. In my opinion, meta work like incubating new charities, advising inexperienced charity entrepreneurs, and influencing funding decisions should be done by people with particularly good judgement about how to run strong organisations, in addition to having admirable intentions.
I’m looking forward to seeing what information Nonlinear shares in the coming weeks.
Re “work like incubating new charities, advising inexperienced charity entrepreneurs, and influencing funding decisions should be done by people with particularly good judgement about how to run strong organisations, in addition to having admirable intentions”, I think this is the single best sentence that has been written on this so far.
What happened as a result of this, before Ben posted?
Hey Agrippa, this comment provides a partial answer.
Hmm. I think if I had been in an abusive situation such as the ones OP describes, and I (privately) went to the Community Health team about it, and the only outcomes were what you just listed, I would have considered it a waste of my time and emotional energy.
Edit: waste of my time relative to “going public”, that is.
I assume the actions you’ve taken can’t be shared? (No pressure if it can’t).
Thanks for asking Yadav. I can confirm that:
Nonlinear has not been invited or permitted to run sessions or give talks relating to their work, or host a recruiting table at EAG and EAGx conferences this year.
Kat ran a session on a personal topic at EAG Bay Area 2023 in February. EDIT: Kat, Emerson and Drew also had a community office hour slot at that conference.
Since then we have not invited or permitted Kat or Emerson to run any type of session.
We have been considering blocking them from attending future conferences since May, and were planning on making that decision if/when Kat or Emerson applied to attend a future conference.
Are you familiar with any concerns about Nonlinear not raised in Ben’s post? Ben seems particularly concerned that Nonlinear creates an epistemic environment where he wouldn’t know if there was more. If there is, that seems pretty central to confirming Ben’s concerns.
Can you share what you mean by “intimidating behavior”? How does the community health team define, “intimidating behavior”?
Could you kindly provide information regarding the initial reporting of the case to the Community Health committee, along with the identity of the individual or individuals entrusted with the case’s investigation?
Is it within the realm of possibility that the relationship between Julia Wise and Kat Woods, as evidenced by the content accessible via the following link: https://juliawise.net/interview-with-kat-woods-decision-making-about-having-kids/, may have influenced the expeditiousness with which the Community Health committee executed pertinent actions?
Your assertion that, “We were familiar with many (but not all) of the concerns raised …,” piques curiosity as to which specific concerns had been previously acknowledged. Furthermore, could you elucidate the methodologies employed to ascertain their veracity?
In the spirit of transparency, and recognizing the historical underreporting tendencies of certain individuals like Julia Wise, it would be appreciated if you could enumerate the precise steps undertaken in the course of taking actions as alluded to.
Given the gravity of allegations, such as the solicitation of recreational substances from employees and the encouragement of unlicensed driving, is there not ample cause for the temporary suspension of individuals such as Emerson and Kat from participation in community events and forum activities? What threshold of misconduct would necessitate the Community Health committee to perceive such behavior as detrimental to the Effective Altruism community, thus contravening its mission and setting an undesirable precedent for newcomers?
To facilitate a clearer understanding of the investigative timeline, could you please divulge the duration of the ongoing investigation, its commencement date, and your projected timeline for the publication of conclusive findings?
No, because if nothing else you need to give them time to respond.
This sneaks in the presumption that publication is a good idea and a good use of CH time? I haven’t seen much positive evidence for this proposition, and indeed I’m seeing some negative evidence, live.
I understand your desire to know this information, Morpheus_Trinity. I’m sorry but we’re not in a position to share all that information here. This comment provides a partial answer.
I admire what you’re doing here overall in terms of keeping up pressure on the Community Health team to do something about bad actors and asking tough questions, but I don’t see what in that link supports the claim that Kat Woods and Julia Wise are particularly close. I mean, it’s reasonable to suspect that if a small blog interviews someone from a small world like EA, the interviewee is a close friend of the blogger. But it’s very far from guaranteed, and no closeness is mentioned in the blog post itself.
I think the pertinent question here is primarily not “Were Kat and Julia close”, but “What standard should we hold the Community Health team to here”. If you updated significantly negatively on Julia/the Community Health team due to recent events, you might want to hold them to a standard closer to the one Morpheus is proposing. This is especially true if you view the cause of inaction closer to some kind of deferral/information cascade (they are well-established and well-regarded members of the EA community), rather than due to Julia’s close personal relationship with the people in question. I do think this may be a good opportunity for the community health team to regain some trust though, and I would be interested in hearing more about the Community Health team’s involvement too, and whether we should be understanding this as “Ben spent his time on something that the Community Health team should have done but actively deprioritized”, or “The Community Health team played an active role in this investigation”, or something else.
For what it’s worth, I actually strongly upvoted Morpheus’ comment. I just think ‘she once interviewed her’ is a bit unfair to cite as somehow evidence of corruption, regardless of how confident people should be in Julia Wise overall.
Hello!
I’m Minh, Nonlinear intern from September 2022 to April 2023. The last time allegations of bad practices came up, I reiterated that I had a great time working at Nonlinear. Since this post is >10,000 words, I’m not able to address everything, both because:
I literally can’t write that much.
I can’t speak for interactions between Nonlinear and Alice/Chloe, because everything I’ve heard on this topic is secondhand.
I’m just sharing my own experience with Nonlinear, and interpreting specific claims made about Kat/Emerson’s character/interaction styles based on my time with Nonlinear. In fact, I’m largely assuming Alice and Chloe are telling the truth, and speaking in good faith.
Disclaimers
In the interest of transparency, I’d like to state:
I have never been approached in this investigation, nor was I aware of it. I find this odd, because … if you’re gonna interview dozens of people about a company’s unethical treatment of employees, why wouldn’t you ask the recent interns? Nonlinear doesn’t even have that many people to interview, and I was very easy to find/reach. So that’s … odd.
I was not asked to write this comment. I just felt like it. It’s been a while since I’ve written on the EA Forum. I generally don’t write, unless I have a unique perspective on the topic.
My internship was unpaid, aside from reimbursements for costs. I honestly never prioritised asking for more pay, because I always valued asking for advice/networking more. I just found that more personally useful/advantageous for my circumstances—no near-term financial pressure, being from a non-EA hub and having to actively network very far in advance to get roles I want, compared to people from EA hubs. So it was unpaid, I just … didn’t really care.
[EDIT]: I’m addressing parts of the post that imply a pattern of behaviour. Maybe it’s unintentional, but this post references a lot of extra details that make the core claims feel much more believable as a pattern of behaviour. If this post was just about “Nonlinear abused this specific employee in this specific context”, that’s one thing. But this post says “Nonlinear abused employees, and they openly brag about how cutthroat/exploitative they are, and they tell employees their problems and time and personal life don’t really matter”. Hell, I’d be convinced.
I agree this can sound suspicious, but I’ve always had the same principle. I refrain from creating negative impressions of others, because I think everyone should have a chance to make 1 good first impression. I also think it’s subjectively easier to echo negative rumours than positive rumours. All this can add up to a very warped perception, if most of what you hear about a person is secondhand.
Of course, this doesn’t extend to possible harm/abuse, so don’t take any of this as me minimising/refuting Alice or Chloe’s experiences. And, as mentioned, I don’t like assuming ill of others based purely on secondhand information.
This is the one part I’m outright skeptical of. It sounds very out-of-character, to the point where I can’t foresee the Nonlinear team ever saying this. My experience with Kat, Drew or Emerson is that they love their families/partners a lot, and frequently communicate/visit. They are extremely intentional, scheduling in time to talk to loved ones every day. And if you think about it … why would someone visit a dozen countries a year, with all the bureaucracy and hassle involved, if they didn’t like interacting with locals?
This daily pattern of behaviour would be really odd/suboptimal from a cold, logical, utility-maximising standpoint, as implied here.
My Occam’s Razor read is that Kat is basically externalising her internal dialogue. Kat naturally tends to procrastinate, so she uses these strategies to get herself to do work, and that self pep-talk sounds really weird when she externalises it to others. I struggle with procrastination as someone with ADHD, so my internal dialogue sounds very similar, I just don’t use that phrasing when talking to others haha.
If it means anything, when I stayed with Kat and Emerson for around 2 weeks, I never experienced anything similar. I was never asked to do driving, dishwashing, or laundry, which somehow felt weird the other way, because I thought I was doing too little.
OK, I feel this is a misinterpretation. Kat said this to me, but I had a different interpretation. Basically, I was discussing my idea of “the deferred life plan”: that, in my experience, ambitious people who want to start/build/work on meaningful things have a tendency to rationalise not doing it by simply not thinking too hard about the details.
[DISCLAIMER: Taken at face value, Kat’s response to Alice is an insensitive response to a subordinate saying they have money problems and need a pay rise. I’m suggesting that there might have been more context, based on Kat saying very similar things in a very different tone to me.]
I’m sure a lot of EAs have heard/believed some variation of “instead of trying to help the world/pursue meaningful things, you need to first go to a good college, get a good degree, build a good resume/CV, climb the corporate ladder, achieve financial independence and retire with a 3% rate of withdrawal and then do the thing you want”. My background is climate activism in Singapore, so that’s basically the only advice I’d ever heard.
I asked Kat for her opinion, because I knew she basically didn’t do any of that. She didn’t have a college degree, travelled a lot and spent years starting a nonprofit and living on very little. This topic is worth its own very long post, but essentially, she said “you need to break down and plan out exactly what your financial needs are, because intentionally maximising your runway gives you more room for risk tolerance”.
I think that makes sense. I think young EAs, and honestly, most young people, don’t confront themselves on what tradeoffs they need to have the life they want, purely out of discomfort. I’m a guy surrounded by guys from prestigious colleges, and I feel all my peers just procrastinate on pursuing things that matter to them by seeking high-status, high-paying jobs. A lot of them have no goal of “if I make 75k a year for 3 years and rent with 3 roommates, I can pursue what I want for a decade at 25 years old”, even though that’s exactly the kind of calculation you need to … pursue meaningful things. And if you remove rent from the equation by living with parents, your risk tolerance goes way up.
Anyway, that’s my interpretation, based on hearing the exact same words from Kat, but extracting a very different meaning.
As for Emerson, he gives very transparent advice about how sharks operate, and how to survive them. He’s very perceptive (or at least, sounds perceptive) about adversarial strategies in high-growth, highly competitive spaces like online media and Web3, due to his 2 decades of experience in these fields.
Do I think Emerson can sound like a bad actor? Absolutely. It’s hard for a person to explain common manipulative strategies without sounding incredibly suspicious. However, for me, I spent the entire 2 weeks with him writing down notes on my laptop, notes I’ve found incredibly helpful as I’ve been scaling a generative AI startup for the past 4-5 months. Personally, I had to learn to deal with bad actors and adversarial business practices, either by being taught beforehand, or by experiencing it myself, and I vastly, vastly prefer being taught.
Hi Minh, Appreciate you sharing your views publicly here. I think you’re acting in good faith, and if others engage with this comment I hope they see that as well.
I think it’s very possible the Nonlinear team handled different employees very differently, especially since you started interning with them after Alice and Chloe had left and from Nonlinear’s account, it sounds like they made some changes (e.g. not having employees live and work with them).
Overall, I don’t think I’ve updated my views much as a result of this comment. To save time, I won’t cite every example in detail, and I won’t have capacity to write it up publicly, but I’m happy to talk about it off the Forum. Here’s a quick overview of why I haven’t updated:
1) Most of the points you comment on here seem to be your interpretation of the accounts of the post. It seems like you don’t necessarily have a unique perspective on at least the first point, and overall I don’t agree with the conclusions you draw based on the evidence provided in the original post (e.g. the point about intimidation and Emerson)
I don’t think you are always engaging with all the context of the post (e.g. I think more so than talking about tactics, Emerson’s dealings with Adorian Deck seemed unnecessarily aggressive and doesn’t seem to be defensive in nature.) I don’t think this is intentional, but I think it could be helpful to engage with the substance of the points.
2) For pieces that you do have your own experiences to draw from, I think your situation was materially different from that of Alice and Chloe - namely, as I understand it you were not co-living and co-working for an extended period with the Nonlinear team, and your scope of work was different (e.g. I expect that house ops was not a part of your internship duties). I think most of the negative dynamics described resulted from the environment. I think it’s a fairly common occurrence that the same thing in different contexts can have a very different meaning & consequences (e.g. the safety net point—which you also mentioned was insensitive—or the short deadlines).
It’s possible I could change some of the above views in light of new evidence.
Oh yeah, don’t take this as a direct refutation of Alice/Chloe’s accounts. I definitely agree that the context was different. If the claims are true, then yeah, that sounds really bad.
Re: Core Claims
For the “core claims”, I have a personal opinion that these claims started from unfortunate, honest misunderstanding, and were substantively exaggerated. But those claims are specific and very sensitive. Clearly, at least one party involved is misleading people. So I’ll let Kat/Emerson represent themselves with whatever evidence they showed me.
Re: Patterns of behaviour
I’m addressing parts of the post that cite other concerning behaviour. Maybe it’s unintentional, but this post references a lot of extra details that make the core claims feel much more believable as a pattern of behaviour. If this post was just about “Nonlinear abused this specific employee in this specific context”, that’s one thing. But this post says “Nonlinear abused employees, and they openly brag about how cutthroat/exploitative they are, and they tell employees their problems and time and personal life don’t really matter”. Hell, I’d be convinced.
Also, … I’m disagreeing with this conclusion on a personal level, as the hypothetical person described in this strongly-worded appeal:
Obviously, small sample size, but I’m already 50% of the pool of people this statement applies to.
Re: Adorian Deck
Honestly, it didn’t even occur to me that Deck was worth addressing. I did a quick read of the post+public linked sources, and came to the conclusion that Deck was supposed to co-manage the account (as stated in the contract), became inactive managing it and regretted it afterwards when it became successful. It’s incredibly common, almost expected, for young content creators who go viral to become inexplicably inactive and neglect obligations, as I’ve already experienced multiple times in my generative AI venture. It happens like … 70-90% of the time you sign such deals.
Yes, maybe Emerson’s account is entirely fabricated, but I find it easier to believe that this is a very common dispute, and not “behaviour that’s like 7 standard deviations away from usual norms in this area”. I mean, mathematically, believing someone is behaving many standard deviations outside the norm is a bit harder than believing a teenager lost interest in maintaining their viral account, which happens most of the time.[1]
I can elaborate, and even cite personal examples. But point is: it’s really common in industries that work closely with content creators/affiliates, which is why it didn’t register as a red flag for me.
In any case, Emerson directly denies that he sent stalkers. This claim sounds really hard to prove/disprove given the information presented, so I … didn’t want to go down that weird rabbit hole.
Edit on Dec 26 2023: not sure it’s worth people reading this given the new Nonlinear updates. I think it makes the below comment outdated. I don’t think I would still endorse the specific claims in this comment if I came back to it.
Re patterns of behaviors—I believe I still disagree here. The way I’d summarize it (poorly) is something like: “Nonlinear have a history of negative behavior towards employees, they have continued to demonstrate some negative behaviors, and have not acknowledged that some of their behavior was harmful to others” (edited)
What I think constitutes a “pattern”:
Two employees had multiple negative experiences across a range of scenarios (e.g. financial, psychological/social, legal) over the course of 7 months.
I think they have demonstrated a consistent pattern with at the very least intimidation tactics (re their email to Ben about this post).
Based on their responses to events (over a year later), it seems like the Nonlinear team does not believe they have done anything wrong. Many actions which they admit to doing (e.g. the driving or drugs incidents) seem like pretty clear red flags, yet they don’t see anything wrong with that behavior. Edit: I no longer endorse the first sentence based on Violet’s comment below, and agree with her overall take here. I would be keen to see what aspects the Nonlinear team believe to be mistakes and what changes they made.
For the second sentence, I still endorse it based on Nonlinear’s interview with Ben
I think the above is still consistent with current/future employees having a much more positive experience though, since as I said I think a lot of the problems were caused by the environment / co-living situation.
I do think it’s strange / unfortunate that Ben didn’t interview you given how the conclusion is stated. I still agree with the end-line conclusion though, I think it’s possible there could still be situations where others could have negative experiences.
Re Adorian Deck—I hadn’t read much about the Adorian Deck incident; based on your summary I think it does sound less bad than I would have initially thought. I also think that including that quote about standard deviations seems a bit extreme.
I don’t quite agree with your summary.
Kat explicitly acknowledges at the end of this comment that “[they] made some mistakes … learned from them and set up ways to prevent them”, so it feels a bit unfair to say that Non-Linear as a whole hasn’t acknowledged any wrongdoing.
OTOH, Ben’s testimony here in response to Emerson is a bit concerning, and supports your point more strongly.[1] It’s also one of the remarks I’m most curious to hear Emerson respond to. I’ll quote Ben in full because I don’t think this comment is on the EA Forum.
This is only Ben’s testimony, so take that for what it’s worth. But this context feels important, because (at least just speaking personally) genuine acknowledgment and remorse for any wrongdoing feels pretty crucial for my overall evaluation of Non-Linear going forward.
I also sympathize with the general vibe of your remark, and the threats to sue contribute to the impression of going on the defensive rather than admitting fault.
That’s a fair point regarding Kat’s comment—I would be curious to know what kind of changes they made.
I hadn’t seen the testimony re Ben so thanks for sharing that, would definitely like to see response / engagement on this point from Emerson as well.
I think given what you know, your level of skepticism is reasonable here.
I mean, obviously, I’m disagreeing based on my subjective experience/knowledge. But these are reasonable concerns for an outside observer to have. My take is that how unreasonable this level of defensiveness is would vary based on how true the actual claims are: if they’re, say, 80% false vs. 80% true.
And honestly, even the most charitable interpretation says that the Nonlinear team really dropped the ball on communicating with employees and frequently said a lot of weird, shady stuff. So I’m not gonna pretend like Nonlinear does nothing wrong, just because they’re “my team”.
I mean, for all I know, there’s 2 parties each claiming the other maintains a complex web of deception and lies, and I might be believing the wrong one 🤔
Guess we’ll find out.
Yeah, I hope we will! Thanks for engaging with me in a productive and open way, this conversation has been helpful.
Thank you for sharing Minh, I think this is one of the most important updates.
If our goal is (as I think it should be) only to figure out whether we want to interact with any of these people in the future, and not to exact retribution for past wrongs against third parties, then we don’t need to know exactly what happened between nonlinear and Alice and Chloe. That’s good, since we probably never will. What does seem to be the case is this. (1) Everybody involved agrees that something went badly wrong in the relationships between Kat/Emerson and Alice/Chloe, though they may dramatically disagree about what. (2) Kat/Emerson have changed their behavior in a way that prevents a repeat. Your testimony is good evidence for 2. And given that, I don’t think I will update much on whether I want to interact with them in the future. So thank you for your testimony.
(disclaimers: my past interactions with Kat have been positive but not extensive. I don’t believe I have interacted with Emerson. And I was not asked to comment by anyone involved.)
Here’s what I would need to see from Kat and Emerson to lend any credence to the idea that they’ve changed their behaviour in a way that prevents them from mistreating employees again:
1. They acknowledge the many things they did wrong described in the OP and admit that they were wrong, without trying to downplay or rationalize them.
2. They apologize for these things (and give a good apology that isn’t defensive or weaselly or victim-blaming).
3. They attempt to make amends in some way (e.g. giving a sum of money to Alice and Chloe for emotional damages).
4. They commit to changing their future behaviour in specific ways (e.g. hiring an accountant or bookkeeper for Nonlinear; paying all future employees a salary agreed to beforehand in a binding legal contract — this is just the tip of the iceberg).
Even if we assume that all of the allegations are true (which seems unwarranted when the evidence is hearsay from two anonymous sources), you seem to think that remorse is the only mental state that could cause people to change their behavior. Why do you think that?
I think even with just the behaviours that Nonlinear has publicly confirmed, there is cause for major concern.
The emotion of guilt is usually what leads to accountability and behaviour change. See e.g. this video with clinical psychologist June Tangney, co-author of the book Shame and Guilt.
Let’s look at one specific claim that you pointed to—whether there was a legal contract agreed beforehand specifying a salary. Unless I’ve missed something, I don’t believe Nonlinear has publicly commented on this. All I’m saying is don’t let your confidence exceed the strength of the evidence.
It is certainly one emotion that can. But your video just talks about guilt and shame, it doesn’t talk about other emotions. I would expect all emotions have the potential to change behavior under the right circumstances—otherwise, they wouldn’t serve an evolutionary purpose. I can think of instances where I’ve altered my behavior after social drama out of fear of getting hurt again, rather than guilt or shame. So when I look at someone else, I don’t need to settle on a particular explanation of why they’ve changed their behavior to accept evidence that they have.
I read your comment carefully and then went back and skimmed it, to make sure I wasn’t missing anything.
As far as I can tell, this is the only new information that is both germane to the substantive points of the OP and that comes from your direct personal experience with Nonlinear:
If anything, this lends slight credence to the accounts of Emerson’s behaviour recounted in the OP. This isn’t much of an update, though, since Emerson himself already admitted to talking like this.
I have read the OP. I have skim read the replies. I’m afraid I am only making this one post because involvement with online debates is very draining for me.
My post is roughly structured along the lines of:
My relationship to Kat
My opinions about Kat’s character
My opinions about EA culture and risky weirdness
My opinions about how we go about ensuring good practices in EA
Kat is a good friend, who I trust and think highly of. I have known her personally (rather than as a loose acquaintance, as I did for years prior) since summer 2017. I do not know Emerson or his brother from Adam.
I see somebody else was asked about declaring interests when they spoke positively about Kat. I have never been employed by Kat. Back in 2017, Charity Science had some legal and operational infrastructure (e.g. charity status, tax stuff) which was hard to arrange. During that time, .impact—which later became Rethink Charity—collaborated with Charity Science, sheltering under its charitable status in order to be able to hire people, legally process funds and so forth. So indirectly the entity that employed me was helped out by Charity Science.
However, I never collaborated in a work sense with Charity Science. Professionally I was employed by Tom Ash to do temporary project coordination on LEAN in 2015⁄16 while I was completing my PhD. In 2017, LEAN was incorporated into SHIC and rebranded as Rethink Charity, under Tee Barnett and Baxter Bullock. (For those of you who are new, Rethink Priorities incubated with Rethink Charity at its inception before operational aspects were in place for it to separate, hence the similar name. .impact was also co-founded by Peter Hurford prior to his co-launch of RP). Tee and Baxter hired me as project manager for LEAN, which I did for a year and a few months, after which I product managed the first stages of the reboot of eahub.org. So basically… my opinions in this post are biased by my personal friendship with Kat, but not by any other aspects of my position.
When I met Kat she was working on/leading Charity Science: Health. We struck up a friendship and, over my 2 and a half years of living in Vancouver, we socialised regularly. Mostly one to one, going on hikes together. But also in group contexts, such as dinner parties. Since I left Canada (in August 2019) we have kept in touch through video calls every few months.
My assessment of Kat’s character is that she is a very moral and honest individual. I do not recognise any characterisation of her as manipulative or passive aggressive or threatening. Kat is direct. She is highly rational and principled. She is also a very creative, out-of-the-box thinker who sees things through a completely unique lens which makes her very radical. She is extraordinarily committed and disciplined in adhering to her values and beliefs. I consider her to be a demonstrably objective and fair person. As the author mentioned, she is very positive and high energy. I have never met a more ‘bubbly’ person. Even during times when she has been going through really awful ordeals, she has always taken a more upbeat and optimistic outlook than others in her situation would have done. Like a lot of EAs, Kat feels deeply and is very impacted by her conscientious drive to do the right thing. I have seen her go through prolonged periods of turmoil and anxiety over a difficult decision where personal needs were in conflict with what she thought she ought to do. In my observation this has often led to extreme self-exploitation in terms of how hard she has worked, including when ill, because of her commitment to the cause she was working on and the people she was trying to help.
I find that Kat is slow to criticise or judge others. There have often been times where I have needed to encourage Kat to put her own needs first, to recognise them and to take less shit from other people around her.
Kat has, herself, endured living and working arrangements that I would see as inadvisable and unacceptable. This is important, because the stuff that Kat is being accused of inflicting on others is consistent with the choices and sacrifices that she imposes on herself. To give you an example, Kat is one of the people who really earned my respect for living an ascetic life in order to give her time and money to charitable causes. For many years (I don’t know about now) she and Joey gave themselves incredibly low salaries which I would definitely not have made do with. Dig up old posts by Kat on this forum and you will see advice and write-ups on how to live on next to nothing while staying happy, because that is how she lived. They both treated their own time similarly.
When I was living in Vancouver, I lived in a shared house with some other EAs (including Tom Ash, Kieran Grieg, Mat Carpendale, Marylen Reid) who had been part of a shared working/living arrangement with Joey and Kat in the early days of Charity Science and of the Vancouver EA community. They described having taken over a set of apartments in a block of housing, and each thinking of a funny name for their apartments that symbolised something about them. The one Kat and Joey lived in was called ‘the sweatshop’ because of the insane hours they worked.
I know some former employees of Charity Science who had some grievances and criticisms of how the charity was run. I remember a lot of these were around the org’s slightly blase attitude towards legalities. As with what I said two paragraphs back, my observation was that Kat and Joey took all the risks for themselves and their own wellbeing that they asked of others. So I think you might question those risks or judgement calls, but I am sceptical that anything Kat has asked her employees to do wasn’t something that lines up entirely with what she puts herself through.
My overall interpretation of any ‘weirdness’ to do with issues such as casual contracts, terrible hours, unsafe working conditions, poor pay etc etc. is that I would not personally endorse nor work in that way. And I would personally be cautious towards an employer which had those views towards hours, pay, and legal irregularity.
I do not see Kat as at all unusual in this attitude towards risk and compliance with conventions compared to other EAs. In my experience A LOT of people in the EA and rationalist space have similar views and attitudes to a whole range of things.
In Vancouver, my sense of rationalists, and the EAs I lived with, is that people were quite keen to throw a lot of conventions and social norms in the dustbin as part of their interpretation of rationalism. This was pretty exasperating at times, but I do not put any of the individuals who I include in this generalisation into the category of ‘bad actors’ in my mind. Naive actors maybe… open minded to a fault? Socially blind? But each of them… people who are the first in line to live out the experiment that puts their rationalist beliefs to the test. I think that many people in this space have a poor grasp or appreciation of emotional and personal boundaries, but that this harms them as much as it harms others. A soft example of this in the shared house in Vancouver was when I walked into a room with a couple of my housemates and a large number of guests who were complete strangers. One of my housemates then turned to me and asked me on the spot to share the most embarrassing/irrational thing I had believed during my evangelical Christian childhood. While I understand the logic of why this kind of thing is a norm in rationalist communities, it was obviously hurtful and humiliating to me because a personally difficult time of my life was put under the spotlight in front of strangers, leading me to feel very unsafe. My housemate was doing something which they genuinely thought was promoting a good norm and a good culture, but in doing so they unintentionally hurt me. In my view many people in this thread are in that general category too, and it is somewhat of a spectrum.
You may or may not be able to tell from how I frame this, that I’m a somewhat more conservative (small c) person myself. For example, I think polyamory is a terrible idea 99.9999999% of the time and that it would probably require several significant phases of actual species evolution for that to change. I think recreational drug use is needlessly risky. I could not get far enough away from the ‘cuddle piles’ my housemates were keen on. I’m very picky about my living arrangements and food, and my performance on the OCEAN test is basically sky-high conscientiousness and neuroticism, paired with introversion and low agreeableness. For openness I’m very high on openness to ideas but not to experiences. All of this is to say that lots of things about how many EAs and rationalists lived and did business (including Kat) would be most definitely ‘weird’ in my books, and ‘oh hell no’ in terms of how far I’d engage.
I would also like to share that, despite the general views I’ve shared in this post, I actually did choose to work in an employment arrangement which was unorthodox when I was working for .impact under Tom Ash. When I initially worked for Tom on .impact, my pay was also very low. I was initially offered the LEAN project management job under some kind of set up where my accommodation and some food would be offered in Vancouver (living in the same house as Tom), to mitigate low pay. There was a contract (I’d never work without one), but there definitely were irregularities and short cuts that some might criticise. I am in no way mentioning this to shame or call out .impact, or Tom or any of it.
The point I’m trying to make is that if you start to go over all EAs and EA orgs with a microscope, you’ll find a lot of people have coloured outside the lines at some point in the last 15 years without anything going amiss. If I did not like my low pay, I didn’t have to take the job! If I didn’t want to live in a house with Tom, I didn’t have to. In fact, when Tee offered me a job on LEAN months after I stopped working for Tom, my first question was whether he could coordinate something similar to what I had been offered by Tom, namely accommodation in a foreign country I quite fancied trying out, with housemates and a social network thrown in, in exchange for a modest salary. If Time magazine was doing a feature on my job I’m sure this would all have sounded twisted and wrong. And when I moved into that EA house people WERE weird and did do things differently… sometimes involving substances and physical intimacy. But that was completely fine, because I simply didn’t go along with the things I didn’t like (for the record I like none of those things and gave a grossed-out facial expression when they were brought to my attention. Hopefully they have now forgiven me). Had I found it weird living with Tom, who had previously employed me (by the time I moved there he was on the board of Rethink Charity but no longer directly working with me), I would have moved out.
Overall I think that unconventional working and living practices do come with high risks of misunderstandings. But they aren’t enough in and of themselves to condemn people who run their lives and workplaces in this way, many of whom do so because they sincerely think it is the best way to optimise the good. I also think that there is a level of responsibility on all sides in such an arrangement, which includes that of the employee to own their choices regarding what contracts they sign (or don’t) and under what conditions they labour.
I don’t know the ins and outs of how Kat and Non Linear employed people. For the record, I have heard about either Alice or Chloe once from Kat before, very briefly, aside from today.
Nevertheless, a lot of things are completely off to me about this whole thread.
I disagree with the suggestion that there was something sinister about a policy of ‘we don’t talk badly about you and you don’t talk badly about us’. That is a rephrasing of a fairly standard social (and literal) contract which exists between the majority of people all around the world. As somebody who works for the government making policy, you bet I’d lose my job outright if I publicly criticised them. But I would also expect a variation of this from most employers.
On a personal and ethical level, I think it is completely unacceptable to slander or badmouth any person or organisation without first giving them a chance to resolve things and settle things privately. I have been on the receiving end, many times, of hurtful public confrontations from people who I thought I was on good terms with. In each case the first time I had any inkling at all something was amiss was when I was challenged in a public setting and completely blindsided. Those experiences were devastating and entirely unfair (none of them happened in EA… one of them was on an online fan community, and the others were at school and in my faculty while I was doing my PhD). I feel pain and sorrow from these events to this day, and they have significantly affected my trust and sense of security in social environments.
Kat says that each of these things were true about how she learned about the grievances of ‘Chloe’ and ‘Alice’. As I understand it, she also had no idea this post was coming from Ben before it was made, and was not given a fair chance to provide evidence to refute a lot of the accusations made. And even when she is able to disprove them, the shape of the accusation is now out there and will have consequences and live on in peoples’ perceptions and memories.
To me, aside from how unfair and cruel I think this was to Kat, I am also very uneasy about the way the EA community seems to be approaching a lot of adjacent issues.
It seems to me that (gradually but most obviously) since FTX there was this switch to thinking that it’s somehow healthy or appropriate to launch a kind of autophagic self-immolation process whereby bits of the community launch toxic witchhunts against other bits. It’s not healthy. It’s not constructive. It’s not effective, and to add insult to injury—it’s not necessary.
I am completely for improving the practices, standards, conventions etc. in the EA community, but I am deeply disappointed that people think that a constructive way to do this is via call out culture. I had rather hoped that the one place on the internet I could trust people not to engage in pillory and vigilantism would be in EA! I use those words completely literally, not to be inflammatory. That is what I think this approach amounts to in practice.
All people, regardless of their position of power, privilege or status, deserve to be treated and considered innocent until proven guilty by some mechanism that is most certainly not Concerned Neighbour #5 or the equivalent. Of course we need a mechanism for people to challenge abuses of power or severe malpractice, but public speculation about specific people or organisations doesn’t constitute a professional, fair or reliable vehicle for delivering that outcome.
This is aside from the fact that Kat and Non Linear are not all-powerful in this set up. Chloe, Alice and friends have intentionally or otherwise caused huge devastation to their reputation and wellbeing. And when I opened this thread, it jumped out as unfair that the two accusers were anonymised while Kat and Non Linear were not. I believe it is out of order for Kat’s personal and professional reputation to be publicly attacked this way. This is not just because I believe that Kat is being mischaracterised. It is also a point of principle regardless of the particulars.
I also don’t want to see any other orgs or EAs subject to this either, in the future.
To be fair, this should be a fairly universal assumption for any individual considering any employment arrangement. I don’t think it is for you or me to form opinions about what went down between Non Linear and its employees, but if Non Linear employees didn’t see to the above then I do think they exercised poor judgement.
I’ll conclude by referring back to my paragraph where I reflected on common kinds of weirdness among EAs and rationalists. I do completely trust that Kat was honest, honourable and conscientious in all of her dealings with her employees. However, I see it as quite likely that she and multitudes of others in this community have approached things in an unorthodox way which we might describe as ‘unwise’. But ‘unwise’ is a very far cry from the portrait painted in this thread, and the consequences of the maverick practices of many EAs and rationalists backfire on them just as they do on others. Conventions, regulations, codes of practice, norms… these are safety rails which live on because they produce good outcomes on average. I think many EAs and rationalists could learn from well-honed systems in other industries and institutions. That’s something this debacle really highlights. However, the same point applies to dealing with disagreements and accusations in a professional, fair and systematised manner.
As I’ve been drafting this post for 4 hours and counting, this concludes my engagement with Internet drama for 2023.
This in particular I liked:
It seems to me that (gradually but most obviously) since FTX there was this switch to thinking that it’s somehow healthy or appropriate to launch a kind of autophagic self-immolation process whereby bits of the community launch toxic witchhunts against other bits. It’s not healthy. It’s not constructive. It’s not effective, and to add insult to injury—it’s not necessary.
Duncan
Yes, this tends to bug me a lot. I think Ben is being different here, because
Not anonymous
More transparency about what the on the ground facts actually are as best he can tell before coming up with interpretations or judgments (than the usual “sniping from the sidelines” post)
This gave me pause for thought, so thank you for writing it. I also respect that you likely won’t engage with this response to protect your own wellbeing.
I just want to raise, however, that I think you have almost completely failed to address a) the power dynamics involved; and b) the apparently uncontroversial claim that people were asked to break laws by people who had professional and financial power over them.
It seems impossible to square the latter with being “honest, honourable and conscientious”.
There are a lot of dumb laws. Without saying it was right in this case, I don’t think that’s categorically a big red line.
Thanks, this also made me pause. I can imagine some occasions where you might encourage employees to break the law (although this still seems super ethically fraught) - for example, some direct action in e.g. animal welfare. However, the examples here are ‘to gain recreational and productivity drugs’ and ‘to drive around doing menial tasks’.
So if you’re saying “it isn’t always unambiguously ethically wrong to encourage employees to commit crimes” then I guess yes, in some very limited cases I can see that.
But if you’re saying “in these instances it was honest, honourable and conscientious to encourage employees to break the law” then I very strongly disagree.
Yes of course there are—I don’t think anyone who has to live with them contests that!
But the issue with this story (and other ones EA has dealt with) is that it shows a willingness to break laws if they’re deemed “stupid” or “low value” or “woke shibboleths”[1]. There are some cases where laws are worth breaking, and depending on the regime it may be morally required to do so, but the cases involved don’t seem to be like this.
What Jack is pointing to, and people like myself and lilly[2], is that often the law (or norm) breaking seems to happen in a manner which is inconsistent with the integrity that people in the EA community[3] should have—especially when they’re dealing with responsibilities such as employing others and being responsible for their income, being in a position of mentorship, being in a position to influence national or international policy, or trying to ‘save the world’
not direct quotes, just my representation of an attitude in some EA/Rationalist spaces
as far as I’ve interpreted her comments in this thread. Jack, also feel free to say if I’ve got your view wrong
and people in general, to be honest
I think it matters a lot to be precise with claims here. If someone believes that any case of people with power over others asking them to commit crimes is damning, then all we need to establish is that this happened. If it’s understood that whether this was bad depends on the details, then we need to get into the details. Jack’s comment was not precise so it felt important to disambiguate (and make the claim I think is correct).
Thanks, I agree with your clarification on the point I was trying to make
This is not a question for you, but the forum generally.
I agree call-out culture makes me uncomfortable and has many negative aspects. But what alternative is there for improving community health and function? Previous methods, like relying on private systems or CEA, seem to have been catastrophically ineffective. What else could people who have experienced systematically bad behavior do? How else will we learn to work on ways to try to prevent this sort of thing?
[Edit: I now realize that this is what Spencer discussed below and other people have been discussing too. But maybe the community norms roadmap makes it seem less pie-in-the-sky]
I first had this idea about that Toby Ord post a few months back, and regret not writing it up then.
Idea: I think people who write something that could be considered a “criticism” (or worse, a “hitpiece”) should send a heads-up message to the person in question, with their finished post attached.
Example: “Hey, I have written [this post] about things I am concerned about regarding you/you project. I plan to post it on the EA Forum on [date]. I understand this is unfortunate and troubling from your perspective and you will probably want to respond. That’s why I’m letting you know my publishing date. You have until then to write a response, which I expect you will want to post shortly after I make this one go live. (optional: I am available to read your response post before I publish my piece if you wish, but I would need your retort by [earlier date] to take your response into account.)”
How it might become a norm:
Forum mods and power users could comment on critical posts that giving a heads-up with the final draft attached is the advised way to handle future criticism.
After seeing this advised for a month or so, users would start actually doing it. And they would probably add transparency blurbs about it at the tops of their criticisms and responses, further educating readers on criticism norms.
After seeing this recommended and practiced for a couple more months, the cultural norm would be established.
Once the norm is established, people who don’t give warning of their criticism and time for response should be frowned upon/possibly even given a moderation warning.
Being blindsided by people posting bad stuff about you on a forum really sucks, and the ongoing threat of such a surprise is bad for the community. I think a norm like this could do a lot of good and be low effort for critics.
I think it’s probable that Ben tried to do something fair like this here by talking to Kat/Emerson, but I think that doesn’t do the full thing. For example, Kat may have felt that she had responded adequately to some of the concerns over chat, enough that they would be dropped from any final piece, and be surprised and blindsided to see some parts dug up here. [Edit: She may also be surprised by the very confidently-worded conclusion Ben made against Nonlinear.]
That’s why I think sending the actual final draft of the post with some buffer time to compose a public-facing response is much fairer. I admit that refuting things in writing can be very time consuming, so it’s still helpful and good for the critic to offer a conversation. But if a conversation occurs (as it did here), I think a final draft and chance to write a response for the forum should still be offered, in addition. There’s no replacing seeing the precise claims as-written before the rest of your community does.
I understand you’re not interested in replies to this comment, but for the sake of other readers I’ll point out parts of it that seem wrong to me:
I think it’s reasonable for an employer to no longer want to hire you or work with you if you’re saying bad things about them, but I don’t think it’s appropriate for them to try to limit what you say beyond that. I think it’s not appropriate for your employer to try to hurt you or your future career prospects at other employers because you talked about having a bad time with them.
Whistleblower protections aren’t exactly analogous, because AIUI they’re about disclosure to government authorities, rather than to the general public, and that’s a significant enough difference that it makes sense to treat them separately. But it’s nevertheless interesting to note that if you disclose certain kinds of wrongdoing in certain ways, your employer isn’t even allowed to fire you, let alone retaliate beyond that. These rules are important: it’s difficult and unpleasant to be in that situation, but if that’s where you end up, protecting the employee over the employer is IMO the right call.
I get that threats like these are very painful for the people involved. However, I don’t think there’s any real non-painful way for people to confront the realities that they’ve hurt others through mistakes they’ve made, and there’s no non-painful way to say “we, as a community, must recommend that people guard themselves against being hurt by these people”. You hint that there are other ways to handle these things, but you don’t say what they are.
I think we could probably come up with a system that’s kinder to the accused than this one. However, granting that sometimes such a system would demand that we need to warn other prospective employees and funders about what happened, there’s no world that I can see that contains no posts like this. I think it’s reasonable to believe that Kat and Nonlinear should have had more time to make their case, but ultimately, if they fail to make their case, that fact must be made public, and there’s no enjoyable way to do that.
Massive thanks to Ben for writing this report and to Alice and Chloe for sharing their stories. Both took immense bravery.
There’s a lot of discussion on the meta-level on this post. I want to say that I believe Alice and Chloe. I currently want to keep my distance from Nonlinear, Kat and Emerson, and would caution others against funding or working with them. I don’t want to be part of a community that condones this sort of thing.
I’m not and never have been super-involved in this affair, but I reached out to the former employees following the earlier vague allegations against Nonlinear on the Forum, and after someone I know mentioned they’d heard bad things. It seemed important to know about this, because I had been a remote writing intern at Nonlinear, and Kat was still an occasional mentor to me (she’d message me with advice), and I didn’t want to support NL or promote them if it turned out that they had behaved badly.
Chloe and Alice’s stories had the ring of truth about them to me, and seemed consistent with my experiences with Emerson and Kat — albeit I didn’t know either of them that well and I didn’t have any strongly negative experiences with them.
It seems relevant to mention that Chloe and Alice were initially reluctant to talk to me about any of this. This is inconsistent with the claim that they are eager to spread vicious lies about NL at any chance they get.
I’m glad this is out in the open: it felt unhygienic to have this situation where there were whisperings and rumours but no-one felt empowered to be specific about anything.
We’re coming up on two weeks now since this post was published, with no substantive response from Nonlinear (other than this). I think it would be good to get an explicit timeline from Nonlinear on when we can expect to see their promised response. It’s reasonable to ask for folks to reserve judgement for a short time, but not indefinitely. @Kat Woods @Emerson Spartz
Another two weeks later, and with no response or acknowledgement from Nonlinear (or even a statement about when they plan to give a response), I’m personally updating moderately towards the view that Nonlinear’s communications around the initial release of this post were more about FUD/DARVO than honesty. I’ve also updated further towards the position that it was right for Ben to post when he did, and that delaying would have been playing into the hands of bad actors. These remain defeasible positions, but I’m not holding my breath.
Have no fear! We are responding. We’ve been working on this full time the entire time. We have over 200 pages written so far and are in the last stages of editing to the point where we’ll be able to get feedback from friends. We’re aiming to get this done in the next few weeks because we want to be working on things that actually help with AI. However, it’s a very large doc, it’s a hostile audience, it takes way more effort to debunk something than to say something, etc. Also, man, I really hate editing, so it’s a bit of a slog.
(Obviously didn’t mean to write over 200 pages. Just Ben accused us of a lot of things and we were writing in multiple documents, so didn’t see what had happened until it was too late 😛.)
@Kat Woods and @Emerson Spartz, any update on this?
Still working on it full time! I’d guess we’ll publish in the next 1-4 weeks. I really hope it is sooner. I want to finish this more than anybody.
Sorry for it taking so long. This is the first time I’ve ever done crisis communication and if I had a time machine, I’d do things differently.
Notably, it’s now been about twice as long as Nonlinear says they originally requested Ben to give them to prepare their side of the story (a week).
Balanced against that, whatever you think about the events described, this is likely to have been a very difficult experience to go through in such a public way from their perspective—one of them described it in this thread as “the worst thing to ever happen to me”. That may have affected their ability to respond promptly.
Just want to signal my agreement with this.
My personal guess is that Kat and Emerson acted in ways that were significantly bad for the wellbeing of others. My guess is also that they did so in a manner that calls for them to take responsibility: to apologise, reflect on their behaviour, and work on changing both their environment and their approach to others to ensure this doesn’t happen again. I’d guess that they have committed a genuine wrongdoing.
I also think that Kat and Emerson are humans, and this must have been a deeply distressing experience for them. I think it’s possible to have an element of sympathy and understanding towards them, without this undermining our capacity to also be supportive of people who may have been hurt as a result of Kat and Emerson’s actions.
Showing this sort of support might require that we think about how to relate with Nonlinear in the future. It might require expressing support for those who suffered and recognising how horrible it must have been. It might require that we think less well of Kat and Emerson. But I don’t think it requires that we entirely forget that Kat and Emerson are humans with human emotions and that this must be pretty difficult.
Of course, if they don’t post a response, at a certain point people might decide they lack further energy to invest in this and might therefore update their views (while retaining some uncertainty) and not read further materials. This is a reasonable practical response that is protective of one’s own emotional resources.
But while making this practical decision based on personal wellbeing, I think it’s also possible to recognise that Kat and Emerson might not be in a place to respond as rapidly here as they might hope to (and as we might hope they would).
Stated more eloquently than I could have, SYA.
I’d also add that, were I to be offering advice to K & E, I’d probably advise taking more time. Reacting aggressively or defensively is all too human when facing the hurricane of a community’s public opinion—and that is probably not in anyone’s best interest. Taking the time to sit with the issues, and later respond more reflectively as you describe, seems advisable.
I think there are practical reasons why it might take longer to prepare a comprehensive public response than the private response they were envisaging for Ben + Lightcone. That said, I also think that there are a lot of non-comprehensive responses that would have taken less time to write while still supporting their version of events, and I think it’s reasonable to update against Nonlinear in their absence.
Thanks for pointing this out! I had the impression they wanted time to prepare a public response that could go live contemporaneously with Ben’s post, but reading the comments from Kat and Emerson it looks like you’re right!
Post on everybody who’s living together and dating each other and giving each other grants when?
Clarification: I’m just kind of surprised to see some of the things in this post portrayed as bad when they are very common in EA orgs, like living together and being open to unconventional and kind of unclear boundaries and pay arrangements and especially conflicts of interest from dating coworkers and bosses. I worry that things we’re all letting slide could be used to discredit anybody if the momentum turns against them.
The brief relationship Alice had with Drew did not feature prominently in this post and/or what I take to be the central list of complaints. Nor did the main problematic issues in that paragraph seem to be a typical example of what I consider to be abuse of power in a romantic relationship qua romantic relationship:
(Tbh I’m a bit confused about the wording of that paragraph; it’s unclear to me from the paragraph why Kat couldn’t just request that Alice and Drew not date each other because dating employees is often a bad idea. By my lights, that would be a reasonable albeit hypocritical position, much more reasonable than asking someone to change romantic preferences writ large[1]).
I can imagine something like the following message to be invasive but understandable: “I noticed that the two of you appear to be getting close to each other. I really don’t want to pry into your personal life, but from my past experiences, coworkers in small companies usually shouldn’t date each other. Nonlinear doesn’t currently have a policy against employees dating each other unless one is a direct report of the other, but we should still consider this question. This is especially the case as we are only 5 people and one of you might manage the other going forwards. I know that this is hypocritical of me as Emerson and I decided to cofound this nonprofit after dating, but realistically increasing the number of romantic entanglements in this 5-person company is a risk factor.”
I strongly think that, like with FTX, we should wait two weeks (or more) before forming opinions and reaching conclusions, as more information comes out[1]. But I want to quickly highlight footnote 7, as I think it might be relevant for some people and a bit buried.
I really don’t know if the new information will be better or worse for Nonlinear, I just expect the picture to become clearer in a few weeks, and to be better to look for lessons then as opposed to now
Lorenzo, what if we view this situation as a kind of manipulation, similar to what we saw in the Kat incident when Alice’s reputation was targeted? Could it be possible that Kat has intentionally involved an employee in a romantic relationship to exert more control? While it’s a bit of a stretch, I’m curious if anyone has observed any similar behavior from Kat in the past. It might help shed light on her motivations here.
Sorry if this is repetitive, but I really mostly “strongly think that, like with FTX, we should wait two weeks (or more)” before evaluating the evidence and looking for what we can learn from this case.
I suspect that most of these behaviors are harmful for employer-employee dyads (and possibly harmful in other ways?):
living together,
being open to unconventional and kind of unclear boundaries,
being open to unconventional and kind of unclear pay arrangements,
being open to conflicts of interest from dating coworkers and bosses.
For each of these items (or for multiple items occurring together), I don’t think that they necessitate a harmful situation; I could imagine dating a co-worker or living together and it working out fine. But I do think that each of these serves as a factor which increases the probability of harm. In the ~30 seconds that I’ve spent thinking about it so far, what I come up with is “power” and “freedom.” I think that they aren’t damning in themselves, but they are like red flags: something to be extra cautious about because it often is correlated with (or causing) other problems.
I would be very happy to see someone write a post about these “together and dating each other and giving each other grants” experiences with actual examples, norms, suggestions, caveats, etc.
Or, you know, if we want to tighten that stuff up, I would also be interested. For now I think it doesn’t work well but I don’t consider it immoral, and a lot of the stuff in the post strikes me as in the same category.
Most of the allegations in the OP seem comfortably out-of-distribution to me. (Unless the distribution includes FTX & Leverage, but we know how those went.)
I think the sentiment in Lilly’s comment should be read not as saying that all of these behaviours are always bad, but that they are often in contravention of professional norms and that, sometimes, there’s a good reason those norms exist.
For what it’s worth, I think romantic entanglements between employees at different power levels (especially company leaders, or those who have power to fire others) are something to be concerned about. I have similar thoughts toward romantic relationships between grantors and grantees—those should be transparent by default in my opinion.
Yeah, my thought is pretty high-level, basically: a lot of professional norms exist for good reasons, and if we violate them—and especially if we violate a lot of them at the same time, as happened here—then this produces the kinds of circumstances in which these disputes tend to arise.
Certainly, there’s some cost-benefit here with respect to specific norms, and specific contexts, that could be, and I’m sure will continue to be, litigated. But everyone involved has been really harmed by this—in terms of their time being wasted, emotional energy sunk into this, and people’s reputations—and that just seems really unfortunate, given that it is not that hard to substantially reduce the risks of these kinds of things happening by adhering to standard professional norms.
Some of the things in this category seem bad, some of them good. I think if people walk away with a generic conventionality bias from reading this, then that seems like a mistake and a recipe for witch hunts. Most of the dynamics explored in this post feel predictable and the result of specific aspects of the context and environment that was set up, and I would feel pretty happy putting my predictions on the record on what kind of error modes we will see from different initial conditions, and for different existing organizations.
E.g. I will put it on the record that I don’t think Open Phil has really any of the dynamics that this post is talking about, despite me also having many disagreements with how Open Phil operates.
I’m just going to share my implicit model of this situation, which is that Lightcone is preoccupied with a particular kind of perceived integrity violation that Kat and Emerson are more okay with. I think a lot of people are more okay with cutting corners and especially with managing PR than people strongly influenced by the morality of the Sequences, but those people will read about these things that Ben thinks are bad for that reason and think they are bad because they are unconventional or not best practices at work. I think it’s easy to witch hunt on the basis of things that many EAs/rats do with tacit or explicit approval from the community without readers actually agreeing as to why you think it is bad. So I’m concerned that airing stuff like this could damage people’s reputations and not be useful enough in the way you intended—giving people information so they can decide who to trust—to justify the harm, paranoia, and witch hunting that may result.
Using “preoccupied” feels a bit strawmanny here. People using this situation as a way to enforce general conservativism in a naive way was one of the top concerns that kept coming up when I talked to Ben about the post and investigation.
The post has a lot of details that should allow people to make a more detailed model than “weird is bad”, but I don’t think it would be better for it to take a stronger stance on the causes of the problems that it’s providing evidence for, since getting the facts out is IMO more important.
It would seem low-integrity by my standards to decline to pursue this case because I would be worried that people would misunderstand the facts in a way that would cause inconvenient political movements for me. It seems like a lot of people have a justified interest in knowing what happened here, and I don’t want to optimize hard against that, just because they will predictably learn a different lesson than I have. The right thing to do is to argue in favor of my position after the facts are out, not to withhold information like this.
Also, I think the key components of this story are IMO mostly about the threats of retaliation and associated information control, which I think mostly comes across to readers (at least based on the comments I’ve seen so far), and also really doesn’t seem like it has much to do with general weirdness. If anything this kind of information control is more common in the broader world, and things like libel suits are more frequent.
The thing I think is potentially unfair is that ~Lightcone has its own strict morality about integrity violations. For instance, I don’t think on its face trying to control your reputation is bad in the same way that it seems to be a smoking gun to you. A lot of people reading this probably don’t either, but when it’s presented this way they see it as consistent with all the other infractions that were listed and really damning.
I think integrity violations are dangerous and corrosive, and I think it’s a good impulse to share information like that when you have it, but despite all the words here, I don’t think that info is properly contextualized for most people to use it correctly. It easily comes across as a laundry list of reasons to cancel them rather than the calibrated honest reporting it’s trying to be.
This continues to feel quite a bit too micromanagy to me. Mostly these are the complaints that seemed significant to Ben (which also roughly aligned with my assessment).
The post was already like 100+ hours of effort to write. I don’t think “more contextualizing” is a good use of at least our time (though if other people want to do this kind of job and would do more of that, then that seems maybe fine to me).
Like, again, I think if some people want to update that all weirdness is bad, then that’s up to them. It is not my job, and indeed would be a violation of what I consider cooperative behavior, to filter evidence so that the situation here only supports my (or Ben’s) position about how organizations should operate.
Yeah, I agree, I don’t think it’s worth the amount of contextualizing it would take to make this kind of info properly received and useful. I’m doubting that we can gossip productively in this forum in the way that, for example, you may have thought was needed for SBF. I think you and Ben have a rather complex worldview that explains why these incidents are very significant to you whereas superficially similar things that are common in EA are not (or are positive). I’m less concerned that weirdness will be discouraged and more concerned that people will be put on blast in a way that seems arbitrary to them and is hard for them to predict without e.g. seeking your permission first if they don’t want to be called out on main later. Being called out is very damaging and I don’t like the whiff of “you have nothing to fear if you have nothing to hide” that I’m getting in this comment section. It seems like the only defense against this kind of thing is never being successful enough to control money or hire employees.
I’m looking at maybe starting a new org in the next year and doing something that’s a little outside the Sequences morality (advocacy, involving politics and PR and being in coalitions with not-fully-aligned people). I really think it’s not only right but the best thing for me to be doing, but posts like this make me nervous that I could be subject to public shame for good faith disagreements and exploration. Feels extra shitty too because I tolerate other people doing pretty dumb things (from an organizational perspective) that I don’t do, like having sex parties with my (potential) coworkers, which for some reason are considered okay and not red flags for future misconduct in different domains.
Tbc, I think I have enough context to update usefully from this post. But I would guess less than 10% of readers do, and that a majority of readers will update somewhat negatively about Nonlinear for the bad/incomplete reasons I stated above. Your goal might not be fairness to Nonlinear, per se, and it doesn’t have to be. There are much bigger things at stake. But I think it’s harsh on them and chilling to others, and that cost should be weighed more than I think you guys are weighing it because you think you are just sharing openly (or something).
I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.
I hear you saying...
Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they’re not necessarily shared by the EA community or the broader world.
Under those norms, actions like threatening your ex-employees’ career prospects to prevent them from sharing negative info about you are very bad, while in broader culture a “you don’t badmouth me, I don’t badmouth you” ceasefire is pretty normal.
In this post, Ben is accusing Nonlinear of bad behavior. In particular, he’s accusing them of acting particularly badly (compared to some baseline of EA orgs) according to the integrity norms of lightcone culture.
My understanding is that the dynamic here that Ben considers particularly egregious is that Nonlinear allegedly took actions to silence their ex-employees, and prevent negative info from propagating. If all of the same events had occurred between Nonlinear, Alice, and Chloe, except for Nonlinear suppressing info about what happened after the fact, Ben would not have prioritized this.
However, many bystanders are likely to miss that subtlety. They see Nonlinear being accused, but don’t share Lightcone’s specific norms and culture.
So many readers, tracking the social momentum, walk away with the low-dimensional bottom line conclusion “Boo Nonlinear!”, but without particularly tracking Ben’s cruxes.
eg They have the takeaway “it’s irresponsible to date or live with your coworkers, and only irresponsible people do that” instead of “Some people in the ecosystem hold that suppressing negative info about your org is a major violation.”
And importantly, it means in practice, Nonlinear is getting unfairly punished for some behaviors that are actually quite common in the EA subculture.
This creates a dynamic analogous to “There are so many laws on the books that technically everyone is a criminal. So the police/government can harass or imprison anyone they choose, by selectively punishing crimes.” If enough social momentum gets mounted against an org, they can be lambasted for things that many orgs are “guilty” of[1], while the other orgs get off scot-free.
And furthermore, this creates unpredictability. People can’t tell whether their version of some behavior is objectionable or not.
So overall, Ben might be accusing Nonlinear for principled reasons, but to many bystanders, this is indistinguishable from accusing them for pretty common EA behaviors, by fiat. Which is a pretty scary precedent!
Am I understanding correctly?
“guilty” in quotes to suggest the ambiguity about whether the behaviors in question are actually bad or guiltworthy.
Yes, very good summary!
Ok. Given all that, is there particular thing that you wish Ben (or someone) had done differently here? Or are you mostly wanting to point out the dynamic?
Yes, I think a lot of commenters are almost certainly making bad updates about how to judge or how to run an EA org off of this, or are using it to support their own pre-existing ideas around this topic.
This kinda stinks, but I do think it is what happens by default. I hope the next big org founder picks up more nuance than that, from somewhere else?
That said, I don’t think “callout / inventory of grievances / complaints” and “nuanced post about how to run an org better/fix the errors of your ways” always have to be the same post. That would be a lot to take on, and Lesswrong is positioned at the periphery here, at best; doing information-gathering and sense-making from the periphery is really hard.
I feel like for the next… week to month… I view it as primarily Nonlinear’s ball (...and/or whoever it is who wants to fund them, or feels responsibility to provide oversight/rehab on them, if any do judge that worthwhile...) to shift the conversation towards “how to run things better.”
Given their currently demonstrated attitude, I am not starting out hugely optimistic here. But: I hope Nonlinear will rise to the occasion, and take the first stab at writing some soul-searching/error-analysis synthesis post that explains: “We initially tried THIS system/attitude to handle employees, in the era the complaints are from. We made the following (wrong in retrospect) assumptions. That worked out poorly. Now we try this other thing, and after trialing several things, X seems to go fine (see # other mentee/employee impressions). On further thought, we intend to make Y additional adjustment going forward. Also, we commit to avoiding situations where Z in the future. We admit that A looks sketchy to some, but we wish to signal that we intend to continue doing it, and defend that using logic B...”
I think giving Nonlinear the chance to show that they have thought through how to fix these issues/avoid generating them in the future, would be good. They are in what (should) be the best position to know what has happened or set up an investigation, and are probably the most invested in making sense of it (Emotions and motivated cognition come with that, so it’s a mixed bag, sure. I hope public scrutiny keeps them honest.). They are also probably the only ones who have the ability to enforce or monitor a within-org change in policy, and/or to undergo some personal-growth.
If Nonlinear is the one who creates it, this could be an opportunity to read a bit into how they are thinking about it, and for others to reevaluate how much they expect past behavior and mistakes to continue to accurately predict their future behavior, and judge how likely these people are to fix the genre of problems brought up here.
(If they do a bad job at this, or even just if they seem to have “missed a spot”: I do hope people will chime in at that point, with a bunch of more detailed and thoughtful models/commentary on how to run a weird experimental small EA org without this kind of problem emerging, in the comments. I think burnout is common, but experiences this bad are rare, especially as a pattern.)
((If Nonlinear fails to do this at all: Maybe it does fall to other people to… “digest some take-aways for them, on behalf of the audience, as a hypothetical exercise?” IDK. Personally, I’d like to see what they come up with first.))
...I do currently think the primary take-away that “this does not look like a good or healthy org for new EAs to do work for off-the-books, pls do not put yourself in that position” looks quite solid. In the absence of a high-level “Dialogue in the Comments: Meta Summary Post” comment: I do kinda wish Ben would elevate to a footnote the point, currently buried in the comments, that nobody seems to have brought up any serious complaints about Drew, though.
I do not want to actually do this, because I love Lightcone and I trust you guys, but would it help you understand if a redteamer wrote a post like this about your org? Would you be fine with all the donors that were turned off and the people who didn’t want to work with you because you had the stink of drama on you?
I think it depends a lot on what you mean by “a post like this”. Like, I do think I would just really like more investigation and more airing of suspicions around, and yeah, that includes people’s concerns with Lightcone.
I could see something like that working but probably in a different format. Maybe something closer to a social credit score voting/aggregation mechanism?
Still, the most upvoted comment on this post does seem to push in the direction of “weird is bad”:
Yep, not clear what to do about that. Seems kind of sad, and I’ve strong-downvoted the relevant comment. I don’t think it’s mine or Ben’s job to micromanage people’s models of how organizations should operate.
I share Holly’s appreciation for you all, and also the concern that Lightcone’s culture and your specific views of these problems don’t necessarily scale nor translate well outside of rat spheres of influence. I agree that’s sad, but I think it’s good for people to update their own views with that in mind.
My takeaways from all this are fairly confidently the following:*
EA orgs could do with following more “common sense” in their operations.
For example,
hire “normie” staff or contractors early on who are expected to know and enforce laws, financial regulations, and labor practices conscientiously, despite the costs of “red tape.” Build org knowledge and infrastructure for conscientious accounting, payroll, and contracting practices, like a “normal” non-profit or startup. After putting that in place, allow leaders to push back on red tape, but expect them to justify the costs of not following any “unnecessary” rules, rather than expecting junior employees to justify the costs of following rules.
don’t frequently mention a world-saving mission when trying to convince junior staff to do things they are hesitant to do. Focus on object level tasks and clear, org-level results instead. It’s fine to believe in the world-saving mission, obviously. But when you regularly invoke the potential for astronomical impact as a way to persuade junior staff to do things, you run a very high risk of creating manipulative pressure, suppressing disagreement, and short-circuiting their own judgment.
do not live with your employees. Living with peers might be ok, but junior and senior staff living together carries too high a risk of entanglement.
similarly, do not expect staff to be your “family” or tribe, nor treat them with familial intimacy. Expecting productivity is enough. Expect them to leave for other jobs regularly, for a lot of reasons. Wish them well, don’t take it personally.
I think these 4 guidelines would have prevented 90%+ of the problems Alice and Chloe experienced.
I expect we only agree on the 4th point?
[*I have not worked directly with anyone involved. I have, however, worked in a similar rat-based project environment that lacked ‘normal’ professional boundaries. It left me seriously hurt, bewildered, isolated, and with a deeply distressing blow to my finances and sense of self, despite everyone’s good intentions. I resonated with Alice and Chloe a lot, even without dealing with any adversarial views like those attributed to Emerson.
I think the guidelines above would have prevented about 70% of my distress.
I believe Richenda and Minh that they’ve had good experiences with Kat. I had many positive experiences too on my project. I think it’s possible to have neutral to positive experiences with someone with Kat’s views, but only with much better boundaries in place].
I get that this applies to some parts of these critiques. But to me, the allegations made here seem to be on another level, particularly asking an intern to take on unnecessary legal risk.
I agree that these things are very common in EA—but IMO the conclusion is that EA should stop letting these things slide.
Some of them are not bad! Idk, I’m just saying we can’t be all about experimentation and openness and then clamp down on people as if they “broke the rules”. What rules? Those need to be more clear if there is going to be community policing.
This is a good chance for people to write posts about what the rules should be.
You get there by having more discourse not less.
As one of the people Ben interviewed:
This post closely reflects my understanding of the situation. (EDIT: at this time, before engaging with Nonlinear reply myself.)
Whenever this post touches on something that I can independently corroborate (EDIT: small minority of claims), I believe it to be accurate. Whenever the post communicates something that both Ben and I have heard from Alice and Chloe (EDIT: large majority of claims), it tells their account faithfully.
I appreciate Ben’s emphasis on red lines and the experiences of Alice and Chloe. When he leaves out stories that I think we are both aware of, my guess is that he has done so because these stories aren’t super relevant to the case at hand or aren’t super objective/strongly evidenced. This makes me think more favourably of the rest of his write-up.
Sorry, maybe this is addressed elsewhere, but what relationship have you had with Nonlinear?
Nonlinear staff were participants on the FTX EA program, which I ran, and where I was in part responsible for participant welfare. Some of the important events took place in this period. This led me to start supporting Alice and Chloe. I have continued to be involved in the case on-and-off since then.
Appreciate the comment Joel :)
(And of course, if you later come to have a critical/negative opinion of parts of my post, you’re v welcome to share those too!)
Yes, I think that the post does not do enough to make it clear that the central allegations are not about Drew Spartz. Happy to expand.
That sounds quite plausible. Will do a re-read of my post (and my notes) to check what I say, and think about what edit/additions are worth making.
(Will come back for that tomorrow. I’m signing off for today and taking the evening to rest.)
Thank you Ben—please check comment mentions of Drew, too!
I looked through all the mentions of his behavior in the post. I think only one of them is plausibly misleading. I say
I only have reports of intimidating actions from Emerson and Kat, not Drew. I don’t have any reason to think he reduced the level of intimidation, but I don’t want to convey that I know of positive acts of intimidation that he took, beyond broadly participating in the dynamics set up by Emerson and Kat and being supportive of his brother. I’ve edited that bit and included it in an addendum collecting all edits.
Speaking from my perspective, not from anyone else’s (e.g. Alice’s, Chloe’s, yours) I don’t see Drew as exonerated from the dynamics at Nonlinear, even while I think that Emerson and Kat are each substantially more responsible for them.
I think the best thing to be said in his favor is that Alice felt he was the only one of the three of them to really hear her concerns (e.g. financially) and sometimes advocate for her needs.
Here’s another thing.
Let’s suppose that Nonlinear have crossed red lines, and that additional information from them won’t change this. (In reality I think that this is up in the air for the next week or so; I won’t allow my limited imagination to diminish the hope.)
Do you not believe in the possibility of rehabilitation in this case?
I haven’t read up on what norms here work well in other high-trust communities. But at least in criminal-justice settings I would want to be a strong proponent of rehabilitation. It seems pretty plausible to me that, after thinking more about best norms in high-trust communities, I could come to think that “create horrendous work environment” and “create credible fear of severe retaliation” were things that could change (and be monitored) upon rehabilitation, and that it would be good for this to happen after X period of time.
I don’t mean to imply that I couldn’t see evidence that persuaded me that this concern had been sufficiently mitigated. But silencing and intimidating into being quiet is a problem that self-reinforces — when it’s happening, it stops you from learning about it, and about anything else bad that’s happening. So I think it’s important to take a much more hardline stance against it than with other norm-violations, even if the two norm-violations caused a similar amount of damage.
The people involved may deserve some sort of rehabilitation—the company should not. And then also, there’s a question of whether this process would allow them to run EA organizations and not just take part in them.
Edit: I used the word ‘company’, but I mean any organisation. I don’t know the status of NL.
Why do you distinguish between person and company in this respect?
Because there’s barely anything relevant that is common to both. We don’t have any moral obligation to companies, nor does it make the world better in my view to “rehabilitate” companies. A person has to continue existing in society even after committing a crime. A company doesn’t have to continue existing.
I think the question of what to do with wrongdoers is a complex and difficult one. I will say that I think rehabilitation of criminals is important because there’s really no alternative for them—they’re outcasts from society as a whole, so we reintegrate them or they have no life at all. By contrast, we would not be entirely destroying someone’s life by expelling them from EA funding and networking circles—if you feel like being expelled from EA would destroy your life, it’s already time to start building an independent support network IMO.
One example of the evidence we’re gathering
We are working hard on a point-by-point response to Ben’s article, but wanted to provide a quick example of the sort of evidence we are preparing to share:
Her claim: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”
The truth (see screenshots below):
There was vegan food in the house (oatmeal, quinoa, mixed nuts, prunes, peanuts, tomatoes, cereal, oranges) which we offered to cook for her.
We were picking up vegan food for her.
Months later, after our relationship deteriorated, she went around telling many people that we starved her. She included details that depict us in a maximally damaging light—what could be more abusive than refusing to care for a sick girl, alone in a foreign country? And if someone told you that, you’d probably believe them, because who would make something like that up?
Evidence
The screenshots below show Kat offering Alice the vegan food in the house (oatmeal, quinoa, cereal, etc), on the first day she was sick. Then, when she wasn’t interested in us bringing/preparing those, I told her to ask Drew to go pick up food, and Drew said yes. Kat also left the house and went and grabbed mashed potatoes for her nearby.
See more screenshots here of Drew’s conversations with her.
Initially, we heard she was telling people that she “didn’t eat for days,” but she seems to have adjusted her claim to “barely ate” for “2 days”.
It’s important to note that Alice didn’t lie about something small and unimportant. She accused us of a deeply unethical act—the kind that most people would hear and instantly think you must be a horrible human—and was caught lying.
We believe many people in EA heard this lie and updated unfavorably towards us. A single false rumor like this can unfairly damage someone’s ability to do good, and this is just one among many she told.
We have job contracts, interview recordings, receipts, chat histories, and more, which we are working full-time on preparing.
This claim was a few sentences in Ben’s article but took us hours to refute because we had to track down all of the conversations, make them readable, add context, anonymize people, check our facts, and write up an explanation that was rigorous and clear. Ben’s article is over 10,000 words and we’re working as fast as we can to respond to every point he made.
Again, we are not asking for the community to believe us unconditionally. We want to show everybody all of the evidence and also take responsibility for the mistakes we made.
We’re just asking that you not overupdate on hearing just one side, and keep an open mind for the evidence we’ll be sharing as soon as we can.
It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here’s what I came away with:
On December 15, Alice states that she’d had very little to eat all day, that she’d repeatedly tried and failed to find a way to order takeout to their location, and asks that people go to Burger King and get her an Impossible Burger, which in the linked screenshots they decline to do because they don’t want to get fast food. She asks again about Burger King and is told it’s inconvenient to get there. Instead, they go to a different restaurant and offer to get her something from there. Alice looks at the menu online and sees that there are no vegan options. Drew confirms that ‘they have some salads’ but nothing else for her. She assures him that it’s fine to not get her anything.
It seems completely reasonable that Alice remembers this as ‘she was barely eating, and no one in the house was willing to go out and get her vegan food’ - after all, the end result of all of those message exchanges was no food being obtained for Alice and her requests for Burger King being repeatedly deflected with ‘we are down to get anything that isn’t fast food’ and ‘we are down to go anywhere within a 12 min drive’ and ‘our only criteria is decent vibe + not fast food’, after which she fails to find a restaurant meeting those (I note, kind of restrictive if not in a highly dense area) criteria and they go somewhere without vegan options and don’t get her anything to eat.
It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice’s language throughout emphasizes how she’ll be fine, it’s no big deal, she’s so grateful that they tried (even though they failed and she didn’t get any food out of the 12/15 trip, if I understand correctly). I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people. But it doesn’t seem to me that Alice is lying when she says she experienced this as ‘she had covid, was barely eating, told people she was barely eating, and they declined to pick up Burger King for her because they didn’t want to go to a fast food restaurant, and instead gave her very limiting criteria and went somewhere that didn’t have any options she could eat’.
On December 16th it does look like they successfully purchased food for her.
My big takeaway from these exchanges is not that the Nonlinear team are heartless or insane people, but that this degree of professional and personal entanglement and dependence, in a foreign country, with a young person, is simply a recipe for disaster. Alice’s needs in the 12/15 chat logs are acutely not being met. She’s hungry, she’s sick, she conveys that she has barely eaten, she evidently really wants someone to go to BK and get an impossible burger for her, but (speculatively) because of this professional/personal entanglement, she lobbies for this only by asking a few times why they ruled out Burger King, and ultimately doesn’t protest when they instead go somewhere without food she can eat, assuring them it’s completely fine. This is also how I relate to my coworkers, tbh—but luckily, I don’t live with them and exclusively socialize with them and depend on them completely when sick!!
Given my experience with talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th with acute distress and remembers it as ‘not getting her needs met despite trying quite hard to do so’, and the Nonlinear team remembers that they went out of their way that week to get Alice food—which, based on the logs from the 16th, is clearly true! But I don’t think I’d call Alice a liar based on reading this, because she did express that she’d barely eaten and apologetically request that they go somewhere she could get vegan food (with BK the only option she’d been able to find), only for them to refuse BK because of the vibes/inconvenience.
I should also add that this (including the question of whether Alice is credible) is not very important to my overall evaluation of the situation, and I’d appreciate it if Nonlinear spent their limited resources on the claims that I think are most shocking and most important, such as the claim that Woods said “your career in EA would be over with a few DMs” to a former employee after the former employee was rumored to have complained about the company.
I agree that this is a way more important incident, but I downvoted this comment because:
I don’t want to discourage Nonlinear from nitpicking smaller claims. A lot of what worries people here is a gestalt impression that Nonlinear is callous and manipulative; if that impression is wrong, it will probably be because of systematic distortions in many claims, and it will probably be hard to un-convince people of the impression without weighing in on lots of the claims, both major and minor.
I expect some correlation between “this concern is easier to properly and fully address” and “this concern is more minor”, so I think it’s normal and to be expected that Nonlinear would start with relatively-minor stuff.
I do think it’s good to state your cruxes, but people’s cruxes will vary some; I’d rather that Nonlinear overshare and try to cover everything, and I don’t want to locally punish them for addressing a serious concern even if it’s not the top concern. “I’d appreciate if Nonlinear spent their limited resources...” makes it sound like you didn’t want Nonlinear to address the veganism thing at all, which I think would have been a mistake.
I’m generally wary of the dynamic “someone makes a criticism of Nonlinear, Nonlinear addresses it in a way that’s at least partly exculpatory, but then a third party steps in to say ‘you shouldn’t have addressed that claim, it’s not the one I care about the most’”. This (a) makes it more likely that Nonlinear will feel pushed into not-correcting-the-record on all sorts of false claims, and (b) makes it more likely that EAs will fail to properly dwell on each data point and update on it (because they can always respond to a refutation of X by saying ‘but what about Y?!’ when the list of criticisms is this danged long).
I also think it’s pretty normal and fine to need a week to properly address big concerns. Maybe you’ve forgotten a bunch of the details and need to fact-check things. Maybe you’re emotionally processing stuff and need another 24h to draft a thing that you trust to be free of motivated reasoning.
I think it’s fine to take some time, and I also think it’s fine to break off some especially-easy-to-address points and respond to those faster.
Well put, Rob—you changed my mind
Yep this changed my mind as well—thank you!
I’d have thought “Emerson boasted about paying someone to stalk an enemy” was the most shocking claim. (Not that you said otherwise.) It surprises me how little the discussion has been focused on that. Whether or not it’s worse, it is way weirder than “threatened to get an employee blacklisted for saying bad things about them”.
I find the idea of doing that absolutely awful and I’ve never done anything like that. Unfortunately, it’s a lie there is no possibility of defending myself from, since it’s hearsay from an anonymous source.
To clarify, do you mean you have never asked/recruited someone to stalk, intimidate, or harass someone else, or do you mean you have never boasted about it?
Neither!
I can tell you that someone was quite actively scared of you doing something like this, and believed you to have said it to them. I wasn’t there myself so I cannot confirm whether it’s a mishearing or whatever.
There’s a broader question that I am often confused about: whether it’s good or bad to think carefully about how to really deceive someone, or really hurt someone, even if it’s motivated defensively. Then people can be unsure about the boundaries of whether you’ll use it against them. If someone were to tell you that they know general skills to get people fired, or get people swatted, or get people on immigration black-lists for certain countries, this information inherently makes them a more worrying person to be in conflicts with. Even if they say they’d only do it when it was justified. It’s one reason why I find myself trying to avoid simple games of deception like Werewolf: I’d prefer to not have practiced lying in general, so that my friends and allies have less reason to think I’m good at deception.
My current guess is that you can wield some of these normally-unethical weapons if you also have sent pretty credible signals about what principles you use to decide whether to use them, and otherwise it’s not much good to figure out how you would really hurt someone, as it predictably leads to people being very scared and intimidated.
‘or get people swatted, or get people on immigration black-lists for certain countries,’
I find it pretty hard to come up with a realistic scenario where these would ever be justified.
100% agreed with this. The chat log paints a wildly different picture than what was included in Ben’s original post.
Agreed. I did update toward “there’s likely a nontrivial amount of distortion in Alice’s retelling of other things”, and toward “normal human error and miscommunication played a larger role in some of the Bad Stuff that happened than I previously expected”. (Ben’s post was still a giant negative update for me about Nonlinear, but Kat’s comment is a smaller update in the opposite direction.)
I think a crux in this is what you think the reason is for Alice being so unassertive towards Kat in the messages—was it because she’s worried, based on experience, about angering her employers and causing negative consequences for herself, e.g. them saying she’s being too difficult and refusing to help her at all, or some other reason, more favourable to Kat and Emerson?
Yeah, though if I learned “Alice is just not the sort of person to loudly advocate for herself” it wouldn’t update me much about Nonlinear at this point, because (a) I already have a fair amount of probability on that based on the publicly shared information, and (b) my main concerns are about stuff like “is Nonlinear super cutthroat and manipulative?” and “does Nonlinear try to scare people into not criticizing Nonlinear?”.
Those concerns are less directly connected to the vegan-food thing, and might be tricky to empirically distinguish from the hypothesis “Alice and/or Chloe aren’t generally the sort of people to be loud about their preferences” if we focus on the food situation.
(Though I cared about the vegan-food thing in its own right too, and am glad Kat shared more details.)
Yeah makes sense, I meant a crux for the food thing specifically :)
We definitely did not fail to get her food, so I think there has been a misunderstanding—it says in the texts below that Alice told Drew not to worry about getting food because I went and got her mashed potatoes. Ben mentioned the mashed potatoes in the main post, but we forgot to mention it again in our comment—which has been updated.
The texts involved on 12/15/21:
I also offered to cook the vegan food we had in the house for her.
I think that there’s a big difference between telling everyone “I didn’t get the food I wanted, but they did get/offer to cook me vegan food, and I told them it was ok!” and “they refused to get me vegan food and I barely ate for 2 days”.
Also, re: “because of this professional/personal entanglement”—at this point, Alice was just a friend traveling with us. There were no professional entanglements.
Agreed.
This also updates me about Kat’s take (as summarized by Ben Pace in the OP):
When I read the post, I didn’t see any particular reason for Kat to think this, and I worried it might just be an attempt to dismiss a critic, given the aggressive way Nonlinear otherwise seems to have responded to criticisms.
With this new info, it now seems plausible to me that Kat was correct (even though I don’t think this justifies threatening Alice or Ben in the way Kat and Emerson did). And if Kat’s not correct, I still update that Kat was probably accurately stating her epistemic state, and that a lot of reasonable people might have reached the same epistemic state.
The post says
Is that incorrect? When did Alice start working for Nonlinear?
Yes, that is incorrect. One of many such factual inaccuracies and why we told Ben to give us a week. The exact date is not simple to explain, since she gradually began working with us, but we will clarify ASAP.
Yeah, I mis-wrote there, will update that line in the post (though I say it correctly a few paragraphs later[1]). They traveled together between those dates.
From my perspective it’s fairly ambiguous at what point Alice started “working” for Nonlinear.
On the call with me, Kat said (roughly verbatim) “If you asked each of me/Emerson/Drew at what point Alice became an employee, we’d each give three different answers.” Kat said that her answer was at the end of February when they claim that they started paying Alice $1k/month, and that was when Kat started giving her Nonlinear tasks and negotiated things like vacation time.
Emerson said he didn’t know that was happening and thought he was just giving Alice a gift, and Alice reports after she quit he said he’d never thought of her as an employee.
Alice herself reports that the first conversation she had with Emerson, at EAG, lasting 4 hours, they explicitly discussed working together and salary. She says she walked away from that conversation believing they’d agreed that if she were to work for him, she would need $2.5k-$3k/month in salary to make ends meet, and was generally expecting to eventually work with Nonlinear when they travelled.
So as I say, IMO it’s quite confusing when Alice started being an employee in different people’s minds, but my current guess is that end of February, when Kat thinks she started managing her, would be a reasonable choice.
Added: In this December 28th comment Kat Woods says that they “are also incubating a promising woman for an as-yet-unspecified charity”, which is Alice. So I think it’s accurate to think of her as in their incubator at least at that point.
I write:
I think it’s telling that Kat thinks the texts speak in their favor. Reading them was quite triggering for me, because I see a scared person, who asks for basic things, from the only people she has around her, to help her in a really difficult situation, and is made to feel like she is asking for too much, has to repeatedly advocate for herself (while sick), and still doesn’t get her needs met. On one hand, she is encouraged by Kat to ask for help, but practically it’s not happening. Emerson and Drew especially, in that second thread, sounded like she is being difficult, and they constantly pushed her to ask for less or for something other than what she asked for. Seriously, it took 2.5 hours the first day to get a salad, which she didn’t want in the first place?! And the second day it’s a vegetarian, not vegan, burger.
The way Alice constantly mentions that she doesn’t want to bother them and says that things are fine when they are clearly not is very upsetting. I can’t speak to how Alice felt, but it’s no wonder she reports this as not being helped/fed when she was sick. To me, this is accurate, whether or not she got a salad and a vegetarian burger the next day.
Honestly, the burger-gate is a bit ridiculous. Ben did report in the original article that you disputed these claims (with quite a lot of detail) so he reported it accurately. To me, that was enough to not update too much based on this. I don’t think it warranted the strongly worded letter to the Lightcone team and the subsequent dramatic claims about evidence that you want to provide to clear your name.
It sounds like you’re interpreting Nonlinear folks as saying that the burger incident was the only false claim in Ben’s piece?
My interpretation is that Nonlinear objects to many claims in the piece but published this one (“one example of the evidence we’re gathering...”) in response to encouragement that they give examples of claims they object to. Probably because this was some combination of quicker to respond to / Nonlinear thought it looked better for them / easier for outsiders to understand.
That seems fair, but I don’t have any other concrete information, so for now, that’s my position based on the information I have. It may change based on whatever else becomes available, but I am skeptical of the value of any additional material that Nonlinear presents, because it seems that they covered all their main concerns already during their call with Ben, and because this attempt to “provide evidence” backfired and in my opinion gives more credibility to Alice, not less. If this is an example of a “100% provable false claim” and a reason to call Alice a “bald-faced liar”, then the letter was absolutely disproportionate.
The claim in the post was “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”. (Bolding added)
If you look at the chat messages, you’ll see we have screenshots demonstrating that:
1. There was vegan food in the house, which we offered her.
2. I personally went out, while I was sick myself, to buy vegan food for her (mashed potatoes) and cooked it for her and brought it to her.
I would be fine if she told people that she was hungry when she was sick, and she felt sad and stressed. Or that she was hungry but wasn’t interested in any of the food we had in the house and we didn’t get her Burger King.
But I think that there’s a big difference between telling everyone “I didn’t get the food I wanted, but they did get/offer to cook me vegan food, and I told them it was ok!” and “they refused to get me vegan food and I barely ate for 2 days”
I have sympathy for Alice. She was hungry (because of her fighting with a boyfriend [not Drew] in the morning and having a light breakfast) and she was sick. That sucks, and I feel for her. And that’s why I tried (and succeeded) in getting her vegan food.
In summary. “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”. (Bolding added) This makes us sound like terrible people.
What actually happened: she was sick and hungry, and we offered to cook or bring over the vegan options in the house, then went out and bought and cooked her vegan food. We tried to take care of our sick friend (she wasn’t working for us at the time), and we fed her while she was sick.
I encourage you to read the full post here, where I’m trying to add more details and address more points as they come up.
Each time I see this comment pasted (1, 2, 3, 4, 5) I need to figure out whether to read through it to look for changes from the other times. What would you think of linking to your existing comments instead of pasting multiple duplicate comments?
(Also this one (6, 7) but since that’s only two it’s less of an issue)
Many people like me might only read one or two of these replies, and I at least find it easier to see the full thing written out even if there’s a lot of repetition.
Minor issue, but I personally prefer in most cases to see things in the body rather than have to follow a link, then having to find my way back to the original discussion afterwards.
Unless it’s pages long then I prefer a link.
I think it might be helpful to imagine what the forum would look like if this were a typical approach?
Also, consider what it does to the tree of the discussion: every duplicate comment forks the tree. Imagine I think there is something false in the comment that its readers should know about. I write it on the first copy that appears, but every later copy will not have my response. I could try to track the copy-pasting and duplicate my response as well, but that requires manual tracking and is even noisier.
Thanks I didn’t think of that.
Maybe a bit of inconvenience for people like me is worth it to prevent potential mess like you say.
I’d be particularly interested in evidence against claims that Nonlinear staff enforced (or encouraged) policies that prevent whistleblowing or the spread of negative information about Nonlinear and its staff (though my guess is that countering these claims is going to be a lower priority than some of the other more shocking ones).
It’s worth noting that empathetic members on the forum seem to recognize that Alice, as evidenced by her messages, is someone earnestly seeking assistance while being mindful not to cause any disruptions. Regrettably, individuals who have experienced psychological abuse often tend to exhibit such behaviors—expressing excessive apologies and striving to avoid any form of disturbance or trouble. While I lack formal training in psychology, I am prompted to consider whether a qualified practitioner might discern traits indicative of co-dependency and narcissistic abuse within the communicated messages. This, naturally, remains open to interpretation.
Of more substantial note, however, is the observation that the initial responses have been directed towards seemingly inconsequential elements. This particular approach is reminiscent of tactics employed in portrayals of legal proceedings within dramatic television series, where the objective is to unveil minor falsehoods in an attempt to cast doubt upon the credibility of a witness. The implication is that if one can establish a falsehood in a trivial matter, it follows that untruths may similarly permeate more significant claims.
What the Nonlinear team may potentially underestimate is the discerning nature of the audience assembled within this forum. Comprising educated, intelligent, and seasoned individuals, they are not easily swayed by manipulative techniques. The community holds truth and high ethical standards in high regard. To maintain discourse at a level commensurate with the forum’s expectations, addressing the most substantial allegations may be a more prudent course of action.
Just to identify a crux, do you think it’s acceptable that someone in your duty of care doesn’t eat hot food for a day whilst they are sick?
Can you explain? She did eat.
On December 15, your screenshots seem to illustrate that you were not able to provide her hot (vegan) food, despite >2 hours of text messaging with both Kat and Drew.
I can’t speak for Elliot but happy to help you dig that hole for yourself. Did she eat on the 15th? Or rather, did any of you help her eat proper nutritious meals that are appropriate for a sick vegan? On the 15th, or the 16th?
Again, it’s ridiculous to keep discussing this as it seems not to be a crux for people, but it’s so revealing that you think you are in the right here.
Some thoughts:
Vibewise, I find this unsurprising. I am not utterly shocked by any of the above and am not particularly surprised by much of it. Nonlinear does have a move-fast-and-break-things approach, and I imagine they have patterns of behaviour that have predictably hurt people in the past. As evidence of this, I made a market about it 8 months ago.
I like the nonlinear team personally and guess they do good/interesting work. I thought superlinear was a good test-case of bounty-based funding. I also use the Non-Linear library and find it valuable. I am confident that Kat, Emerson and Drew have good intentions on a deep level.
The above statements can both be true.
Thanks Ben for doing this, I think it was brave and good.
I particularly like this advice “Going forward I think anyone who works with Kat Woods, Emerson Spartz, or Drew Spartz, should sign legal employment contracts, and make sure all financial agreements are written down in emails and messages that the employee has possession of. I think all people considering employment by the above people at any non-profits they run should take salaries where money is wired to their bank accounts, and not do unpaid work or work that is compensated by ways that don’t primarily include a salary being wired to their bank accounts.”
I think that EA is a very high trust ecosystem, and I guess maybe Nonlinear shouldn’t be given that trust. But after reading the above, it’s up to you. I might advise the median EA not to work for them, and I’d advise them not to hire people unless they are pretty hard-nosed, though seemingly Kat has said she would change things anyway.
I am pretty curious about @spencerg’s statement that a recent draft of this post contained many simple errors. That seems notable.
I think the key question here is “Will such events happen again and what is the acceptable threshold for that chance”
If Nonlinear were welcomed unambiguously back into the EA fold I don’t think I’d be that confident that there wouldn’t be more stories like this in the next year. Maybe 20% that there would be.
I guess most people think that is above the acceptable tolerance.
I think it’s above tolerance for me too. It seems much higher than any other org I can think of. Notably it seems very avoidable.
I guess I do think things trade off against one another, and maybe I’d like a way for us to say “this org is really effective but we don’t recommend most people work for them”. This is the sort of stance many have towards non-safety AI work as a means to upskill.
I think rather than this being seen as punishment it can be seen as acceptable boundary setting—communities are allowed to want people given status and others not. This action will lower Nonlinear’s status and as a group we can choose to do that. Generally I think about things in terms of bad/unacceptable behaviour rather than bad people and I think a community gets to set the level of predictable bad behaviour it is exposed to.
What could Nonlinear do to convince me that they deserve the same levels of easy trust as other EA orgs? [1]
Provide evidence that this article is deeply flawed
Not threaten to sue Lightcone for publishing it
Acknowledge the patterns of behaviour that led to these outcomes and how they have changed—this would probably involve acknowledging that lots of these events took place
They could just decide that they don’t want this—they want to work in a different way to how EA orgs tend to, and that’s fine-adjacent, but then I would recommend they be treated differently too. I wonder sometimes whether it’s good to have, say, rationalism as a space willing to deal with less standard norms.
In my opinion too much post-FTX discussion has focused on individual EA behaviour and too little on powerful EA behaviour. I think the median EA trying to avoid downside risk is overrated. As one’s influence grows, the harm one does scales, and it becomes better value to try to limit the bad and increase the good more. I am much more interested in a discussion of what Nonlinear should or shouldn’t do than for Catherine Richardson from Putney to worry if she is spending too much money on rent.
Note that again, I don’t think they should close down, I’m just not sure they should be invited to present at EAGs, and I’d be happy for this to sit on a forum wiki page about them
lol
Some confidentiality constraints have been lifted in the last few days, so I’m now able to share more information from the Community Health and Special Projects team to give people a sense of how this case went from our perspective, and how we think about these things.
Previous updates:
General statement
An incomplete list of actions we’ve taken to reduce risk of other people ending up in similarly bad situations.
To give a picture of how things happened over time:
Starting mid last year, our team heard about many of the concerns mentioned in this post.
At the time of our initial conversations with former staff/associates of Nonlinear, they were understandably reluctant for us to do anything that would let on to Nonlinear that they were raising complaints. This limited our ability to hear Nonlinear’s side of the story, though members of our team did have some conversations with Kat that touched on some of these topics. It also meant that the former staff/associates did not give permission at that time for us to take some steps that we suggested. They also suggested some steps that we didn’t see as feasible for us.
At one point we discussed the possibility of the ex-staff writing a public post of some kind, but at that time they were understandably unwilling to do this. Our impression is that the impetus for that eventually coming together was Ben being willing to put in a lot of work.
Over time, confidentiality became less of a constraint. The people raising the concerns became more willing to have information shared, and some people made public comments, meaning we were able to take some more actions without compromising confidentiality. We were then able to take some steps including what we describe here, and pointing various people to the publicly available claims, to reduce the risk of other people ending up in bad situations.
We had been considering taking more steps when we heard Ben was working with the former staff/associates on a public post. We felt that this public post might make some of those steps less necessary. We kept collecting information about Nonlinear, but did not do as much as we might have done had Ben not been working on this.
We continued to track Nonlinear and were ready to prioritise the case more highly if it seemed that the risk to others in the community was rising.
Catherine, you and Nicole are both CH team members who advise the EAIF and the LTFF. Given that CH “heard about many of the concerns mentioned in [Ben’s] post” in mid-2022, did either of you share those concerns with the EAIF team prior to that fund granting $73k to Nonlinear in 4Q22?
You’ve previously written that “meta work like incubating new charities, advising inexperienced charity entrepreneurs, and influencing funding decisions should be done by people with particularly good judgement about how to run strong organisations, in addition to having admirable intentions.” Since the grant was for work of that type (“6-12 months of salary for 3 experienced EAs to set up an EA recruiting/hiring agency”), I would think the case for raising the concerns you’d heard about with the EAIF management team would be particularly strong. If you did not share those concerns, what was the rationale?
I don’t know if the grant information is accurate (there’s a disclaimer on the page), but if it is, this is pretty shocking. I would appreciate clarification on this.
Catherine from Community Health here. I was aware of this grant application. After discussion with my colleagues in Community Health who were also aware of the same concerns about Nonlinear mentioned in this post, I decided not to advise EAIF to decline this application. Some of the reasons for that were:
The funding was for a project run by three other people (not Nonlinear staff), and I had no concerns about those people working on this project
The three people were not going to be living with Kat and Emerson, which made risks to them lower
At that stage, I had heard some but not all of the complaints listed in this post, so I didn’t have the same picture as I do now. The complaints were confidential, which constrained the possible moves I could make – I wasn’t able to get more information, and I couldn’t share information with the EAIF team that might lead to someone identifying the complainant or Nonlinear guessing that someone complaining had affected their grant decision.
I could and did put some risk mitigation measures in place, in particular, by requiring the grant to be made on the condition that they set up an incubation contract to formalise the roles, reducing the risk that the incubatees and Nonlinear would have different expectations of access to funds and ownership of the project (which was one of the problems Alice reported).
I didn’t request that EAIF send the money directly to the three people involved in the project, rather than Nonlinear, but I was pleased that it happened.
Looking back, given the information and constraints I had at the time, I think this was a reasonable decision.
Just in case it wasn’t clear from Catherine’s comment, if Catherine hadn’t recommended that we require an incubation contract, it’s very unlikely that we would have asked for one. In light of Ben’s post, setting up this contract seems like a very good decision.
The EAIF did make a grant for $73k—but it was to a project that Nonlinear was incubating (not to Nonlinear themselves)—I’ll update the website to reflect this. Looking at the email thread for this grant now, we actually made the grant out to a separate company (the new hiring agency), so the money never went through Nonlinear, and we required (at Community Health’s advice) that they set up an incubation contract to formalise the roles, responsibilities, and decision-making between the founders and Nonlinear.
I’ll let the com health team speak for themselves, but I think given the information that we had the grant was reasonable and looking back I am happy with the advice com health gave us.
Thanks for clarifying that Caleb, that does seem substantially less problematic than granting to Nonlinear themselves.
Thanks for flagging the disclaimer (“Please note that this page is in beta testing and grant data may not be accurate”), I’d missed that.
In your earlier post, you write:
Nonlinear has not been invited or permitted to run sessions or give talks relating to their work, or host a recruiting table at EAG and EAGx conferences this year.
And
Kat ran a session on a personal topic at EAG Bay Area 2023 in February. EDIT: Kat, Emerson and Drew also had a community office hour slot at that conference.
Community office hours are an event that organizers invite you to sign up for (not all EAG attendees can sign up). While not as prominent as a recruiting table or talk, they still signal status to the attendees.
Given that public comments were made as early as November, it seems that there was sufficient time to ensure they were excluded from the event in February. Additionally, even if you don’t table at EAG, you can still actively recruit via 1-1 meetings.
I think the lack of acknowledgement or explanation of how this choice happened—and whether CHT sees this as a mistake—worries me, especially now that the anonymity constraints have been lifted.
I agree with all of this, and hope the CH team responds. I’d also add that the video of Kat’s talk has a prominent spot on the EAG 2023 playlist on CEA’s official youtube channel. That video has nearly 600 views.
Can you disclose the specifics of some or all of these steps and the reasons why you didn’t think they were feasible?
I have no knowledge specific to Puerto Rico, but my understanding is that by far the most important risk incurred when driving without a license is that an unlicensed driver will also almost certainly be uninsured or be in violation of the terms of their insurance such that their insurance will decline claims related to unlicensed driving they were doing, and therefore that an unlicensed driver would potentially be liable for extraordinary sums of money if they were to get into an accident for which they were at fault. Was this person insured? Did the car insurance policy allow unlicensed drivers? What would have happened if there had been an at-fault car accident with another driver?
I initially upvoted/delta’d/insightful’d this, but on looking into it further I don’t think that this concern can possibly be right. You mention “extraordinary sums of money”, but Puerto Rico only requires $3,000 of liability insurance; the default liability insurance wouldn’t be relevant if “extraordinary sums of money” are involved. It’s possible that Nonlinear had better insurance for their car; but I feel like the concern here should be about them pressuring their employee to break the law, while your comment’s proposed harms would be almost equally bad if everything were legal but Nonlinear didn’t have good supplemental insurance. (In particular, your comment seems to imply that it’s significantly immoral/problematic for anybody to drive in Puerto Rico without additional insurance.)
(Regardless, I think it’s important to note that, even after receiving Nonlinear’s comments, the original post gives driving without a license as an example of something that “could have had severe personal downsides such as jail time in a foreign country”; that’s what they were presumably responding to.)
Just to clarify, from a quick Google, apparently it’s “common” for liability insurance in Puerto Rico to cover up to $300,000 for bodily injury. However, you rightfully point out it’s not legally required to have this much liability coverage.
Maybe this would be a good PSA: get good liability insurance with a lot of coverage in case you ever injure someone in a car crash!
If you crash and injure someone when driving without a license you’ll likely get a much stiffer punishment than if you did have a license.
Also—the whole point of getting a licence is to test that you can drive to a specific standard. Not just “my friends taught me how to drive so it’s fine”. The fact that the penalties for driving without a licence are small doesn’t make it good behaviour.
So, Nonlinear-affiliated people are here in the comments disagreeing, promising proof that important claims in the post are false. I fully expect that Nonlinear’s response, and much of the discussion, will be predictably shoved down the throat of my attention, so I’m not too worried about missing the rebuttals, if rebuttals are in fact coming.
But there’s a hard-won lesson I’ve learned by digging into conflicts like this one, which I want to highlight, which I think makes this post valuable even if some of the stories turn out to be importantly false:
If a story is false, the fact that the story was told, and who told it, is valuable information. Sometimes it’s significantly more valuable than if the story was true. You can’t untangle a web of lies by trying to prevent anyone from saying things that have falsehoods embedded in them. You can untangle a web of lies by promoting a norm of maximizing the available information, including indirect information like who said what.
Think of the game Werewolf, as an analogy. Some moves are Villager strategies, and some moves are Werewolf strategies, in the sense that, if you notice someone using the strategy, you should make a Bayesian update in the direction of thinking the person using that strategy is a Villager or is a Werewolf.
It sounds like you’re claiming something like “all information is valuable information, because even if the information is false you’ve learned something (e.g. that the source is untrustworthy)”. I think this is too strong of a claim. Trying to figure out what’s true amidst lots of falsehoods is very difficult and takes time. Most people in real life aren’t playing perfect Werewolf with a complex Bayesian model that encompasses all hypotheses. Quite the opposite, from what I’ve seen both in myself and others, our natural tendency is to quickly collapse on one hypothesis and then interpret everything through that lens (confirmation bias). I think this is what’s happening with a lot of the reactions to this post, and I don’t think it’s valuable.
If you’d instead said “this post is valuable if you view it as a game of Werewolf, keep your hypothesis space open, and update as new evidence comes in” then I’d be more in agreement. I think this is still a very difficult task though, and I’d rather that Ben had waited for Nonlinear’s counter-evidence and taken that into consideration instead of forcing us to play Werewolf with his post. (Basically, I’m suggesting that Ben do the hard job of playing Werewolf for us. This is explicitly not what he did, as he himself says in his disclaimer of explicitly seeking out anti-Nonlinear evidence.)
Disclaimer: I am friends with Kat and know some of the counter-evidence.
Great point.
Repeating myself from when I first saw this comment:
On an earlier discussion of Nonlinear’s practices, I wrote:
I would also like to share my experience negotiating my salary with Kat when I first joined Charity Science Health, i.e., before we were friends. It was extremely positive. She was very committed to frugality, and she initially offered me the position of Associate Director at a salary of $25K/year, the bottom end of the advertised salary range. We exchanged several long emails discussing the tradeoffs in a higher or lower salary (team morale, risk of value drift, resources available for the core work, counterfactual use of funds, etc.). The correspondence felt like a genuine, collaborative search for the truth. I had concluded that I needed to make at least $45K/year to feel confident I was saving the minimum I would need in retirement, and in the end we agreed on $45K. Subsequently Kat sent me a contract for $50K, which I perceived as a goodwill gesture. My positive experience seems very different from what is reported here.
I’m unclear on how this comment speaks to the content of the post, which is compatible with Kat being a courageous, frugal, and dedicated friend and leader.
I suppose it’s relevant if you want to get a sense of the chances of ending up in a situation reminiscent of the one depicted in this post if you work for Nonlinear.
Are you implying that you don’t believe what’s reported here, because it’s very different, or something else?
Being a close friend of Kat for quite some time, do you believe that your point of view could shed some valuable light on this discussion, or is there a chance folks might see it as an effort to spruce up Kat’s or Nonlinear’s image?
In the interest of keeping things on the level, can you confirm whether you were clued in about this situation before making this post? If so, did you take it upon yourself to dig into the allegations independently? Lastly, have you received any requests or nudges from Kat or other members of the Nonlinear team to drop a favorable comment in this thread?
(crossposted from LessWrong)
This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process of aggregating information, allowing individuals to integrate it, and update, and decide what to do with it.
I think a lot of disagreements in the comments here and on LW stem from people having an implicit assumption that the conversation here is about “should [any particular person in this article] be socially punished?”. In my preferred world, before you get to that phase there should be at least some period focused on “information aggregation and Original Seeing.”
It’s pretty tricky, since in the default world, “social punishment?” is indeed the conversation people jump to. And in practice, it’s hard to have words just focused on epistemic evaluation without getting into judgment, or without speech acts being “moves” in a social conflict.
But, I think it’s useful to at least (individually) inhabit the frame of “what is true, here?” without asking questions like “what do those truths imply?”.
With that in mind, some generally useful epistemic advice that I think is relevant here:
Try to have Multiple Hypotheses
It’s useful to have at least two, and preferably three, hypotheses for what’s going on in cases like this. (Or, generally whenever you’re faced with a confusing situation where you’re not sure what’s true). If you only have one hypothesis, you may be tempted to shoehorn evidence into being evidence for/against that hypothesis, and you may be anchored on it.
If you have at least two hypotheses (and, like, “real ones”, that both seem plausible to you), I find it easier to take in new bits of data, and then ask “okay, how would this fit into two different plausible scenarios”? which activates my “actually check” process.
I think three hypotheses is better than two because two can still end up with “all the evidence weighs in on a one-dimensional spectrum”. Three hypotheses a) help you do ‘triangulation’, and b) help remind you to actually ask “what frame should I be having here? what are other additional hypotheses that I might not have thought of yet?”
Multiple things can be going on at once
If two people have a conflict, it could be the case that one person is at-fault, or both people are at-fault, or neither (i.e. it was a miscommunication or something).
If one person does an action, it could be true, simultaneously, that:
They are somewhat motivated by [Virtuous Motive A]
They are somewhat motivated by [Suspicious Motive B]
They are motivated by [Random Innocuous Motive C]
I once was arguing with someone, and they said “your body posture tells me you aren’t even trying to listen to me or reason correctly, you’re just trying to do a status monkey smackdown and put me in my place.” And, I was like “what? No, I have good introspective access and I just checked whether I’m trying to make a reasoned argument. I can tell the difference between doing The Social Monkey thing and the “actually figure out the truth” thing.”
What I later realized is that I was, like, 65% motivated by “actually wanna figure out the truth”, and like 25% motivated by “socially punish this person” (which was a slightly different flavor of “socially punish” than, say, when I’m having a really tribally motivated facebook fight, so I didn’t recognize it as easily).
Original Seeing vs Hypothesis Evaluation vs Judgment
OODA Loops include four steps: Observe, Orient, Decide, Act
Often people skip over steps. They think they’ve already observed enough and don’t bother looking for new observations. Or it doesn’t even occur to them to do that explicitly. (I’ve noticed that I often skip to the orient step, where I figure out “how do I organize my information? what sort of decision am I about to decide on?”, and don’t actually do the observe step, where I’m purely focused on gaining raw data.)
When you’ve already decided on a schema-for-thinking-about-a-problem, you’re more likely to take new info that comes in and put it in a bucket you think you already understand.
Original Seeing is different from “organizing information”.
They are both different from “evaluating which hypothesis is true”
They are both different from “deciding what to do, given Hypothesis A is true”
Which is in turn different from “actually taking actions, given that you’ve decided what to do.”
I have a sort of idealistic dream that someday, a healthy rationalist/EA community could collectively be capable of raising hypotheses without people anchoring on them, and people could share information in a way you robustly trust won’t get automatically leveraged into a conflict/political move. I don’t think we’re close enough to that world to advocate for it in-the-moment, but I do think it’s still good practice for people individually to be spending at least some of their time in each node of the OODA loop, and tracking which node they’re currently focusing on.
I don’t think the initial goal of this discussion was to punish anyone socially. In my view, the author shared their findings because they were worried about our community’s safety. Then, people in our community formed their own opinions based on what they read.
In the comments, you can see a mix of things happening. Some people asked questions and wanted more information from both the author and the person being accused. Others defended the person being accused, and some just wanted to understand what was going on.
I didn’t see this conversation starting with most people wanting to punish someone. Instead, it seemed like most of us were trying to find out the truth. People may have strong feelings, as shown by their upvotes and downvotes, but I think it’s important to be optimistic about our community’s intentions.
Some people are worried that if we stay impartial for too long, wrongdoers might not face any consequences, which is like letting them “get away with murder,” so to speak. On the other hand, some are concerned about the idea of “cancel culture.” But overall, it seems like most people just want to keep our community safe, prevent future scandals, and uncover the truth.
Based only on the allegations which Nonlinear admits to, which I think we can assume are 100% true, I would:
a) very strongly discourage anyone from working for Emerson Spartz and Kat Woods.
and
b) very strongly encourage CEA and other EA orgs to distance themselves from Nonlinear.
“First; the formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You might think this is a substantial legal risk, but basically it isn’t, as you can see here, the general range of fines for issues around not-having-a-license in Puerto Rico is in the range of $25 to $500, which just isn’t that bad.”
This is illegal. Employers should not be asking employees to do things which are illegal, even if the punishment is a small fine.
“Third; the semi-employee was also asked to bring some productivity-related and recreational drugs over the border for us. In general we didn’t push hard on this. For one, this is an activity she already did (with other drugs). For two, we thought it didn’t need prescription in the country she was visiting, and when we found out otherwise, we dropped it. And for three, she used a bunch of our drugs herself, so it’s not fair to say that this request was made entirely selfishly. I think this just seems like an extension of the sorts of actions she’s generally open to.”
Employers should not be asking “semi-employees” to transport illegal drugs, regardless of context.
Saying you’re “saving the world” does not give you a free pass to ask your employees to break the law.
I really want to emphasise, particularly to younger EAs and EAs in college who want to work for EA orgs, that it is not normal for employers to ask you to do illegal things and you will almost certainly be able to very easily find impactful work with employers who are better than this.
Also, please do not work for employers who want you to live with them and be “a part of their family”—this situation is very unusual for a good reason and leaves you at very high risk of exploitation, abuse and mistreatment.
Part of being successful when running any organisation is being a good employer so you can hire and retain the best talent. Asking your employees to do illegal things is preposterously stupid and means you are an exceptionally bad employer, and means your organisation is probably not going to do a very good job of saving the world. This kind of behaviour also puts EA’s reputation at risk and makes you a massive liability to the rest of us. Please do not do this.
1. In Israel, I’m not allowed to buy melatonin without a prescription.
Also, delivery is expensive and slow from the U.S. Would you react this way if I asked an employee from the U.S. to bring melatonin?
(I never had employees, this is hypothetical)
2. How about crossing the road when there is a red light and no cars around, when going to eat?
Everyone does that here.
My point is: I’m guessing you don’t care about what is strictly legal or not.
I’m guessing you have some other standard.
Like maybe something about abusing power dynamics, or maybe something else
what do you think?
I don’t think employers should tell employees to do illegal things, it’s about both power dynamics and legality.
I would very strongly recommend that employers do not ask employees to illegally move melatonin across borders.
Obviously jaywalking is much less bad and asking your employees to jaywalk is much less bad—but I would still recommend that employers do not ask employees to jaywalk. Generally I’d say that it’s much less bad to ask your employees to do an illegal thing that lots of people do anyway, but I would still recommend that employers not ask employees to do it. (Jaywalking would fit into this category; moving drugs illegally across borders and driving without a license in Puerto Rico would not.)
I’ll add the small caveat that doing illegal things where the org is liable but you personally aren’t is sometimes ok. The example in my head is that supermarkets and restaurants in my country aren’t allowed to open on some days for mostly religious reasons, and some of them break the law and eat the fines. Of course, this doesn’t extend to hurting anyone, committing fraud, or anything actually bad.
I don’t know if that is a great guideline. For example, should we feel obliged to condemn an EA animal welfare org if they ask their employees to violate ag-gag laws?
It seems to me that contextual information is more important than the mere fact of a law being violated. In the ag-gag example, that could be stuff like: Did the employee take the job knowing they would be asked to do this? Did their boss threaten serious retaliation if they didn’t do it?
In general the law is not necessarily well-aligned with doing the most good, or even common sense. See https://www.econlib.org/three-felonies-a-day/
[EDIT: To clarify, it seems quite plausible to me that as a community we should update away from law-breaking on the current margin. But, I think this could in principle be taken too far. I also agree power dynamics are important.]
I think people are overcomplicating this. You should generally follow the law, but to shield against risks that you are being such a stickler in unreasonable ways (trying to avoid “3 felonies a day”), you can just imagine whether uninvolved peers hearing about your actions would think a situation is obviously okay. Some potential ways to think about such peer groups:
What laws people in the country you live in think are absolutely normal and commonplace to break.
For example, bribing police officers is generally illegal, but iiuc in some countries approximately everybody bribes police officers at traffic stops
What laws people in your home country think are illegitimate and thus worth breaking
For example some countries ban homosexuality, but your typical American would not consider it blameworthy to be gay.
What laws other EAs (not affiliated in any way with your organization) think are okay to break.
So far, candidates people gave include ag-gag laws and taking stimulants for undiagnosed ADHD.
FWIW I’m not necessarily convinced that the majority of EAs agree here; I’d like to see polls.
What laws your non-EA friends think are totally okay to break
For example, most college-educated Millennials would not consider downloading papers on Sci-Hub to be blameworthy.
I think a relatively conservative organization should consider it permissible to break the law if and only if every reasonable peer group considers it acceptable, and a relatively liberal but probably still acceptable organization may consider it permissible to break laws if some reasonable peer group considers it acceptable.
In the most problematic examples in question, none of the alleged law-breaking I’m aware of (driving without a license[1], pressuring someone to transport recreational drugs across borders) is likely to be considered acceptable by any reasonable peer group, including other EAs.
This is similarly true for other law-breaking scandals I’ve heard about in the past, including theft, fraud, workplace sexual harassment, etc.
Puerto Rico is in the US
I would object to my employer asking me to be homosexual.
Interesting framework! I agree social norms are important.
A possible problem with this kind of conformity principle is that it can preclude moral progress. Suppose you think a popular law is unjust, and the best way to demonstrate that is to break it publicly. It could be that all the groups you name take the view that an unjust law is good. If everyone followed conformity principles, I think we would have a lot more old unjust laws on the books. (You could try using the third clause about “laws other EAs think is OK to break” as an out here, but there’s a problem—EAs in this very thread are endorsing conformity and advocating against weirdness! Maybe the EA movement was once a bastion of principled independent moral reasoning, but I get the sense that aspect of the movement is weakening.)
In general I don’t think apparent consensus views regarding which laws are OK to break are the output of a high-quality decision process. In fact, I think most people are quite conformist, and what looks like “consensus” is mostly a result of behavioral drift. E.g. if we polled people on the ethics of online music piracy, my guess is that pre-internet, lots of people would say it’s wrong (virtue is cheap). The introduction of Limewire et al. would cause people to rationalize fun and decide it’s OK. The introduction of Spotify et al. would cause an increase in people saying “if I’m paying then so should you, it’s easy”. I would guess that people who do principled moral reasoning regarding the ethics of music piracy are a minority who aren’t driving changes in consensus opinion. [Edit: Nonetheless, individual moral reasoning isn’t necessarily more reliable than consensus, especially given that individuals may be subject to self-serving motives that are nearly impossible to be aware of and take into account.]
I also think people in this thread underrate the importance of a specific individual’s context. For example, suppose I am broke, homeless, and suffering from suicidal depression. I happen to hear a song I really like, and I use Shazam to identify it. I don’t have the money to buy the song, and I think downloading it could help with my suicidal depression. Is there a sense in which pirating the song is more OK for me than it is for Bill Gates? I would say yes. (Perhaps one could even use the “context” angle to argue that the introduction of Spotify does, in fact, change the ethics of music piracy!)
Can you give a probability (conditional upon the world existing, humans being alive, etc) for the following statement
(if you don’t like the operationalization of ethical philosophers as a stand-in for moral progress, I’m happy for you to come up with a better one).
___
Taken a step back, I do want to preserve the “philosophy club” aspect of EA[1]. I think certain EA (ethics) discussions, with their naïveté, willingness to think outside the box, tendency to follow arguments to their logical conclusions, and high decoupling, are at their best quite valuable and insightful in discovering moral innovations and alternative empirical worldviews.
And often you need to think pretty radical ideas before coming up with fairly sensible solutions. For example, without Brian Tomasik’s early work on Wild Animal Suffering, we would not have sensible interventions on humane pesticides and shrimp welfare. Without early figures like Bentham, Singer, Parfit, Bostrom, Yudkowsky, Shulman, etc, being very willing to entertain radical ideas, our movement would be meaningfully quite different.
But I think people need to be careful about “ethical innovation” once you start having significant effects in the world, lead nonprofits, fund projects, run large fintech companies, etc.
I think Kant’s distinction between the public and private use of reason is relevant here. Namely “public use of reason” is what you advocate for or ideas you explore in the public arena. Whereas “private use of reason” is what you actually do in your own life (including both personally and professionally, but especially professionally). It’s a bit confusing because most people have very different intuitions for the relevant bits of public v private axes. But I think the core conclusion in my lights is that you can get most of the benefits[2] of ethical innovation at the discourse stage without having to bite the bullet that you also need companies, governments, etc, to be run by “ethical innovators.”
So in the alleged Nonlinear example, it is one thing for Emerson or Kat, in their public commentator capacities as (eg) EA Forum commenters or op-ed writers, to (eg) advance the position that driving license laws are immoral and classist and we should get rid of them, while being careful that they are not speaking for any organization or in a professional capacity, etc. It is quite another, in their capacities as nonprofit managers, to mandate that an employee or semi-employee illegally drive a car without a license.
Now in the demarcation above, it will seem like I should be happy (morally and epistemically if not intellectually) with “novel” ethical positions advanced on the forum, as the EA forum is a discussion venue rather than an action venue. However, when people discuss bizarre edge cases about breaking the law without (eg) a careful public/private demarcation, in the context of a discussion where long-time forum members have been alleged to break laws in a situation that’s clearly not one of the bizarre edge cases, I worry that both the commentators and onlookers don’t in fact have a clean separation in their heads of what’s okay to say vs. do, including, unfortunately, people with quite a bit of power in absolute terms.
(Also I’m limited on free time and I can’t tell if you’re trolling etc, so I’m probably not going to comment further on this thread, sorry).
Actual philosophers, in comparison, feel relatively more sterile and uncreative, at least in my discussions with them. (Though ofc I might not be clever/charismatic enough to elicit the best opinions out of them; also outgroup homogeneity is a thing.) At least when I talked to analytic philosophy grad students, I’m often reminded of this quote from a profile on Parfit:
A “conformity in action but not in speech” may not be enough to prevent acute moral crises, like the Holocaust. But it’s one of the few non-technological factors that I expect to have a chance at driving steady moral progress, in the long run.
I think the probability of your “year 2300” statement is very low.
One meta-point I’m trying to make here is I don’t think we should be too hasty to derive + enforce very general ethical rules after examining a single case study. Ben’s account of Nonlinear’s behavior is troubling, and I hope the leadership takes a hard look in the mirror, but it’s important for us as a movement to learn the right lessons.
Thanks for bringing up the public vs private use of reason thing. A lot of my thinking on these questions was shaped by reading a book about the US in the antebellum + Civil War period. As I remember, in signing the Emancipation Proclamation, Abraham Lincoln was acting as an ethical innovator. (Advisors suggested he wait until right after a major Union victory to sign the proclamation, in order to better sell the North on the concept.) It does seem to me that recommending “Abraham Lincoln shouldn’t have signed the Emancipation Proclamation” is a pretty serious hit for an ethical rule to take.
(Note that an abolitionist soldier who fought well for the Union in the Civil War would be violating the deontological principle “don’t kill people” in order to produce a tiny shift in the probability of a hypothetical future benefit. And sure enough, in that soldier’s far future, we look back on that soldier as a hero. Furthermore, an analogous “year 2023″ statement would appear to miss the point—many 2023 people think that killing is generally wrong, and also that the abolitionist soldier’s actions are justified in context.)
Another case where the “leaders shouldn’t be moral innovators” principle fails by my lights: is it ethical to persuade people in AI to care about animals to a greater degree than people in the general population do? I would say yes.
Another point re: leaders who innovate morally—as e.g. Holly discusses in this thread, EA has a long history of encouraging weirdness and experimentation. From my perspective, freedomandutility is attempting to innovate on this by making us all sticklers for following the law. And you, as an EA leader, appear to be endorsing this innovation. You might say that innovating by advocating inaction is different than innovating by advocating action, but (a) I’m a tad skeptical of act/omission distinctions and (b) endorsing an asymmetry like this could create a ratchet where EAs act less and less, because leaders are way more comfortable advocating inaction than action.
Re: crisis decisionmaking—my sense is that many EAs feel we are in a crisis, or on the verge of a crisis. So I do think this is a good time to discuss what’s ethically acceptable in a crisis, and what ethical rules would’ve performed well in past crises. (For example, one could argue that in a time of crisis, it is especially important to support rather than undermine your friends & allies, and Nonlinear’s leadership violated this principle.)
Thanks for engaging given your limited free time. I’m eager to read pushback from people who aren’t Linch as well.
I feel like my main actual position here is something like “just be cool bro” and you’re like “what does ‘being cool’ actually mean? For that matter, what does ‘bro’ mean? Isn’t that kinda sexist?” And I’m like “okay here’s one operationalization of “being cool” And here’s an operationalization of “bro” that doesn’t have sexist connotations.” and you’re like “edge case 17 here, Steve Jobs was super cool despite wearing a black turtleneck for many years and believing in homeopathy” and my actual position is like “okay but does that even matter; like what’s your Bayesian update for “asshole” vs “actually supercool on a deep level” on someone who consistently goes around saying that being cool is for scrubs, but okay, here’s another careful attempt to define ‘being cool’ in a way that gets around that edge case” and you’re like “edge case 31” and I’m like “okay I give up.”
Like I’m often on the other side of “precision of language is important” but here I’m not even sure you believe that the disagreements are semantic. I feel like some people (fortunately a minority) in these parts think that social norms need to be given at the level of precision that’s necessary to align an AGI, and I’m like, jesus fuck this is a good way to not have any norms at all.
Sorry, I didn’t mean to antagonize you that way.
I think I’m a somewhat high-scrupulosity person. When people say “EAs should abide by deontological rule X”, I hear: “EAs could get cancelled in the future if they violate rule X” and also: “the point of this deontological rule is that you abide by it in all cases, even in cases where it seems like a bad idea for other reasons”.
Some of the deontological rules people are suggesting in this thread are rules I can think of good reasons to violate—sometimes, what seem to me like very good reasons. So I push back on them because (a) I want people to critique my thinking, so I can update away from violating the proposed rule if necessary (related to your public vs private use of reason point?) and (b) I don’t care to get cancelled in the future for violating a bad rule that EAs came to accept uncritically.
I take the task of moral philosophy to be identifying and debating edge cases, and I find this rather enjoyable even though it can trigger my scrupulosity. But your point that excess debate could result in no norms at all is an interesting one. Maybe we need a concept of “soft norms” which are a “yellow flag” to trigger debate if they’re getting violated.
I appreciate the apology.
To be clear, I never said the word “deontological” in this thread before, and when I searched for it on this post, almost all references are by you, except in a single comment by freedomandutility. I think it’s possible you were overreacting to someone’s poor choice of words that I didn’t understand as literal because the literal understanding is pretty clearly silly. (On the other hand I note that this comment thread started before that comment).
I also think your threat model of what causes cancellation in this community happens to be really poor, if you think it primarily results from the breaking of specific soft taboos even for extremely reasonable and obvious-to-everyone exigencies. It’s possible I have an illusion of transparency here because I’m quite familiar with this community, and maybe you’re really new to it?[1] But I really think you’re vastly overestimating both cancellation risk in general and in this community specifically.
Why? If EAs are so rigid that they literally uncritically follow overly prescriptive rules hashed out in EA Forum comments without allowing for exceptions for extreme exigencies, and they believe this so firmly that they cancel people over it, why do you want to be in this community? To the extent that you think community EA is valuable because it helps you be a better person, have more impact etc, being cancelled from it because people are totally inept is doing you (and the world) a favor. Then you can be free to do more important things rather than be surrounded by ethically incompetent people. [edited this paragraph to tone down language slightly]
I think this is a pretty standard position among philosophically minded people. I disagree with the standard position; I think ethics is already amazingly hard in the mainline case, and longtermism even more so, there’s no reason to overcomplicate things when reality is already complicated enough. My guess is that we are nowhere near philosophically competent enough to be trying to solve the edge cases (especially in the comment threads of unrelated topics) when we don’t even have a handle on the hard problems that are practically relevant.
To be clear, all norms already work this way. Like, I view approximately all norms this way, though in some cases the flags are red rather than yellow and in some cases the debate ought to be before the action (cf your reference to killing being justified during wartime; I’d rather people not kill first and then debate the ethics of it later).
But if you’re really new to this community, why do you care about being cancelled? And also, surely other communities aren’t insanely rigidly deontological? Even religions have concepts like Pikuach nefesh, the idea that (almost) all religious taboos can be violated for sufficiently important exigencies like saving lives.
I tend to be very concerned about hidden self-serving motives in myself and other people. This was my biggest takeaway from the FTX incident.
So regarding “extremely reasonable and obvious-to-everyone exigencies”, and “being cancelled … because people are total morons”—well, it seems potentially self-serving to say
I know you work in longtermist grantmaking—I can’t speak for you, but if I was a grantmaker and someone said that to me during a call, I wouldn’t exactly consider it a good sign. Seems to betray a lack of self-skepticism if nothing else.
Regarding the cluelessness stuff, it feels entangled with the deontological stuff to me, in the sense that one argument for deontological rules is that they help protect you from your own ignorance, and lack of imagination regarding how things could go wrong.
BTW, please don’t feel obligated to continue replying to me—I get the sense that I’m still aggravating you, and I don’t have a clear model for how to not do this.
“In general the law is not necessarily well-aligned with doing the most good”
I agree with this, but I think deontological principles like “don’t ask people who you have power over to break the law” are good and should be followed, even when in specific situations, this might be misaligned with the act which generates the greatest utility in the short term.
There’s a lot of interesting work inside and outside EA which I would recommend, on the relationship between consequentialism and deontology (including stuff about naive consequentialism, rule utilitarianism, etc).
It’s good to think carefully before violating this kind of deontological principle. But I have a hard time endorsing your statement in general.
It seems to me that in the case of e.g. an unjust law, the principle you endorse is fundamentally about reducing expected harm to your employee. If the expected harm to your employee is small relative to the harm averted for some other party, and your employee is willing to take on the risk, I don’t see a good case for sticking with the principle.
Thought experiment: Suppose you’re a wealthy abolitionist living in the northern US prior to the Civil War. You have an abolitionist friend in town who’s currently working as a clerk. They want to help fugitive slaves, and the punishment for doing so happens to be relatively light. Is it acceptable to hire them to help fugitive slaves?
With regard to Linch’s comment in this thread—let’s say that almost everyone in your state thinks helping fugitive slaves is wrong. According to them, it violates sacred property rights and the US Constitution.
An abolitionist movement which condemns you for hiring your friend strikes me as pathologically risk-averse, in a way that will cause slavery to end very slowly or not at all.
I tend to see deontological principles as opportunities to think carefully about what you are doing rather than hard rules, due to examples like the above. I am very interested to see someone make the opposing case. [Edit: An exception would be rule violations that seem suspiciously self-serving. Because it’s so hard to root out self-serving thoughts & behavior in oneself, I think a better approach here might be: Identify some rules that seem good for a 3rd party you don’t especially trust, then follow those rules yourself. My thinking is still developing on this.]
How much do you think that Drew is implicated in this? This is something that is less clear to me from the post.
Disclaimer: I previously interned at Nonlinear.
Oops sorry I’m not really sure why I put his name there, edited to remove, sorry Drew!
Nitpick, but I don’t think this is literally true and I don’t think you literally believe it, fwiw. I think transporting lifesaving drugs is probably permissible and even ethically obligatory under some ethical frameworks, and I suspect if the two of us were to carefully dialogue about edge cases, we’d agree on the acceptability of situations much weaker than “literally necessary to save someone’s life.”
FWIW my intuition is that even if it’s permissible to illegally transport life-saving medicines, you shouldn’t pressure your employee to do so. Anyway I’ve set up a twitter poll, so we’ll see what others think.
It looks like most of the polled people agree with me. :)
I think that you should add an edit removing Drew’s name, for this reason if nothing else. (Happy to expand.) Thank you.
Sorry, edited!
@Ben Pace Can you please add at the top of the post “Nonlinear disputes at least 85 of the claims in this post and intends to publish a detailed point-by-point response.
They also published this short update giving an example of the kind of evidence they plan to demonstrate.”
We keep hearing from people who don’t know this. Our comments get buried, so they think your summary at the bottom contains the entirety of our response, though it is just the tip of the iceberg. As a result, they think your post marks the end of the story, and not the opening chapter.
I look forward to reading your point-by-point response. I suspect you will convince me that some of the events described in this post were characterized inaccurately, in ways that are unflattering to Nonlinear. However, I think it is very unlikely you will convince me that Nonlinear didn’t screw up in several big, important ways, causing significant harm in the process (for reasons along these lines).
I would thus strongly encourage you to also think about what mistakes Nonlinear made, and what things it is worth apologizing for. I think this would be very good for the community, since it would help us gain better insight into what went wrong here, and how to avoid similar situations going forward.
I’ve made an edit at the top.
I appreciate the overall style and tone of the piece. I believe Ben is trying to figure out what happened. With the material he had, he could easily have written a much more damaging hit piece (as some traditional investigative journalists have done in the past).
Some remarks from an old fogey:
1. There are a zillion shitty jobs out there, and there is nothing wrong with either advertising or taking a shitty job so long as everyone is aware it is shitty and the terms are clear upfront.
Against Non-Linear: It is not clear that the interns/whatever were fully aware just how shitty the jobs would be upfront, and they were probably oversold on them. This was probably seen as good business strategy. It isn’t.
Against Alice: Personally, I think these jobs would have had red flags a mile away. I have known people who went to work for obvious MLMs, and I think it is ultimately the responsibility of the person signing the contract to read the fine print and be aware of what they are getting into. If you, as an organization, put out shitty terms, and someone agrees to them, it’s not your fault for making a shitty offer. If you, as an employee, accept work without a clear contract… Don’t do that. Other people may try to exploit you, and the typical company WILL try to exploit you. That is what they do. Your only defense is to not accept it. If you accept that, you agreed to it, so you have to live with the consequences.
Again, probably the shittiness of the job was somewhat covered up. To the extent that is the case, beyond ordinary marketing, that’s on Non-Linear. Personally, I think these jobs would be incredibly obviously shitty. And merely advertising or filling a shitty job is not a crime.
2. Every organization eventually has someone who distorts the truth and could be a future disgruntled employee.
This point is not against Non-Linear or against Alice.
Rather, we should be careful of the low bar Ben set for publicizing this.
I agree that this would be more of a scandal if Non-Linear were bigger, and it’s great to catch this before then, for many reasons.
But if you hire enough people, you will eventually run into someone who distorts the truth. We may all have different base rates for this. On the one hand, it could be seen as costly to complain. On the other hand, work generally kind of sucks—that’s why you have to be paid to do it. (If your work doesn’t suck, consider yourself very lucky.) So people often end up in tension with their employers.
So whatever epistemic standards we set need to account for the fact that, given enough time, there will be a motivated disgruntled employee in every EA organization. The trick is distinguishing between the organizations that are generally rotten and the ones that merely have a disgruntled employee or two. And I’m not sure Ben’s post really gets at this. The talk about setting a low bar is concerning: I don’t need an alarm that goes off whenever there is a story, because given enough time that will be every organization. I need an alarm that can distinguish between the signal and the noise. I care about the false negatives and the false positives.
Personally, I don’t find myself updating on the information in Ben’s post, and I had not previously heard of any issues with Non-Linear. I don’t find myself updating simply because the set-up was so obviously shitty that I’m not surprised. The founders are so clearly obsessed with themselves and brag about how little they pay people for outsourced work—how could it be a good place to work, even before getting to the shared living situation?
Shitty jobs are shitty. Water is wet. I don’t blame Non-Linear for advertising shitty jobs (so long as they weren’t misleading about it). I think it’s a shame that young people sometimes have poor awareness regarding shitty jobs, but I don’t blame Alice for not realizing. I find it hard to blame either party here for acting in their own self-interest, but practically speaking Non-Linear should have advertised the jobs more to people for whom it would be a legitimate step up, rather than people who would only work there if they were confused about its shittiness.
weak datapoint: At one point Kat posted a “job” on FB that, near the end of the linked description, mentioned the position was unpaid. To her credit, the lack of pay was in the actual position description, and when I pushed back she changed the FB post. Her reasoning (note: this is from memory and is probably at least a year old) was that she’d never heard of a volunteer position with such specific requirements, so “job” felt like a better fit.
Both Kat and Emerson are claiming that there have been edits to this post.[1]
I wonder whether an appendix or summary of changes to important claims would be fair and appropriate, given the length of post and severity of allegations? It’d help readers keep up with these changes, and it is likely most time-efficient for the author making the edits to document these as they go along.
@Ben Pace
[Edit: Kat has since retracted her statement.]
I used a diff checker to find the differences between the current post and the original post. There seem to be two:
“Alice worked there from November 2021 to June 2022” became “Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org from around February”
“using Lightcone funds” became “using personal funds”
So it seems Kat’s comment is wrong and Emerson’s is misleading/wrong. They are free to point to another specific edit if it exists.
Update: Kat guesses she was thinking of changes from a near-final draft rather than changes from the first published version.
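For anyone who wants to replicate this kind of check, here is a minimal sketch using Python’s standard-library difflib. (The text strings below are illustrative stand-ins, not the actual contents of either version of the post.)

```python
import difflib

def summarize_edits(original: str, revised: str) -> list[str]:
    """Return just the removed/added lines between two versions of a text."""
    diff = difflib.unified_diff(
        original.splitlines(),
        revised.splitlines(),
        lineterm="",
        n=0,  # zero context lines: show changes only
    )
    # Drop the "---"/"+++" file headers and the "@@" hunk markers,
    # keeping only lines prefixed with a single "-" or "+".
    return [
        line for line in diff
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]

# Illustrative stand-in text for the two versions being compared.
old = "Alice worked there from November 2021 to June 2022\npaid using Lightcone funds"
new = "Alice travelled with Nonlinear from November 2021 to June 2022\npaid using personal funds"
for change in summarize_edits(old, new):
    print(change)
```

With `n=0`, unchanged text is suppressed entirely, so the output is exactly the removed lines (prefixed `-`) and their replacements (prefixed `+`), which is the list-of-differences format quoted above.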
Random anecdata on a very specific instance: I worked with Drew to make a Most Important Century writing prize. I enjoyed our conversations and we worked well together — Drew was nice and ambitious. I heard concerns about Nonlinear at this time, but never had anything suspect come up.
To put it more strongly: I would like to make clear that I have never heard any claims of improper conduct by Drew Spartz (in relation to the events discussed in this post or otherwise).
Thanks, I think taking the time to make this stronger phrasing publicly is quite valuable (and seems to match what everyone else is saying so far). It’s important that we not engage in guilt-by-association.
Agreed. I would have wanted the post itself to make this more clear.
The article alleges he was dating an employee who seems to have been a subordinate, which someone might claim is improper conduct.
Repost from LW:
My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)
I certainly don’t think it suggests he’s a bad actor, but it seems reasonable to consider it improper conduct within a small organization of people living and working together—even if Alice and Chloe don’t see it as an issue. I don’t have a strong view one way or the other, but it seemed worth flagging in the context of your claim.
Appreciate you saying this, Michel. As you can imagine, it’s been rough. Perhaps accidentally, this post seems to often lump me in with situations I wasn’t really a part of.
I similarly had a positive experience with Drew creating The Econ Theory AI Alignment Prize. I was not prompted by anyone to comment on this.
I second this—I have met Drew at multiple conferences and my experiences with him have only been positive. He has always been very nice to talk to. I have also heard others in EA circles tell their anecdotes on him and nobody has anything bad to say whatsoever.
Hi Zeynep, to ensure transparency, could you kindly confirm whether you were prompted by any Nonlinear team members to make this comment or if you were made aware of this discussion by them?
Hi Morpheus, I was not asked by anyone to make the comment! I was made aware of this discussion by another EA who sent me a link to this post. I felt prompted to share my experiences as very little was written about him, yet he was still referenced.
I felt strongly that I had to respond to this, given that my personal experiences with Nonlinear (mostly Kat & Drew) were overall positive.
I do not have much knowledge about past Nonlinear employee experiences (though we helped run a hiring search for an assistant for them as our first gig with HIRe—in the end they didn’t hire anyone) - but I have a highly positive opinion of Kat and Drew. All my interactions with them have been incredibly positive, personally and professionally. I have only met Emerson once, so I have no opinion on him related to this post.
I’ve met Kat Woods while Nonlinear was running a search for someone to lead and incubate an EA recruiting agency. I joined and reached the final round (I recall being told there were 4 finalists) but unfortunately, I wasn’t selected as the top pick. Later on, Kat shared with me and the other 2 finalists that the incubatee changed their mind to focus on another career path. While disclosing that the original funding source might not be available, she encouraged us to try taking on this challenge as she believed we would be doing a lot of good in any case.
She also provided a small amount of seed funding for our time. Seeing as we were in a good position to explore doing work to help EA orgs, the other finalists and I joined forces to form what became High Impact Recruitment (HIRe).
While there was a lower level of infrastructure support in Nonlinear than I expected from an incubator (e.g. legal support, software, fundraising support, etc. - I have provided this feedback to Kat as well for improvement), Kat has always been a great brainstorming partner, grant application reviewer, and coach—which were instrumental in our initial work. Nonlinear also provided enough initial funds for us co-founders to be able to work together while doing proper fundraising. As we got enough resources to run the org on our own, our interactions with Nonlinear reduced.
After several months of doing the recruiting agency work together, the HIRe founders parted ways for other opportunities just before the FTX fiasco. But during the entire time, Nonlinear was nothing but positive, taking time to catch up with us every few weeks as well as getting together at EA conferences. In no way were we forced to do anything, and we were independent in running the project. There was also no manipulation or confusion regarding “ownership”, etc., as Kat encouraged us that it was ours to run (even with some disagreements with her advice.)
[Note that before we decided to take on the recruiting agency work, Kat did not have to spend time and energy encouraging us, and at any point she could have moved on without much issue, as there were no contracts signed (not saying this is a good thing—it would have been better if there were, as these are baseline practices). While there was no paperwork or proper governance provided (we were generally quite experienced and entrepreneurial, so we didn’t really need that much hand-holding) - I can vouch that there was no manipulation or lying of any kind. Discussions were genuine and positive.]
Personally, she was able to persuade me to go all in on using my time and skills to support EA orgs (which still drives me today). And while that alone isn’t necessarily great moral reasoning—taken with all our interactions, it results in my positive assessment of her character.
I think it’s useful to share positive experiences but I also want to note that in almost all cases of proven misconduct, you will be able to find people who had positive experiences in similar situations. People who behave very poorly are not behaving that poorly all the time.
Seems great! I am glad you had good interactions with them. They do seem overwhelmingly positive in their general demeanor, and high energy and excited about projects.
[Disclaimers: My wife Deena works with Kat as a business coach—see my wife’s comment elsewhere on this post. I briefly met Kat and Emerson while visiting in Puerto Rico and had positive interactions with them. My personality is such that I have a very strong inclination to try to see the good in others, which I am aware can bias my views.]
A few random thoughts related to this post:
1. I appreciate the concerns over potential for personal retaliation, and the other factors mentioned by @Habryka and others for why it might be good to not delay this kind of post. I think those concerns and factors are serious and should definitely not be ignored. That said, I want to point out that there’s a different type of retaliation in the other direction that posting this kind of thing without waiting for a response can cause: Reputational damage. As others have pointed out, many people seem to update more strongly on negative reports that come first and less on subsequent follow up rebuttals. If it turned out that the accusations are demonstrably false in critically important ways, then even if that comes to light later the reputational damage to Kat, Emerson, and Drew may now be irrevocable.
Reputation is important almost everywhere, but in my anecdotal experience reputation seems to be even more important in EA than in many other spheres. Many people in EA seem to have a very strong in-group bias towards favoring other “EAs” and it has long seemed to me that (for example) getting a grant from an EA organization often feels to be even more about having strong EA personal connections than for other places. (This is not to say that personal connections aren’t important for securing other types of grants or deals or the like, and it’s definitely not to say that getting an EA grant is only or even mostly about having strong EA connections. But from my own personal experience and from talking to quite a few others both in and out of EA, this is definitely how it feels to me. Note that I have received multiple EA grants in the past, and I have helped other people apply to and receive substantial EA grants.) I really don’t like this sort of dynamic and I’ve low-key complained about it for a long time—it feels unprofessional and raises all sorts of in-group bias flags. And I think a lot of EA orgs feel like they’ve gotten somewhat better about this over time. But I think it is still a factor.
Additionally, it sometimes feels to me that EA Forum dynamics tend to lead to very strongly upvoting posts and comments that are critical of people or organizations, especially if they’re more “centrally connected” in EA, while ignoring or even downvoting posts and comments in the other direction. I am not sure why the dynamic feels like this, and maybe I’m wrong about it really being a thing at all. Regardless, I strongly suspect that any subsequent rebuttal by Nonlinear would receive significantly fewer views and upvotes, even if the rebuttal were actually very strong.
Because of all this, I think that the potential for reputational harm towards Kat, Emerson, and Drew may be even greater than if this were in the business world or some other community. Even if they somehow provide unambiguous evidence that refutes almost everything in this post, I would not be terribly surprised if their potential to get EA funding going forward or to collaborate with EA orgs was permanently ended. In other words, I wouldn’t be terribly surprised if this post spelled the end of their “EA careers” even if the central claims all turned out to be false. My best guess is that this is not the most likely scenario, and that if they provide sufficiently good evidence then they’ll be most likely “restored” in the EA community for the most part, but I think there’s a significant chance (say 1%-10%) that this is basically the end of their EA careers regardless of the actual truth of the matter.
Does any of this outweigh the factors mentioned by @Habryka? I don’t know. But I just wanted to point out a possible factor in the other direction that we may want to consider, particularly if we want to set norms for how to deal with other such situations going forward.
2. I don’t have any experience with libel law or anything of the sort, but my impression is that suing for slander over this kind of piece is very much within the range of normal responses in the business world, even if in the EA world it is basically unheard of. So if your frame of reference is the world outside of EA then suing seems at least like a reasonable response, while if your frame of reference is the EA community then maybe it doesn’t. I’ll let others weigh in on whether my impressions on this are correct, but I didn’t notice others bring this up so I figured I’d mention it.
3. My general perspective on these kinds of things is that… well, people are complicated. We humans often seem to have this tendency to want our heroes to be perfect and our villains to be horrible. If we like someone we want to think they could never do anything really bad, and unless presented with extremely strong evidence to the contrary we’ll look for excuses for their behavior so that it matches our pictures of them as “good people”. And if we decide that they did do something bad, then we label them as “bad people” and retroactively reject everything about them. And if that’s hard to do we suffer from cognitive dissonance. (Cf. halo effect.)
But the reality, at least in my opinion, is that things are more complicated. It’s not just that there are shades of grey, it’s that people can simultaneously be really good people in some ways and really bad people in other ways. Unfortunately, it’s not at all a contradiction for someone to be a genuinely kind, caring, supportive, and absolutely wonderful person towards most of the people in their life, while simultaneously being a sexual predator or committing terrible crimes.
I’m not saying that any of the people mentioned in this post necessarily did anything wrong at all. My point here is mostly just to point out something that may be obvious to almost all of us, but which feels potentially relevant and probably bears repeating in any case. Personally I suspect that everybody involved was acting in what they perceived to be good faith and are / were genuinely trying to do the right thing, just that they’re looking at the situation through lenses based on very different perspectives and experiences and so coming to very different conclusions. (But see my disclaimer at the beginning of this comment about my personality bias coloring my own perspective.)
Important update. Kat has now made this post.
You can probably remove the
?fbclid=....
from the link. That’s a click ID added by Facebook for its own tracking.

Edited per your suggestion.
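Stripping these tracking parameters can also be done programmatically; here’s a minimal sketch using only Python’s standard library (the example URL and the parameter list are illustrative assumptions, not taken from the thread):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_tracking(url, params=("fbclid", "utm_source", "utm_medium", "utm_campaign")):
    """Remove known tracking query parameters from a URL, keeping the rest."""
    parts = urlsplit(url)
    # Keep only query pairs whose key is not a known tracking parameter.
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in params]
    return urlunsplit(parts._replace(query=urlencode(kept)))

# Example with a made-up link:
print(strip_tracking("https://example.com/post?id=7&fbclid=AbC123"))
# → https://example.com/post?id=7
```

Parsing the query string properly (rather than chopping the URL at `?`) preserves any legitimate parameters the link needs.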
A note on EA posts as (amateur) investigative journalism:
When passions are running high, it can be helpful to take a step back and assess what’s going on here a little more objectively.
There are all different kinds of EA Forum posts, and we evaluate them using different criteria. Some posts announce new funding opportunities; we evaluate these in terms of brevity, clarity, relevance, and useful links for applicants. Some posts introduce a new potential EA cause area; we evaluate them in terms of whether they make a good empirical case for the cause area being large in scope, neglected, and tractable. Some posts raise theoretical issues in moral philosophy; we evaluate those in terms of technical philosophical criteria such as logical coherence.
This post by Ben Pace is very unusual, in that it’s basically investigative journalism, reporting the alleged problems with one particular organization and two of its leaders. The author doesn’t explicitly frame it this way, but in his discussion of how many people he talked to, how much time he spent working on it, and how important he believes the alleged problems are, it’s clearly a sort of investigative journalism.
So, let’s assess the post by the usual standards of investigative journalism. I don’t offer any answers to the questions below, but I’d like to raise some issues that might help us evaluate how good the post is, if taken seriously as a work of investigative journalism.
Does the author have any training, experience, or accountability as an investigative journalist, so they can avoid the most common pitfalls, in terms of journalist ethics, due diligence, appropriate degrees of skepticism about what sources say, etc?
Did the author have any appropriate oversight, in terms of an editor ensuring that they were fair and balanced, or a fact-checking team that reached out independently to verify empirical claims, quotes, and background context? Did they ‘run it by legal’, in terms of checking for potential libel issues?
Does the author have any personal relationship to any of their key sources? Any personal or professional conflicts of interest? Any personal agenda? Was their payment of money to anonymous sources appropriate and ethical?
Were the anonymous sources credible? Did they have any personal or professional incentives to make false allegations? Are they mentally healthy, stable, and responsible? Does the author have significant experience judging the relative merits of contradictory claims by different sources with different degrees of credibility and conflicts of interest?
Did the author give the key targets of their negative coverage sufficient time and opportunity to respond to their allegations, and were their responses fully incorporated into the resulting piece, such that the overall content and tone of the coverage was fair and balanced?
Does the piece offer a coherent narrative that’s clearly organized according to a timeline of events, interactions, claims, counter-claims, and outcomes? Does the piece show ‘scope-sensitivity’ in accurately judging the relative badness of different actions by different people and organizations, in terms of which things are actually trivial, which may have been unethical but not illegal, and which would be prosecutable in a court of law?
Does the piece conform to accepted journalistic standards in terms of truth, balance, open-mindedness, context-sensitivity, newsworthiness, credibility of sources, and avoidance of libel? (Or is it a biased article that presupposed its negative conclusions, aka a ‘hit piece’, ‘takedown’, or ‘hatchet job’?)
Would this post meet the standards of investigative journalism that’s typically published in mainstream news outlets such as the New York Times, the Washington Post, or the Economist?
I don’t know the answers to some of these, although I have personal hunches about others. But that’s not what’s important here.
What’s important is that if we publish amateur investigative journalism in EA Forum, especially when there are very high stakes for the reputations of individuals and organizations, we should try to adhere, as closely as possible, to the standards of professional investigative journalism. Why? Because professional journalists have learned, from centuries of copious, bitter, hard-won experience, that it’s very hard to maintain good epistemic standards when writing these kinds of pieces, it’s very tempting to buy into the narratives of certain sources and informants, it’s very hard to course-correct when contradictory information comes to light, and it’s very important to be professionally accountable for truth and balance.
The answer to many of your questions is no; I have little prior professional experience at this sort of investigation! (I had also never run an office before Lightcone Office, never run a web forum before LessWrong, and never run a conference before EAGxOxford 2016.)
My general attitude to doing new projects that I think should be done and nobody else is doing them is captured in this quote by Eliezer Yudkowsky that I think about often:
PS for those folks who disagree-voted with my post:
My key takeaway was ‘if we publish amateur investigative journalism in EA Forum, especially when there are very high stakes for the reputations of individuals and organizations, we should try to adhere, as closely as possible, to the standards of professional investigative journalism.’
Do you disagree with that conclusion?
Or with some other specific aspect of what I wrote?
Genuinely curious.
Let me justify my complete disagreement.
I read your comment as applying insanely high quality requirements to what’s already an absolutely thankless task. The result of applying your standards would be that the OP would not get written. In a world where criticism is too expensive, it won’t get produced. This is good if the criticism is substance-less, but bad if it’s of substance.
Also, professional journalists are paid for their work. In case of posts like these, who is supposed to pay the wages and provide the manpower to fulfill requirements like “running it by legal”? Are we going to ask all EA organisations to pay into a whistleblower fund, or what?
Also, for many standards and codes of ethics, their main purpose is not to provide a public good, or to improve epistemics, but to protect the professionals themselves. (For example, I sure wish doctors would tell patients if any of their colleagues should be avoided, but this is just not done.) So unequivocally adhering to such professional standards is not the right goal to strive for.
I also read your comment as containing a bunch of leading questions that presupposed a negative conclusion. Over eight paragraphs of questions, you’re questioning the author and his sources, but the only time you question the source of the investigation is when it puts them in a positive light. Thus I found the following phrasing disingenuous: “I don’t know the answers to some of these, although I have personal hunches about others. But that’s not what’s important here.”
Overall, I would be more sympathetic towards your perspective if the EA Forum was drowning in this kind of, as you call it, amateur investigative journalism. But I don’t think we suffer from an oversupply. To the contrary, we could’ve used a lot more of that before FTX blew up.
Finally, instead of the decision-making algorithm of judging by the standards of professional investigative journalism, I suggest an alternative algorithm more like “does this standard make outcomes like FTX more or less likely”. I think your suggestion makes it more likely.
This seems worth considering. Or, considering how concentrated EA funding is anyway, having an independent org funded by EA funders fulfilling this role.
I disagree with that conclusion. For example, I think it’s fine to investigate something and write up your conclusions without having training as an investigative journalist, even if your conclusions make someone else look bad.
So, you don’t think amateur investigative journalism should even try to adhere to the standards of professional investigative journalism? (That’s the crux of my argument—I’m obviously not saying that everybody needs to be a trained investigative journalist to publish these kinds of pieces on EA Forum)
That’s not what I said. I said “I think it’s fine to investigate something and write up your conclusions without having training as an investigative journalist” in response to the first thing you proposed as a way to evaluate the piece: “Does the author have any training, experience, or accountability as an investigative journalist, so they can avoid the most common pitfalls, in terms of journalist ethics, due diligence, appropriate degrees of skepticism about what sources say, etc?”
I don’t know what the standards of professional investigative journalism are, so I’m unable to say whether amateur investigative journalism should try to adhere to them.
[EDIT: I can say what I think about the standards you propose in replies to this comment]
If Ben wants to assume liability for libel lawsuits, I don’t see why he should be prevented from doing so. In the domain of professional investigative journalism, I can see why a company would have this standard, since the company may not want to be held liable for things an individual journalist rashly said, but that strikes me as inapplicable in this case.
(Incidentally, it seems like this is probably a standard of professional investigative journalism that I don’t think amateur investigative journalism should attempt to adhere to)
These seem like reasonable questions to ask. I whole-heartedly agree that such amateur journalists should only make payments that are appropriate and ethical—in fact, this strikes me as tautological.
I’m not exactly sure what this means, not being aware of what those standards are. It does strike me that IIUC those venues typically attempt to cover issues of national or international importance (or in the case of the NYT and WaPo, issues of importance to New York City or Washington, DC), and that’s probably the wrong bar for importance for whether someone should publish something on the EA forum or LessWrong.
Anyway, hope these responses satisfy your curiosity!
A version of this focusing on reliability of the investigation, quality of the evidence, etc is a much more plausible version though.
Definitely more plausible, but as a rule, “whenever you engage in some risky activity, you should do it to the standards of the top organizations who do it” doesn’t seem a priori plausible.
I think the first two questions make sense as good criteria (altho criteria that are hard to judge externally). As for the last question, I think somebody could be depressed and routinely show up late to events while still being a good anonymous source, altho for some kinds of mental unhealth, instability, and irresponsibility, I see how they could be disqualifying.
I think most of us have been in situations where different people have told us different things about some topic, and those different people have had different degrees of credibility and conflict of interest? At any rate, I’m more interested in whether the piece is right than whether the author has had experience.
I think organization is a virtue, but not a must for a piece to be accurate or worth reading.
This strikes me as a good standard.
I think it’s fine to attempt to do these sorts of things yourself, as long as you don’t make serious errors, and as long as you correct errors that pop up along the way.
As a consumer of journalism, it strikes me that different venues have different such standards, so I’m not really sure what your first question is supposed to mean. Regarding your parenthetical, I think presupposing negative (or positive!) conclusions is to be avoided, and I endorse negatively judging pieces that do that.
Given the prominence of the comments sections in the venues where this piece has been published, I’d say allowing the targets to comment satisfies the value expressed by this. At any rate, I do think it’s good to incorporate responses from the targets of the coverage (as was done here), and I think that the overall tone of the coverage should be fair. I don’t know what “balance” is supposed to convey beyond fairness: I think that responses from the targets would ideally be reported where relevant and accurate, but otherwise I don’t think that e.g. half the piece should have to be praising the targets.
I disagree-voted because:
I generally dislike “hew to The Established Code of This Profession”, as opposed to “this group thought about this a lot and underwent a lot of trial by fire and came up with these specific guidelines, and I can articulate the costs of benefits of individual rules.”
Investigative journalism doesn’t strike me as particularly ethical, so their code doesn’t seem to be working that well.
“When passions are running high, it can be helpful to take a step back and assess what’s going on here a little more objectively” is a strong frame you haven’t earned. I would always object to “I’m the cool objective one” as a reason to believe someone absent evidence, but I especially dislike that you made such a claim implicitly.
To answer this question:
I had never heard Alice or Chloe’s names before Kat told me them the first time. I have a fairly strong second-degree connection with Chloe. I had met Kat once before she visited the Lightcone Offices, having one ~20 min conversation with her at Palmcone, an event Lightcone ran in the Bahamas. I had never met Emerson or Drew. I interviewed many people, for most of them we’d never spoken before and for some of them we had a bit. A few of them had spent time at the Lightcone Offices. Wishing them all well, I wouldn’t have described any of the interviewees as “my friend” before the call. Some chance I am forgetting someone here.
Also later on in the investigation I noticed that one friend of mine had been somewhat close to being hired at Nonlinear, and at the time I had given them some small recommendation to accept the job. I kind of wish I had done this in some valiant defense of my friend’s honor and potential harm, but I have to say I basically didn’t think about that aspect at all and had mostly forgotten that it happened.
I appreciate the frame of this post and the question it proposes; it’s worth considering. The questions I’d want to address before fully buying it, though, are:
1) Are the standards of investigative journalism actually good for their purpose? Or did they get distorted along the way, for the same reasons lots of regulated/standardized things do (e.g. building codes)?
2) Supposing they’re good for their purpose, do they really apply not in mainstream media, but rather in a smaller community?
I think in answering (2), we really do have a tricky false positive/false negative tradeoff. If you raise the bar for sharing critical information, you increase the likelihood of important info not getting shared. If you lower the bar, you increase the likelihood of false things getting out.
Currently, I think we should likely lower the bar; anyone (not saying you actually are) advocating higher levels of rigor before sharing is mistaken. EA has limited infrastructure for investigating and dealing with complaints like this (I doubt Ben/Lightcone collectively would have consciously decided upfront that it was worth 150 hours of Ben’s time; it more just happened/snowballed). We don’t have good means of soliciting and propagating such information, or of getting things adjudicated. Given that, I think someone writing a blog post is pretty good, and pretty valuable.
If I’d been the one investigating and writing, I think I’d have published something much less thoroughly researched after 10-15 hours to say “I have some bad critical info I’m pretty sure of that’s worth people knowing, and I have no better way to get the right communal updates than just sharing”.
You seem to be setting the bar way too high.
I admire all of the time and effort that Ben put into writing this post. If the burden was where you suggested, then these kinds of posts would never end up being written.
I think he probably should have waited a week, but I suspect Nonlinear would have come out looking very bad regardless.
Disclaimer: Previously interned for Nonlinear.
Time and effort invested in writing a post have little bearing on the objectivity of the post, when it comes to adjudicating what’s really true in ‘he said/she said’ (or ‘she said/she said’) cases.
If people have an agenda, they might invest large amounts of time and energy into writing something. But if they’re not consciously following principles of objective reporting (eg as crystallized in the highest ideals of investigative journalism), what they write might be very unbalanced.
We are all familiar with many, many cases of this in partisan news media from the Left and the Right. Writers with an agenda routinely invest hundreds of hours into writing pieces that end up being very biased.
It reveals a lot that you ‘suspect Nonlinear would have come out looking very bad regardless’. That suggests that Ben’s initial framing of this narrative will, in fact, tend to overwhelm any counter-evidence that Nonlinear can offer—and maybe he should have waited longer, and tried harder, to incorporate their counter-evidence before publishing this.
Note that I am NOT saying that Ben definitely had a hidden agenda, or definitely was biased, or was acting in bad faith. I’m simply saying that we, as outsiders, do not know the facts of the matter yet, and we should not confuse the amount of time invested in writing something with the objectivity of the result.
Thanks for this writeup; I’m still undergoing various updates based on the info above and responses from Nonlinear.
One thing I do want to comment on is this:
I agree that it was a bad message to send. I agree that people shouldn’t make it hard for others who have a stake in something to learn about bad behavior from others involved.
But I think it’s actually a bit more complex if you consider the 0 privacy norms that might naturally follow from that, and I can kind of understand where Kat is (potentially) coming from in that message. This doesn’t really apply if Nonlinear was actually being abusive, of course, only if they did things that most people would consider reasonable but which felt unfair to the recipient.
What I mean is basically that it can be tough to know how to act around people who might start shit-talking your organization when them doing so would be defecting on a peace treaty at best, and abusing good-will at worst. And it’s actually generally hard to know if they’re cognizant of that, in my experience.
This is totally independent of who’s “right” or “wrong,” and I have 0 personal knowledge of the Nonlinear stuff. But there are some people who have been to summer camps that we’ve had the opportunity to put on blast about antisocial things they’ve done that got them removed from the ecosystem, but we try to be careful to only do that when it’s *really* egregious, and so often chose not to because it would have felt like too much of an escalation for something that was contained and private...
...but if they were to shit-talk the camps or how they were treated, that would feel pretty bad from my end in the “Well, fuck, I guess this is what we get for being compassionate” sense.
Many people may imagine it would be a better world if everyone’s antisocial acts were immediately and widely publicized, but in reality what I think would result is a default stance of “All organizations try to ruin people’s reputations if they believe they did something even slightly antisocial so that they can’t harm their reputation by telling biased stories about them first,” and I think most people would actually find themselves unhappy with that world. (I’m not actually sure about that, though it seems safer to err on the side of caution.)
It can sound sinister, or like a bad power dynamic, coming from an organization to an individual, but if an individual genuinely doesn’t seem to realize that the thing holding the org back isn’t primarily a mutual worry of negative reputation harm but something like compassion and general decency norms, it might feel necessary to make that explicit… though of course making it explicit comes off as a threat, which is worse in many ways even if it could have been implicitly understood that the threat of reputation harm existed just from the fact that the organization no longer wants you to work with them.
There are good reasons historically why public bias is in the favor of individuals speaking out against organizations, but I think most people who have worked in organizations know what a headache it can be to deal with the occasional incredibly unreasonable person (again, not saying that’s the case here, just speaking in general), and how hard it is to determine how much to communicate to the outside world when you do encounter someone you think is worse than just a “bad fit.” I think it’s hard to set a policy for that which is fair to everyone, and am generally unsure about what the best thing to do in such cases is.
This is a short response while I write up something more substantial.
The true story is very different than the one you just read.
Ben Pace purposefully posted this without seeing our evidence first, which I believe is unethical and violates important epistemic norms.
He said, “I don’t believe I am beholden to give you time to prepare.”
We told him we have incontrovertible proof that many of the important claims were false or extremely misleading. We told him that we were working full-time on gathering the evidence to send him.
We told him we needed a week to get it all together because there is a lot of it. Work contracts, receipts, chat histories, transcripts, etc.
Instead of waiting to see the evidence, he published. I feel like this indicates his lack of interest in truth.
He did this despite there being no time sensitivity to this question and working on it for months. Despite him saying that he would look at the evidence.
I’m having to deal with one of the worst things that’s ever happened to me. Somebody who I used to care about is telling lies about me to my professional and social community that make me seem like a monster. And I have clear evidence to show that they’re lies.
Please, if you’re reading this, before signal boosting, I beg you to please reserve judgment until we have had a chance to present our evidence.
Kat, I am really sorry about the severe emotional difficulty. It makes sense that having this post be public would be an extremely challenging thing to deal with, all the more so if you have decisive contrary evidence. I will be interested in engaging with whatever you present, once you have the opportunity.
I think it is important to say, as one of the people who Ben interviewed: my very strong impression has been that Ben is interested in the truth, and that he is acting in good faith. My guess is that if you have strong, contrary evidence regarding the most important claims, then Ben will engage with this evidence with an open mind and will signal boost if relevant.
It appears that you were well-prepared to resort to legal threats against the author. Moreover, it seems you were fully aware of the situation, as evidenced by your statements regarding damaging the employee’s reputation. If the allegations being made were indeed false at that time, why didn’t you take any action to address them back then?
Regarding your statement that “many of the important claims were false or extremely misleading,” it’s worth noting that “many” doesn’t necessarily equate to “all.” Could you please specify which claims are accurate?
As for your assertion that there was “no time sensitivity,” it’s important to acknowledge that the time sensitivity in this case revolved around publishing the article before you initiated or threatened legal action, which you ultimately did. Therefore, it appears quite reasonable for the author to publish the article on shorter notice given the circumstances.
In reference to your statement, “I’m having to deal with one of the worst things that’s ever happened to me. Somebody who I used to care about is telling lies about me to my professional and social community that make me seem like a monster,” did one of your former employees not confirm having a negative experience with you on this forum as well? It appears that more than just the two individuals mentioned in the article above had negative experiences.
What has happened to this forum? A member of our community is dealing with one of the worst things that’s ever happened to her and is literally begging for simply the opportunity to share her side of the story and she’s getting downvoted?
I’m not trying to defend Kat. In fact, I think we generally don’t have that great an opinion of each other—I really have no vested interest in protecting her reputation. But at least let her defend herself.
No one is stopping Nonlinear from presenting any arguments. It’s a standard Forum norm that people can post at any time—there’s never been any requirement to give an organisation extra time to prepare. It can be courteous.
And fwiw, Kat, I haven’t read the post. I read the beginning and end, which I hope has given me enough context to understand what kind of post it is. I intend not to read the whole post until your response is ready.
A brief note on defamation law:
The whole point of having laws against defamation, whether libel (written defamation) or slander (spoken defamation), is to hold people to higher epistemic standards when they communicate very negative things about people or organizations—especially negative things that would stick in readers’ or listeners’ minds in ways that would be very hard for subsequent corrections or clarifications to counteract.
Without making any comment about the accuracy or inaccuracy of this post, I would just point out that nobody in EA should be shocked that an organization (e.g. Nonlinear) that is being libeled (in its view) would threaten a libel suit to deter the false accusations (as they see them), to nudge the author (e.g. Ben Pace) towards making sure that their negative claims are factually correct and contextually fair.
That is the whole point and function of defamation law: to promote especially high standards of research, accuracy, and care when making severe negative comments. This helps promote better epistemics, when reputations are on the line. If we never use defamation law for its intended purpose, we’re being very naive about the profound costs of libel and slander to those who might be falsely accused.
EA Forum is a very active public forum, where accusations can have very high stakes for those who have devoted their lives to EA. We should not expect that EA Forum should be completely insulated from defamation law, or that posts here should be immune to libel suits. Again, the whole point of libel suits is to encourage very high epistemic standards when people are making career-ruining and organization-ruining claims.
Whatever its legitimate uses, defamation law is also an extremely useful cudgel that bad actors can, and very frequently do, use to protect their reputations from true accusations. The cost in money, time and risk of going through a defamation trial is such that threats of such can very easily intimidate would-be truth-tellers into silence, especially when the people making the threat have a history of retaliation. Making such threats even when the case for defamation seems highly dubious (as here), should shift us toward believing that we are in the defamation-as-unscrupulous-cudgel world, and update our beliefs about Nonlinear accordingly.
Whether or not we should be shocked epistemically that Nonlinear made such threats here, I claim that we should both condemn and punish them for doing so (within the extent of the law), and create a norm that you don’t do that here. I claim this even if Nonlinear’s upcoming rebuttal proves to be very convincing.
I don’t want a community where we need extremely high burdens of proof to publish true bad things about people. That’s bad for everyone (except the offenders), but especially for the vulnerable people who fall prey to the people doing the bad things because they happen not to have access to the relevant rumor mill. It’s also toxic to our overall epistemics as a community, as it predictably and dramatically skews the available evidence we have to form opinions about people.
Agreed on all counts.
In the broader uncooperative world, you don’t generally give organizations you’re criticizing the opportunity to review a draft and prepare a response. When organizations granted this opportunity respond by threatening to sue, especially if they wouldn’t actually sue but just would prefer it not be published, that pushes authors away from offering the courtesy.
I’ll quote Emerson Spartz on this one:
I agree there are some circumstances under which libel suits are justified, but the net-effect on the availability of libel suits strikes me as extremely negative for communities like ours, and I think it’s very reasonable to have very strong norms against threatening or going through with these kinds of suits. Just because an option is legally available, doesn’t mean that a community has to be fine with that option being pursued.
This, in-particular, strikes me as completely unsupported. The law does not strike me as particularly well-calibrated about what promotes good communal epistemics, and I do not see how preventing negative evidence from being spread, which is usually the most undersupplied type of evidence already, helps “promote better epistemics”. Naively the prior should be that when you suppress information, you worsen the accuracy of people’s models of the world.
As a concrete illustration of this, libel law in the U.S. and the U.K. function very differently. It seems to me that U.S. law has a much better effect on public discourse, because suits are substantially harder to actually bring. It is also very hard to sue someone in a foreign court for libel (e.g. a U.S. citizen suing a German citizen is very hard).
This means we can’t have a norm that generically permits libel suits, since U.K. libel suits follow a very different standard than U.S. ones, and we have to decide for ourselves where our standards for this kind of information control lie.
IMO, both U.S. and U.K. libel suits should be very strongly discouraged, since I know of dozens of cases where organizations and individuals have successfully used them to prevent highly important information from being propagated, and I think approximately no cases where they did something good (instead, organizations that frequently have to deal with libel suits mostly just leverage loopholes in libel law that give them approximate immunity, even when making very strong and false accusations, usually with the clarity of the arguments and the transparency of the evidence taking a large hit).
Naive idea (not trying to resolve anything that already happened) :
Have people declare publicly if they want, for themselves, a norm where you don’t say bad things about them and they don’t say bad things about you.
If they say yes then you could take it into account with how you filter evidence about them.
Per https://www.dmlp.org/legal-guide/proving-fault-actual-malice-and-negligence (h/t kave):
Ben says he spent “100-200 hours” researching this post, which is way beyond the level of thoroughness we should require for criticizing an organization on LessWrong or the EA Forum. I think it’s pretty clear that the post reflects this work: whether or not the post is wrong (e.g., maybe Ben was tricked by Alice and Chloe), it’s obvious that Ben did a supererogatory amount of fact-checking, investigating, and collecting takes from multiple parties.
I think there should be a strong norm against threatening people with libel merely for saying a falsehood; the standard should at minimum be that you have good reason to think the person is deliberately lying or indulging in tabloid-newspaper levels of bullshit.
I think the standard should be way higher than that, given the chilling effect of litigiousness; but in this case I think it’s sufficient to say that Ben clearly isn’t lying or flippantly disregarding the truth (whether or not he got some of the facts wrong), so threatening him with a lawsuit is clearly inappropriate. The standard for libel lawsuits needs to be way higher than “I factually disagree with something you said (that makes me look bad)”.
It is indeed high stakes! But in my opinion they have opted in to this sort of accusation being openly stated. Many hundreds or even thousands of people have given their lives and efforts to causes and projects led by high-status people in EA, often on the grounds that it is “high trust” and the people are well-intentioned. Once you are taking those resources — for instance having a woman you talked to once at an EAG fly out and live with you and work for you while nomadically traveling and paying her next to nothing, or doing the same via a short hiring process — then as soon as someone else in that group sees credible evidence that “Wait, these people look to me like they have taken advantage of this resource and really hurt some people and intimidated them into covering it up,” then it behooves them to say so loud and clear!
Perhaps you do not believe this is true of EA circles, and being an older person in these circles generally correlates (quite wisely) with being on the more hesitant side of giving your full life to whatever a person currently in EA leadership randomly thinks you should do. Nonetheless I think two younger people here have been chewed up and spat out, and I think it’ll happen again, because of people resting on this inaccurately high level of trust. So I’ll say it as loud as you like :)
To some extent, I agree with this, but I also think it overlooks an important component of how defamation law is used in practice — which is not to hold people to high epistemic norms but instead to scare them out of harming your reputation regardless of the truth. This is something folks who work on corporate campaigns for farmed animal welfare run into all the time. And, because our legal system is imperfect, it often works. Brian Martin has a good write-up on the flaws in our legal system that contribute to this:
I’m not saying this is what’s happening here — I have no idea about the details of any of these allegations. But what if someone did have additional private information about Nonlinear or the folks involved? Unless they are rich or have a sophisticated understanding of the law, the scary lawyer talk from Nonlinear here might deter them from talking about it at all, and I think that’s a really bad epistemic norm. This isn’t to say “the EA Forum should be completely insulated from defamation law” or anything, but in a high-trust community where people will respond to alternatives like publicly sharing counterevidence, threatening lawsuits seems like it might hinder, rather than help, epistemics.
I want to point out that the existence of a libel law that is expensive to engage with, does practically nothing against the posting of anonymized callout posts. You can’t sue someone you can’t identify.
Love it or hate it: the more harshly libel law is enforced, the more I expect similar things to be handled through fully-anonymous or low-transparency channels, instead of high-transparency ones. And in aggregate, I expect an environment high on libel suits, to disincentivize transparent behavior or highly specific allegations (which risks de-anonymization) on the part of accusers, more strongly than it incentivizes epistemic carefulness.
This is one reason to be against encouraging highly litigious attitudes, that I haven’t yet seen mentioned, so I thought I’d briefly put it out there.
I am not a lawyer but have read about defamation law and asked lawyers questions about it. I don’t believe your description of defamation law is as clear cut as you’re making it out to be.
The standard for fault in defamation cases involving private figures is that the defamer had to be “negligent.” That is, they have to have failed to do something they were required to do. Negligence is a vague standard, and it is up to the jury to decide what that means. Framing the point of defamation law as “encourage very high epistemic standards” is just too strong a statement. A jury could interpret it that way, but a jury could interpret it as a much much lower standard.
Furthermore, in my view, basic elements of this post make it a weak case, regardless of whether the claims within it are true or false. Ben is not stating these claims as undisputed, unqualified facts. He is reporting information others have shared with him. It’s only straightforward defamation if he is just making up what people said to him. It’d be more in the defamation camp if Ben himself was saying “Nonlinear mistreated me.”
It seems absolutely inappropriate to me for Nonlinear to threaten to sue in this case. This is a tactic abusers use, and high-integrity people pursue it in much narrower cases.
No, a kneejerk threat of libel still seems to me really bad, whether it’s legally permissible or not. It seems to do far worse epistemic damage than the thing it seeks to mitigate.
If people do this regularly I’d give to a legal defense fund for people to be able to post criticisms of org founders without fear.
If we aren’t able to criticise non-profits without fear of libel, then how exactly are we supposed to hold each other to account?
This seems particularly clear in the case of non-anonymous posts like Ben’s. Ben posted a thing that risks damaging Nonlinear’s reputation. In the process, he’s put his own reputation at risk: Nonlinear can publicly respond with information that shows Ben was wrong (and perhaps negligent, unfair, etc.), causing us to put a lot less stock in Ben’s word next time around.
Alice and Chloe are anonymous, but by having a named individual vouch for them to some degree, we create a situation where ordinary reputational costs can do a good job of incentivizing honesty on everyone’s part.
Thanks for making me (us?) a bit less confused about legal things!
To share some anecdotal data: I personally have had positive experiences doing regular coaching calls with Kat this year and feel that her input has been very helpful.
I would encourage us all to put off updating until we also get the second side of the story—that generally seems like good practice to me whenever it is possible.
Hello Alexandra, to ensure transparency, could you kindly confirm whether you were prompted by Kat or other Nonlinear team members to make this comment or if you were made aware of this discussion by them?
Under all circumstances this is just a terrible day for EA. If the accusations are even half-accurate then I am appalled. If the accusations don’t hold water, then I am also appalled.
I don’t have an informed opinion as to whether it was correct to publish without waiting for Nonlinear to prepare a response. I’m leaning towards thinking it was the right decision given the supposed threatening behavior of nonlinear leadership.
With that said, I would encourage most readers to wait with making up their mind until Nonlinear has had a chance to leave a response. Save your sanity, block this thread from your browser, and come back in a few weeks once the dust has settled.
Witch hunts do happen among well-intentioned people. I have many more thoughts I want to share, but this is not the right time or place.
I don’t really see the “terrible day for EA” part? Maybe you think Nonlinear is more integral to EA as a whole than I do. To me it seems like an allegation of bad behaviour on the part of a notable but relatively minor actor in the space, that doesn’t seem to particularly reflect a broader pattern.
I don’t necessarily disagree with you, but FWIW I think Sam Bankman-Fried and Alameda would have been honestly described as “a notable but relatively minor actor in the space” during the many years when they were building their resource base, hiring, getting funds, and during which time people knew multiple serious accusations about him/them. I am here trying to execute an algorithm that catches bad actors before they become too powerful. I think Emerson is very ambitious and would like a powerful role in EA/x-risk/etc.
I agree with this, and think it could have been a terrible day for EA if stuff like this surfaced later in a world where Nonlinear had become more influential. But thankfully* we’re not in that world.
(* Thankfully assuming the allegations are broadly true etc etc.)
Considering these accusations (in some form or another) have been out for longer than a year, and Nonlinear has continued to be well respected by the community, I am worried that further “deadline pushing” only serves to launder Nonlinear’s reputation. I am suspicious that many of those who write of the need to “hear both sides” will indeed update if Nonlinear’s response is uncompelling.
I think that Ben probably should have waited the week.
At the same time, I’m still expecting to have a strongly negative opinion about Nonlinear’s leadership’s actions after seeing whatever they end up publishing.
Disclaimer: I formerly interned at Nonlinear.
Here is some additional possibly relevant information:
There was a New Yorker profile on Emerson in 2014, when he was working at Dose. Now, that was 9 years ago, and I think these things often paint an inaccurate picture of a person (though Emerson’s website does lead with “Named ‘The Virologist’ by The New Yorker, Emerson Spartz is one of the world’s leading experts on internet virality …”, so I guess he does not think the article was too bad). At any rate, the profile paints the picture of someone who seems to prioritise other things over epistemics and a healthy information ecosystem.
Nonlinear is or used to be a project of Spartz Philanthropies. According to the IRS website, Spartz Philanthropies had its 501(c)(3) status revoked in 2021 since it had not filed the necessary paperwork for three years straight. Now the Nonlinear website no longer mentions Spartz Philanthropies, and I am unsure whether Nonlinear is a tax-exempt nonprofit or what legal status it has. (ETA: Nonlinear, Inc. is a new 501(c)(3) -- see Drew’s response below.)
Back in 2021, Nonlinear launched its AI safety fund with an announcement post which got some pushback/skepticism in the comments section. Does anyone know whether this fund has made any grants or seeded any new organisations? I have not managed to find any information about it on the Nonlinear website. (ETA: See Drew’s response below.)
Happy to provide more context here.
Nonlinear, Inc is a 501(c)(3). Spartz Philanthropies was an inactive entity that Emerson set up in 2018. We were initially planning on using it as the main entity for Nonlinear. We had filed an extension for the tax returns, but somehow the IRS missed the fact we filed it, which led to the tax-exempt status being automatically revoked. Our accountant said we could appeal it and were very likely to win, since it was an error on their part, and we began the appeal, but it can take years. In the meantime, we were fiscally sponsored by Rethink Charity. The IRS was taking too long to respond to the appeal, so I set up a new entity.
I’ve actually been working on a more complete list of all the projects we’ve funded and incubated! But have been very unproductive the last two months due to a combination of an extremely painful RSI and chronic nausea/gut issues. We changed our name from the Nonlinear Fund to Nonlinear. Kat made a basic list here: https://www.nonlinear.org/
Thanks! I’ve edited my original comment to point to your responses.
Does this mean that what used to be the AI safety fund is no longer focused on AI safety? I am asking because the list on the Nonlinear website seems to have mostly assorted EA meta type projects, and you mention a name change.
Appreciate the comments!
My personal context: I joined Nonlinear full-time in April 2022. We’ve gone back and forth from being AI safety-focused to more generally x-risk-focused. We removed the fund from our name because we didn’t just want to fund projects but also launch relevant ones ourselves, like the Nonlinear Library.
I’ve already written a few comments on this post, but it took me a while to figure out what I wanted to/ought to say as my main response.
I heard rumors of some of this a while back. I’m disappointed to see that the claim about asking an intern to take on legal risk was in fact true. I’m also disappointed to see that there are more worrying allegations beyond those that I’d previously heard.
Despite my object-level position, on the meta-level I think it would likely have been better for Ben to have waited a week and I also don’t find Elliot J Davies’ use of the term brigading to be helpful as I have explained in previous comments.
Whilst we have this really bad situation in front of us, it’s worth noting that the norms we create apply not only to this situation but to future ones as well. And even though I doubt I’ll be persuaded by Nonlinear leadership’s defense, I want them to have the opportunity to try[1].
I’m still really saddened by what seems to have happened here and I hope Alice and Chloe heal/have healed. It seems like Nonlinear leadership was comfortable taking absurd risks with what they asked them to do. It really shouldn’t be at all surprising that it blew up in their face, but, it easily could have ended much worse and I’m really thankful for Alice and Chloe’s sake that this didn’t happen.
Disclaimer: I was previously an intern at Nonlinear.
My expectation is that their defense will convince me that things are less bad than indicated by this post, but still really bad.
RobertM and I are having a “dialogue”[1] on LessWrong with a lot of focus on whether it was appropriate for this to be posted when it was and with info collected so far (e.g. not waiting for Nonlinear response).
What is the optimal frontier for due diligence?
Just wanted to say (without commenting on the points in the dialogue) that I appreciate you and Robert having this discussion, and I think the fact you’re having it is an example of good epistemics.
A lot of people have been angry about these texts made by Kat towards Alice:
> “Given your past behavior, your career in EA would be over in a few DMs, but we aren’t going to do that because we care about you”
> “We’re saying nice things about you publicly and expect you will do the same moving forward”
This sounds like a threat and it’s not how I would have worded it had I been in Kat’s shoes. However, I think it looks much more reasonable if you view it through the hypothesis that a) the bad things Alice is saying about Nonlinear are untrue and b) the bad things Kat has been holding off on saying about Alice are true. Basically, I think Kat’s position is that “If you [Alice] keep spreading lies about us, we will have to defend ourselves by countering with the truth, and unfortunately if these truths got out it would make you look bad (e.g. by painting you as dishonest). That’s why we’ve been trying to avoid going down this route, because we actually care about you and don’t want to hurt your reputation (so you can find jobs), so let’s both just say nice things about each other from now on and put this behind us.”. My sense is that Kat, out of fear that her reputation was being badly and unfairly damaged, emphasized the part where bad things happen to Alice in an attempt to get her to stop spreading misinformation. Again, while this isn’t how I’d have worded those messages, given this context I think it’s much more understandable than it might first seem.
Disclaimer: I’m friends with Kat and know some of her side of the story.
Influencing the creation of Professor Quirrell in HPMOR and being influenced by Professor Quirrell in HPMOR both seem to correlate with being a bad actor in EA—a potential red flag to watch out for.
Who’s the other example?
An alleged sexual abuser who’s been banned from the community for some years. Normally I’d go with “name and shame” but iirc the accusers specifically did not want that to happen. See my earlier notes here.
Confused about the downvotes.
(I gave it a small-downvote) I currently think that representation of the person in question is pretty inaccurate. I have various problems with them, one of the primary ones is that they threatened an EA community institution with a libel lawsuit, which you might have picked up I am not a huge fan of, but your comment to me seemed to be more likely to mislead (and to somewhat miasmically propagate a narrative I consider untrustworthy), and also that specific request for privacy still strikes me as illegitimate (as I have commented on the relevant posts).
Do you have a probability that the figure in question is sexually abusive? (Defined as “with full knowledge of the facts, >60% of readers of this comment would consider my original description fair, even after dropping the word ‘alleged.’”)
I didn’t look into the allegations myself because from my perspective, my opinion on that particular Voldemort figure seems more than a bit overdetermined. (But I agree that it’d be bad to falsely propagate an untrue narrative about a specific deficiency).
Fair. I’m pretty confused about the relevant norms here, I think (as I’m sure you/Ben/Lightcone have noticed) getting whistleblowers to talk is sometimes quite difficult, so respecting their wishes seems like a good heuristic/policy. But I haven’t thought about that particular policy in detail either.
Relevant clip:
Emphasis mine. How do you know someone’s a bad actor, scary to be around, psychopathic, literally Voldemort, etc? Well sometimes the call is actually pretty hard and requires a lot of detailed investigations, nuanced contextual understanding, etc. But in some other times, they’ll just tell you.
I recommend that you use a spoiler tag for that last part. Not everyone who wants to has finished the story!
Edited, thank you!
Props to Ben for the investigation. And my thoughts go out to those who’ve been affected.
As a side-note, I am skeptical of the impact of Nonlinear’s work so far in pure results terms. Their direct work seems pretty mediocre and easy, and the rest has just been throwing money at prizes that don’t seem to have had much, if any, positive impact. I’d be curious how other people evaluate Nonlinear’s work.
I think overall this discussion probably isn’t helped by criticism of their object-level work. I don’t think their work being high-impact or low-impact would make these allegations unimportant, but I can see commenters and Nonlinear themselves feeling obliged to get sucked into it. I would rather keep that discussion to other, less fraught venues.
Personally I have also been skeptical of Nonlinear’s work, BUT before anything else, I just want to say I have not carefully kept track of Nonlinear’s work and this is a pretty uninformed vague impression.
I’m not skeptical because of the prizes specifically, I just think they had ideas that sounded not particularly fruitful, or more costly than they were worth. I do think that there was a lot of theoretical discussion around that type of prize setup before Nonlinear tried it, and I respect the ethos of just try-it-and-see-what-happens for something like that, with minimal downside risk. (Notably, if nobody claims the prize, the money isn’t spent and can just be used for other work.) The best-sounding concepts in the world still have to be tested before we should build lots of infrastructure on them, so I see the prize stuff as a fairly inexpensive experiment, and I think often good coordinators are undervalued. My skepticism is more that despite Nonlinear’s high profile as coordinators, I have no evidence of Nonlinear’s impact, and I’m unconvinced that they’ve found good pressure points for coordination in general. I have also judged them more harshly for this than I otherwise would, due to what I perceive as a gimmicky and overconfident style to some of their written materials. This style unavoidably puts up my guard, but its influence on my assessment may be unfair of me!
Thank you! I am also primarily concerned with the experience of the former employees, which has been exceedingly unpleasant.
I will also say, I heard from Alice and Chloe and others that people said things like “Nonlinear is really high impact, we shouldn’t hurt them” and then also “Nonlinear is too powerful, we should try to change them to become better people”.
When I was deciding whether to invite them to the Lightcone Offices, I thought some of their prizes seemed like good ideas, and it swayed me on accepting their request.
If you’re going to lob comments like “mediocre” at an entire org’s work, the least you could do is actually write a comment on it and give me a reason to update. As it stands, you lob stones from behind a pseudonym, something that strikes me not only as cowardly but also unhelpful. Happy to engage if you take the time to make an actual argument.
This account receives and posts comment requests from people who prefer to not post under their own account (see bio).
I do think it is particularly reasonable in this case given claims of Nonlinear staff being willing to retaliate, as well as an active threat to take legal action (though I am not speaking for the person I posted on behalf of, and other justifications for pseudonymous comments exist even where litigation is not a major concern). I will also say that I asked the commenter to ensure this is a comment that they are willing to endorse personally, just not associate with publicly.
I agree that not much justification was provided about the claim of mediocrity, and I also agree with Ben Millwood’s comment that the object level discussion is likely of lower priority at this stage. This passed my vetting despite that in part because I saw this comment’s value more as a discussion prompt (“I’d be curious how other people evaluate Nonlinear’s work”) that could bring out more information, which may have been helpful in getting a better overview of Nonlinear’s practices or how their staff operated, or what it was like interacting with them (for good or bad), and thought this was positive in expectation. I see Gavriel’s comment (see below) as weakly supportive of this, and can imagine worlds where even more useful information came out as a result.
Disclaimer copied: “I have not carefully kept track of Nonlinear’s work and this is a pretty uninformed vague impression”
Ah, sorry I missed this, but I have updated to agree, as all your reasoning seems sound to me. I don’t know that I’d agree with posting in every instance, but it seems like a decent community service to have, and given the pressures involved I think you likely made the right call in this case. Thanks for explaining, it helped change my views a bit on anon commenting :)
This was written after reading Chloe’s update
Note: I’m trying to focus on “What are good practices for EAs who want to try weird things?” rather than “Should NL/Kat/Emerson be disbanded/reprimanded” until NL posts their rebuttal.
I’m feeling concerned about some specific stuff I’d put in the “working in unusual vistas” bucket. I feel weird because Nonlinear has listed “travel” as a perk on job listings, when it can easily be more of a burden, and looks like it is for certain members of their team (while others have more of the benefits and less of the costs). As someone who lived in East Asia for 7 months across as many different islands/regions, it was really hard for me to be productive and happy in that type of nomadic life. I think it’s hard for most people. But over the last ~year, seeing NL’s (and Kat’s[1]) positive posts, I’d been rethinking that. Now, what I’m reading from Chloe is more in line with my memories of the hassle and why I recommend against it. So I think re: job listings:
Necessary travel should not be listed as a job perk, but as a key job trait with both pros and cons, with emphasis on its suitability only for certain, less common personalities. And hirers for travelling positions should try really hard to screen out the people who won’t like it.
It should be made clear how much of an assistant’s job description is literally “dealing with the hassles of travel and working abroad for the team”. To their credit, the Nonlinear job listings do address this, but it is at the bottom and stays general. I mean it should be semi-granular on the listing, like “~10 hours/week”, because I think the amount it was for Chloe (sounds substantial) might have been a surprise and added to reasonable resentment and trapped and useless feelings.
I also think “helping your boss with their personal tasks” should be discussed more granularly (maybe it was in interviews IDK, but seems good to get specific early and in writing)
I additionally suspect (with low confidence because I don’t know what’s normal in business/nonprofits) that for personal tasks, accounting and pay source should be different for those hours, maybe.
Less relevantly, I have seen Kat promote working abroad and nomadic living as seemingly a really good solution to EAs who follow her on Facebook or her blog. I feel weird that I have not seen Kat post a cons list for travel, but this is a footnote because I also think it isn’t generally her responsibility in her personal social media to talk about more than what she is excited about. It is everyone’s responsibility to do their own research/not be easily influenced by what they see on social media. But personally, I think in future if I see someone recommend EAs try nomadic living/work in remote vistas, I will ask revealing questions (like “Do you have an assistant whose job it is to make this easier for you?”) even though it might feel rude.
This was written after reading Chloe’s update
Note: I’m trying to focus on “What are good practices for people trying weird things?” rather than “Should NL/Kat/Emerson be disbanded/reprimanded” until NL posts their rebuttal.
I notice I’m feeling confused that I’m not reading the type of dialogue I tend to hear in EA/rationalist spaces. Weird arrangements def cut out normy safeguards, but I feel this community does actually have tools to mitigate harm that can come with trying those weird arrangements. I mean this separate from org operational norms like compensation clearly written and signed, and good accounting. For the day-to-day stuff, it’s conversational norms that have evolved alongside the “trying of weird things”, probably for good reason. Norms like frank-but-respectful communication that cuts through discomfort and COI anxieties rather than burying or amplifying them, and tries very hard to perspective-take, check in, and prioritize the other person’s preferences.[1]
I think, eventually, the community could discuss questions like: “If any, what tools/communications norms can help mitigate risks of weird experiments” and “How can leaders who want to try weird arrangements ensure that they are personally ready to navigate a day-to-day landscape which is highly likely to feel destabilizing at times if they communicate poorly? How can CEA and funders ensure the leaders are right in their self assessments here?”
Example: In Emerson’s shoes, if I hoped for Chloe’s help with the St. Barths trip, I can think of ~4 ways he could have approached that differently, guided by rat/EA socialization and processes, that would have been way preferable, even very good on net for Chloe. I can respond with a list if people don’t get what I mean.
I’d definitely appreciate hearing the list spelled out.
Okay but, before I say, I’d like to clarify that I don’t think I’d be perfect at this, which is one reason I’m not leading weird things. But I think if you are gonna make requests like that of employees you live or travel with, you basically have to be (because it gets so much harder then, and this type of communication that makes weirdness safe is the leader’s responsibility, not the employee’s).
Okay, at risk of sounding cringe, it’s things like:
Hi Chloe, some of us were talking about going to St. Barths for the day, would you like to come? As a separate question, would you be willing to help get things ready so we can make it happen before the ferry leaves at X, and possibly be available for some ops tasks throughout the day trip too? Before you answer: I realize it is your day off today, so I wonder if there is a good solution for that? My thought was maybe you can take your day off tomorrow and I pay you overtime to make up for the surprise? I don’t know, does this appeal to you or what do you think?
Hey Chloe, the group of us would really appreciate an Ops lead for today’s trip. But before I ask, I want to clarify that there’s no pressure at all. It’s completely up to you as I bet I can wrangle people to turntake as things come up instead. If you want to take it on though, we can discuss how to make up for your day off or what overtime pay makes sense. Feeling keen or no?
Hi Chloe, asking as a friend, not your boss… We were thinking of going to St. Barths and we wonder what might be your happy price for helping today? Or do you not even want to be engaged with in this way and want your days off to be sacrosanct? I understand that navigating the social stuff while also being employed by me is probably awkward, and I’d like to give you the chance to take the lead and clarify what feels beneficial, not just passably appropriate, for me to ask you about bonus tasks like this. [hopefully this conversation happened early on but it could happen then if not]
[After any reflective but nonconclusive response] Okay, that makes sense. Hm. I am noticing that I want you to have time to think about it, but then feel conflicted that I am on a deadline to make this St. Barth’s trip happen. So let’s get some quick takes, and if you still aren’t sure, hang back and just enjoy your day to yourself as you planned and as I’ve noticed you like to do :) Sound good? [Okay] Ok, what’s your gut take on what minimizes your expected regret? … And, gut take, what maximizes your expected joy and wellbeing?
[As a friend, relies on a jovial environ Chloe is already included in] Okay everyone, let’s put our heads together to make this St. Barths trip work! Raise your hand if you’re coming and I’ll delegate something to you, you can trade your tasks if you want. Chloe could you chime in if you think I’m missing anything? [Everyone, before they head out: Thanks Chloe, enjoy your day :) ]
[If Chloe helps, with or without bonus pay, give lots of check-ins and thanks throughout the day]
I mean, it would depend on how everybody is/the feel of the group, and how much Chloe was needed Sunday, and how much this is considered personal (make the offer out of my pocket/crowdfund out of the group’s personal pockets, e.g. “guys, before I ask Chloe to work on her day off, is anyone willing to put some money forward for her overtime pay? Let’s get some revealed preferences out in the open as to how much we really want this trip and need help with it”) vs professional (okay, it can be considered a teambuilding exercise, but it forces you, the leader, to think how important it is compared to next week’s work, because she will need more time off).
But I feel pretty confident that people can patchwork something together to make this type of thing feel happy and good. Rationalists get weird looks for speaking in this way and coming up with frank, novel solutions, but honestly it can make all the difference. Of course, you have to sort people out in interviews to make sure there is a cultural fit for this level of flexibility and frankness.
I’m curious what others would think of being approached in such a way. Especially Alice or Chloe, though I can understand them not responding.
(Edited to try to escape the cringe but probably failed)
Wow that’s pretty darn good nice job
I’d like to have a conversation that’s broader than just this specific allegation. There are some claims that are uncontested here, but point to a work environment highly vulnerable to misconduct. Eg, living with your employees.
While this is not the norm for EA orgs, it seems like these dynamics are also not super rare. I expect all broad communities like EA will have some level of misconduct, but we should also strive to minimize it however possible.
So, is there anything EA as a whole can do to minimize things like this? I’d also be curious to know whether CEA knew about any of this (and I’d guess they did since even I—who is not well-connected to Nonlinear—heard whispers). And if so, what steps they took to mitigate this. Or do we think this is out of the jurisdiction of the CEA community health team?
Drew is a personal friend of mine, as is a Nonlinear employee not mentioned in this post, so I am saddened to read this. I am waiting to see Nonlinear’s response.
We fixed the results page on the poll so here they are.
Poll: https://viewpoints.xyz/polls/nonlinear-and-community-norms
Results: https://viewpoints.xyz/polls/nonlinear-and-community-norms/results
Surprisingly little agreement on this poll. Unlike most polls I post there is some chance of fake responses here. I dunno, but 154 is a lot of respondents.
A huge chunk of disagreements
Personally I think the answers to this question are pretty wild and deserve more scrutiny. Do 22% of us really think this?
Also a lot where people thought the question was misframed.
When Kat and Joey Savoie worked together (and she was Katherine Savoie), I worked with them extensively. What’s described above is in keeping with my experience of her, and him even more so. Joey was very like how Emerson is described.
I’m not surprised that people in the comments aren’t recognising what’s described as the very bad behaviour that it is, but that is a case of enormous blindness.
Cross-posting Linda Linsefors’ take from LessWrong:
Duncan Sabien replies:
Elizabeth van Nostrand replies:
Linda responds:
Daniel Wyrzykowski replies:
Have there been any updates from Nonlinear? Have they written a response?
They keep saying they’re working on a response. It’s probably around 500 pages by now.
Yes, see here:
https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear?commentId=oSJh4RJvG4Gy4hQ3t
Poll: Was Emerson threatening legal action (suing for defamation) against Ben and his employer (Lightcone) for this being posted a good choice?
(agreevote if you think so, disagreevote if you don’t, comment with your reasoning if you’d like)
I think this is unacceptable, and unless serious evidence appears that Ben behaved dishonestly
in a way nobody seems to currently be claiming (e.g. if he had personally doctored the texts from Kat to add incriminating phrases), I think filing this kind of lawsuit would be cause for the EA community to permanently cut all ties with Nonlinear and with Emerson in particular. I believe this even if it turns out Nonlinear has evidence that the main claims in the post are false. [Edit 12/15/23 -- Nonlinear’s update makes stronger claims about Ben’s actions than I’d seen anyone make when I wrote this, so I’m crossing out “in a way nobody seems to currently be claiming” because it’s no longer accurate. So the applicability of this argument hinges a lot more on the “unless” clause now.]
Reasoning: I think the question of whether Ben should have waited a week is difficult, and I have felt differently about it at different times over the past few days. But the question of whether the choice he made was justifiable is easy: the people he spoke to seem to be terrified of retaliation, and he has at least two strong pieces of direct evidence (Kat’s text, Emerson’s lawsuit threat) and several pieces of indirect evidence (Emerson’s stories about behavior that, while legal, strikes me as highly unethical; Kat offering a very vulnerable Alice housing only under the condition that she not say mean things about Nonlinear; some of the Glassdoor comments) that these fears of retaliation are well-founded. The fear that Emerson or Nonlinear might retaliate in some way in the intervening week to stop the post from being posted seems very reasonable to me, and acting on this fear is justifiable even if it overall turned out to be the wrong choice.
Even if you think Ben made the wrong decision (I currently think maybe he did?), the question is not whether he was correct but whether his choice was so unacceptable that it’s appropriate to respond in a way that has a high risk of directly financially ruining him (defamation lawsuits are notoriously a tool used by abusers to silence their critics because the costs of defense are so high, and based on Emerson’s business experience I am unwilling to believe he doesn’t know this). It clearly wasn’t, and I think it’s imperative we make clear that using expensive lawsuits to win arguments is utterly unacceptable in a community like this one.
I think Habryka has mentioned that Lightcone could withstand a defamation suit, so there’s not a high chance of financially ruining him. I am tentatively in agreement otherwise though.
True! But for the record I definitely don’t have remotely enough personal wealth to cover such a suit. So if libel suits are permissible then you may only hear about credible accusations from people on teams who are willing to back the financial cost, the number of which in my estimation is currently close to 1.
Added: I don’t mean to be more pessimistic than is accurate. I am genuinely uncertain to what extent people will have my back if a lawsuit comes up (Manifold has it at 13%), and my uncertainty range does include “actually quite a lot of people are willing to spend their money to defend my and others’ ability to openly share info like this”.
Surely this would depend on what evidence is revealed in a week? It seems kind of strange to me to see how confident people are in their opinions without knowing what this evidence is (I say this as someone who has already stated that they don’t see any way that Nonlinear leadership comes out of this looking good given what they have already admitted).
Disclaimer: I previously interned at Nonlinear. This comment previously said that I didn’t have knowledge of what information Nonlinear is yet to release, but then I just realised that I actually do know a few things.
I don’t think this is right—whether it’s okay to sue Ben surely depends on the information Ben had at the time of making his decision, not information he didn’t have access to?
It doesn’t seem accurate to characterise Ben as not having access to information if they promise to send it over as soon as they can and a) they don’t unduly delay b) there is no urgent need to publish.
I guess I see Ben as making a bet that the yet-to-revealed information ends up being underwhelming and it feels to me that if he ends up being wrong then some of the downside should accrue to him.
That said, I would really rather not see anyone sue anyone here as it’d be rather damaging to the community.
At the same time, it feels a bit inconsistent to simultaneously be like “you can’t sue me, this is a high-trust community” and “I can’t be bothered waiting a week to see your evidence”. Part of the reason why this is a high-trust community is that people agree to reasonable requests when they are able to.
I would feel differently if Alice or Chloe had written the post themselves as they have direct experience of what happened, but as a third party, I think that Ben probably should have waited.
Before I agree-voted here, this comment had three disagree votes and no agree votes. Are the people who disagree-voted missing the fact that Ben had a 3h conversation with Emerson (I think; otherwise Kat) about all the allegations? Surely, if you think something about Ben’s summary of his findings is so massively wrong that it warrants a libel lawsuit threat, it would have come up in that 3h conversation. Besides, Ben’s post already makes it clear that Emerson and Kat dispute Alice’s judgment, so it’s not like he’s knowingly lying about things with intent to do harm (a requirement for a libel lawsuit to be justified). Instead, he’s just recounting things his sources said with some IMO pretty sane-seeming caveats about what seems more contentious or less contentious. Even if some of that turns out to be more misleading than the average reader would expect before Nonlinear provides more info from their side of things, that doesn’t seem anywhere close to something that warrants this sort of escalation. So, I’m taken aback by what I perceive to be some community members being taken in by cheap tactics. To think that a lawsuit threat has even just a shred of legitimacy here strikes me as totally insane. That lawsuit threat is such an obvious red flag in my view.
Quick thing to re-emphasize. It was not saying we’d sue if they posted. It was saying if they posted without giving us a week to send them the evidence that we thought would largely update the post they wrote.
Ben’s post has already been changed in many ways based on our conversation and the information we showed him on the call. It seems like basic truth-seeking behavior to hear the other side and see counter-evidence.
He sent us the draft on a day he knew we were traveling and had sketchy internet, and that one of our members was sick. He’d had months and hundreds of hours to gather evidence for and write a >10k word post and he gave us a day to respond on a day he knew we were unable to respond well.
We were traveling that entire day (which he knew) and when we asked why there was a rush, couldn’t he wait a week for more information that might sizeably update him, he said he couldn’t wait and wouldn’t tell us why.
We were not asking him not to post. We were asking him to see our evidence before posting.
I think this is a really important distinction.
We were not suppressing evidence, but trying to share it.
And they were refusing to look at it and did not care at all about the effects this would have on our ability to do good in the future. And also, the fact that they posted and paid the ex-employees before they saw our evidence would make it psychologically impossible for them to update.
I used a diff checker to find the differences between the current post and the original post. There seem to be two:
“Alice worked there from November 2021 to June 2022” became “Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org from around February”
“using Lightcone funds” became “using personal funds”
Your claim
seems false. Possibly I made a mistake, or Ben made edits and you saw them and then Ben reverted them—if so, I encourage you/anyone to point to another specific edit, possibly on other archive.org versions.
Update: Kat guesses she was thinking of changes from a near-final draft rather than changes from the first published version.
Those do seem like significant differences, and at least the 1st one is something that the Nonlinear team were asking to be corrected as very relevant to the sick Alice situation. It is an update for me that they were correct.
The email is contradictory as to whether the desire to sue is based on the content of the post or the duration before posting, with, I believe, the former appearing first in the email.
We tried to make it crystal clear that it was about seeing the evidence first, rather than posting at all.
Here’s the full email.
“Importantly, we are not asking Ben to not publish, just to give until the end of next week to gather and share the evidence we have.”
The line I’m referring to is “if published as is we intend to pursue legal action”. That is consistent with being fine with him publishing at all, but not consistent with being fine if he decides to not change anything in the post after getting all the facts in a week.
Combining this line with the ones you mentioned gives the impression that the message you’re trying to convey is ‘what Ben has written is false and libellous, we have asked him to wait a week so he can correct his post before publishing, after getting all the facts. If he doesn’t do both these things, we intend to sue’, and I think it’s reasonable for anyone to have interpreted it this way, even if that’s not what you intended.
As has been said elsewhere by William Bradshaw, this threat makes me a lot (perhaps even something near his 98%) less likely to sympathize and want to support them with where they’re at.
Oliver seems to be handling the threat super well, but as he mentions, those who made this complaint likely don’t have the resources to fend off such an attack, seeming to prove Ben’s point that he had to do it, because how could anyone expect them to speak up if Emerson is still willing to lash out and attack a respected EA org?
But I’m also biased perhaps, because I agree with Rockwell’s plea that we don’t turn into a group that uses litigation to settle our differences. Prove Ben was wrong and said this to hurt Nonlinear in a forum post, or in argument, and I’ll happily come to your cause. Use the money that you have to sue him and Lightcone “to the maximum legal amount” and you’ve given me nearly no pathway to believe in you or your cause.
Geoffrey Miller does point out that the law exists to keep good norms around sharing negative information widely, but I’m apt to agree with Oliver that this more often likely has a chilling effect, and that those norms should be incorporated at the level of the community, in the court of EA public opinion.
One thing I’m confused about: Emerson is the one making threats, so how do I update on the rest of the Nonlinear team? The negative view is that they are moving in different ways in lockstep, hoping that if they attack the issue from all sides (from being nice to threatening legal action) it will be more likely to go away. The charitable view is that both Drew and Kat have gotten involved with someone who has some deep personal issues, but who likely has done much positive and good in their lives and has played a big part in them for a while now; thus wanting to support him is a behavior any good family would undertake, working privately to rein in the more negative outbursts.
I was also uncertain about this, but Kat’s comment above seems to indicate (though not outright say) that she supports the threat to sue.
Yeah I agree. One update for me: Ben’s new post seems to imply that Drew is not implicated in most of this, and that seems in line with some of the comments, so I’m really tentative in updating at all on him and where he’s at.
I made a little poll to try and figure out the cruxes here:
Normally you can swipe to vote but that seems to be broken. I’ll try and fix it soon.
I’ll post results as we get them.
https://viewpoints.xyz/polls/nonlinear-and-community-norms
The one I found most strange is the percentage of people who disagreed that it was okay for an EA org to break minor laws. I want to flag that animal orgs doing undercover operations receive no pushback from EA as far as I can see, even though these are against the law. I can think of other things as well that the EA community condones or doesn’t criticize. Not saying there isn’t a difference between what is described here and the examples I listed above, but it is strange to see that view.
Woah, the vast majority of undercover investigations carried out by animal advocacy organizations are legal, which is why attempts to make them illegal (such as ag-gag laws in the U.S.) receive so much attention and pushback, with many either not passing or being overturned. I would greatly caution against even casually suggesting an organization is engaged in illegal activity. That said, as you’re possibly getting at in your final sentence, there is a big difference between active, intentional civil disobedience as part of a strategic campaign effort and lax disregard for the law for personal convenience or gain.
For those downvoting, is the disagreement factual, i.e. you believe animal orgs are routinely engaging in illegal activity? Or something else?
Are they not typically committing trespass if nothing else?
It is entirely dependent on the type of investigation and the laws of the municipality. Regarding trespass, often investigators will be employed by the facility they are investigating and onsite as part of their employment, while documenting conditions on camera. Increasingly, drone footage is used.
I think it is dangerous and harmful to make a blanket and public statement accusing a large number of orgs/individuals of illegal activity.
I would have expected that becoming an employee in order to record videos and publish them would be a violation of the employment contracts. Certainly in my industry (finance) employees are not allowed to record sensitive information and publish it (though they can whistle-blow to the regulator if they want). Is there some rule or principle that makes it legal to break confidentiality clauses in these cases, or do farms just not bother to include them in contracts?
Additionally, my impression was that it often was illegal to fly drones over private property without permission, especially if you are invading privacy.
Update: the Eighth Circuit has just upheld a ban in Iowa on using deception to gain employment in order to cause economic harm to the employer. So my guess is these investigations are illegal now, at least in Iowa.
At least harder. One loophole for these kinds of laws is that the intent to deceive has to be there at the time the false employment statement was made. As a commenter on the linked post noted by analogy, loan fraud exists when you never intended to repay, not when you decided not to after getting the loan.
“Illegal” can be a tricky word when talking about private-party civil liability, because it can conjure up violation of criminal or at least public-regulatory law.
Also, I wouldn’t assume all low-wage workers even have an employment contract. Often, US employers really don’t want their employee handbooks characterized as an employment contract because that would give employees more rights.
Suing an individual activist or a tiny org in tort or contract law can easily backfire via the Streisand effect. Anyone with serious money may be savvy enough not to cross the line into conduct that would impose liability on them or their org. Generally can’t go after the media for republishing the info in the US; it isn’t defamatory.
People shouldn’t be downvoting Rockwell’s comment. She’s got more experience in this field than almost everyone.
I think an even more relevant question would be “Is it OK to order your employees to break laws, when their job description did not mention civil disobedience campaigns or give any other indication that they would be required to break laws as part of their job?”
I was also surprised by this, and I wonder how many people interpreted “It is acceptable for an EA org to break minor laws” as “It is acceptable for an EA org to break laws willy-nilly as long as it feels like the laws are ‘minor’”, rather than interpreting it as “It is acceptable for an EA org to break at least one minor law ever”.
How easy is it to break literally zero laws? There are an awful lot of laws on the books in the US, many of which aren’t enforced.
I answered disagree, interpreting it in context as “is it acceptable for an EA org to break laws similar in magnitude to driving without a license and purchasing illegal drugs, for personal convenience or profit?”. (Btw I don’t think these are minor. An unprepared driver can kill someone. You can get up to 5 years in prison for possessing marijuana in Puerto Rico.) I would be OK with an EA org engaging in some forms of civil disobedience (e.g. criticizing an authoritarian government that outlawed criticism) or accidentally breaking some obscure law that everyone forgot existed.
I’m not an expert on this, but I agree with Rockwell; my impression was that EA animal orgs try pretty hard to make sure their undercover investigations are done legally, so I don’t know if that’s a good comparison case. I’d be curious to hear the others that you reference.
As for why I think EA orgs shouldn’t break minor laws: I think following the rules of society — even the inefficient ones — is a really good heuristic to avoid accidentally causing harm, incurring unexpected penalties, or damaging your reputation. In the context of this article, an EA org admitting to knowingly breaking a low-income jurisdiction’s laws on driver licenses because the fines are correspondingly low makes it seem like they don’t care about the spirit of the law (which is to protect Puerto Ricans on the road) or the general principle that it’s good to follow the rules of the society in which you are a visitor.
It’s kind of a caricature of the meme that wealthy people are above the law because they can just pay the fines. I don’t think playing into that is a great way to build a reputation as an altruist.
Thanks for doing this, Nathan! I’m finding these results interesting and informative, and I expect this to elevate the discourse.
Regarding this part: “Third; the semi-employee was also asked to bring some productivity-related and recreational drugs over the border for us. In general we didn’t push hard on this. For one, this is an activity she already did (with other drugs). For two, we thought it didn’t need prescription in the country she was visiting, and when we found out otherwise, we dropped it. And for three, she used a bunch of our drugs herself, so it’s not fair to say that this request was made entirely selfishly. I think this just seems like an extension of the sorts of actions she’s generally open to.”
Has Nonlinear just openly admitted to providing drugs to an employee and requesting them to procure recreational substances? This situation reveals several complex aspects: the cohabitation of employees, a romantic relationship between the owner’s sibling and an employee, and the alleged distribution of recreational drugs to the workforce. We need clarity on the specific substances involved, such as cannabis or hashish, and the countries where employees were tasked with transporting these drugs. Understanding the legal context in these countries is crucial.
We must consider though, how our actions reflect our community to the young people who trust us to make a positive impact globally. Even if one believes in the decriminalization of all drugs and has well-argued views, providing recreational drugs to employees raises ethical concerns. Moreover, the practice of employees dispensing medication, especially when not a matter of life or death, warrants scrutiny.
It’s puzzling that there seems to be a lack of acknowledgment regarding the ethical boundaries crossed in this situation within the context of workplace ethics.
I have known Kat Woods for the past four years. Throughout this period, Kat has served multiple roles in my professional life—ranging from an enthusiastic cheerleader to a strategic brainstorming partner, and from a well-connected networker to an invaluable advisor. While I am not in a position to speak to the allegations mentioned previously, I can attest to the consistently exceptional nature of my personal experiences with Kat. She is a warm, empathetic, and supportive individual who is unwaveringly dedicated to making a lasting positive impact on the world.
For the sake of full transparency, could you kindly elucidate whether there exists any form of interdependency between yourself, Kat, or any other members of the Nonlinear team? Moreover, it would be greatly appreciated if you could clarify whether you consider Kat a friend, as this could potentially introduce bias into your perspective. Lastly, have you been specifically requested by Kat or any other individual affiliated with the Nonlinear team to compose this particular comment?
Worst credible information about a charity that I would expect based on the following description (pulled from Google’s generative AI summary: may or may not be accurate, but seemed like the best balance to me of engaging with some information quickly):
I am not describing a charity with ideal management practices, but envisioning one with 25 employees, active for 5 years, and which has poor but not shockingly or offensively bad governance by the standards of EA orgs. Someplace where I wouldn’t be worried if a friend worked there, but I would sympathetically listen to their complaints and consider them not the best use of my marginal dollar.
Credible accusations of sexual harassment by at least one current or former employee
One or more incidents of notable financial mismanagement
Promised use of donor funds that did not materialize into a finished project (less than 10% of one year’s annual budget in scope)
Credible evidence of evading employment or tax law in some way that, when framed by a hostile observer, looks “pretty bad”: I do not expect sweatshops, but encouraging employees to violate the terms of visas or preferentially hiring donors in a way that can be made to sound scary.
Multiple stories of funding going to friends and personal contacts rather than “objectively better” candidates who did not have personal contacts.
Credible evidence that a moderately important claim they fundraised on continued to be propagated after it stopped being true or the evidence for it was much weaker than previously thought.
Maybe I am excessively cynical about what bad things happen at small charities, but this feels like a reasonable list to me. There may be other events of similar badness.
To check, I am reading you as saying that you used Google’s AI to generate that description of Nonlinear, and then you wrote down what you expected, assuming that it had 25 employees and was active for 5 years.
It does seem that the org is much smaller than you expected. Nonetheless I’d be interested to read about how surprised you are by the content of the post after stating your anticipations.
Correct: I’m vaguely aware of Kat Woods posting on FB, but haven’t investigated Nonlinear in any depth before: having an explicit definition of “what information I’m working with” seemed useful.
Yes, Nonlinear is smaller than expected.
I outlined a bad org with problems, even after adjusting for a hostile reporter and a vengeful ex-employee. Setting aside that I trust you personally, I think the evidence is somewhat weaker than I expected, but the allegations are stronger/worse. Overall, it was a negative update about Nonlinear.
Sure, though there is a question of whether such behavior should be punished in this community.
I don’t think that this is a good state of affairs. I think that the points I raise range from “this should be completely unacceptable” (4, 6) to “if this is the worst credible information that can be shared, the org is probably doing very well” (3, 5). This is not a description of an org that I would support! But if a friend told me they were doing good work there and they felt the problems were blown out of proportion or context by a hostile critic and a vengeful ex-employee with an axe to grind, I would take them seriously and not say “you have to leave immediately. I can cover two months’ salary for you while you find another job, but I believe that strongly that you should not work here.”
As always, context is important: “the head of the org is a serial harasser with no effective checks” and “we fired someone when their subordinate came forward with a sexual harassment allegation that, after a one-week investigation, we validated and found credible: the victim is happily employed by us today” are very different states of affairs. If someone is sharing the worst credible information, then the difference between “we were slow to update on X” and “they knew X was false from the report by A, but didn’t change their marketing materials for another six months” can be hard to distinguish.
Running an org is complicated and hard, and I think many people underestimate how much negative spin a third party with access to full information can include. I am deliberately not modelling “Ben Pace, who I have known for almost a decade” and instead framing “hostile journalist looking for clicks”, which I think is the appropriate frame of reference.
I have had the opportunity to engage with individuals from both sides of this narrative. In terms of the individuals who were wronged, I find no reason to doubt their accounts. During my conversation with one individual, it was apparent that they harbored a deep apprehension about expressing their views on Nonlinear. To the best of my recollection, it appears that individuals affiliated with Nonlinear made efforts to influence the funding-related processes in which they were involved, or, perhaps, made implicit threats to tarnish their reputation. There were reports of Nonlinear spreading damaging rumors about this person among individuals of influence who were pertinent to their career.
Additionally, I had an encounter with a member of the Nonlinear team, and the interaction left a markedly negative impression on me. Their conduct oscillated between affectionate overtures and profoundly manipulative behavior, which gave rise to an unsettling sensation. As a mature individual, I do not easily succumb to fear or intimidation, but there was a discernible disconcerting quality in our discourse, particularly in their openness about using people for personal gain.
While I acknowledge that my impressions are inherently subjective, the depictions outlined in this article appear to closely correspond with my own experiences on both the side of the alleged victims and the accused.
I possess some additional information; however, I am currently inclined to maintain my anonymity. This inclination is driven by Emerson’s efforts to employ legal threats in an attempt to intimidate the author.
Do you have ways to substantiate these claims?
These statements primarily consist of subjective impressions regarding the characters of the individuals involved, rather than concrete claims. The responsibility of providing proof does not rest with me. Regarding potential claims, such as Nonlinear actively attempting to tarnish victims’ reputation, it may be prudent to approach the Community Health department to inquire if they have been approached by Nonlinear. Such an inquiry could substantiate the validity of these rumors.
Additionally, should further investigation be desired, it might be worthwhile to contact relevant funders in the space to ascertain if they have ever been approached regarding the victims and the nature of those interactions. It’s essential to recognize that we are currently navigating the realm of hearsay and gossip. A capable investigator, therefore, holds the key to uncovering the factual truth. The pertinent question remains as to who will assume the responsibility of delving deeper into this case, or indeed, if an investigation is warranted at all.
In light of the established facts, including instances of psychological manipulation to establish a purported “family unit,” solicitation of employees to transport recreational substances across borders, and insistence on driving without a valid license, coupled with threats to damage reputations and legal action against the author of this post – all of which have been corroborated by Nonlinear – the paramount issue emerges: How many instances of misconduct must accumulate before a decision is reached to exclude Nonlinear from this community?
If a more extensive examination is deemed necessary, it should be entrusted to the appropriate authorities. In this context, the Community Health unit should stand ready to address such matters with utmost professionalism, impartiality, and a deep commitment to the welfare of the victims of any alleged abuse.
Thanks for the questions Morpheus_Trinity. I’m sorry but we are not able to give a response to most of your questions. This comment provides a partial answer.
There’s something darkly funny about the idea that one would need to “be a shark,” “move fast and break things,” “threaten and coerce employees,” “crush enemies”...
All to… publish a podcast of already written articles? Do some career coaching?
I feel like this is a cheap shot, and don’t like seeing it on the top of this discussion.
I think it can be easy to belittle the accomplishments of basically any org. Most startups seem very unimpressive when they’re small.
A very quick review would show other initiatives they’ve worked on. Just go to their tag, for instance:
https://forum.effectivealtruism.org/topics/nonlinear-fund
(All this isn’t to say where I side on the broader discussion. I think the focus now should be on figuring out the key issues here, and I don’t think comments like this help with that. I’m fine with comments like this in smaller channels or with way fewer upvotes, but feel very awkward seeing this on top.)
I can imagine a world where someone, driven by high levels of insecurity and low self-esteem, would go to great lengths to control those around them, despite the personal cost. In a Spotify podcast about narcissism that I once listened to, the author shared stories of people who pretended to have a serious illness like cancer, even going so far as to shave their heads and induce vomiting, all in an attempt to control their romantic partners.
Have people seen this?
This was posted in the comments on Eliezer’s FB profile.
I’ve known Kat Woods for as long as Eric Chisholm has. I first met Eric several years before either of us first got involved in the EA or rationality communities. I had a phone call with him a few hours ago letting him know that this screencap was up on this forum. He was displeased you didn’t let him know yourself that you started this thread.
He is extremely busy for the rest of the month. He isn’t on the EA Forum either. Otherwise, I don’t speak for Eric. I’ve also made my own reply in the comment thread Eric started on Eliezer’s Facebook post. I’m assuming you’ll see the rest of that comment thread on Facebook too.
You can send me a private message to talk or ask me about whatever, or not, as you please. I don’t know who you are.
For anyone else curious, here is a Google Doc I’ve started writing up about the origins of the EA and rationality groups in Vancouver.
https://docs.google.com/document/d/1p8MPC5j2aZrVX_ugBSHy8-N9HSHWiulR5GHJBfKhQe8/edit?usp=sharing
I don’t personally think posting this here is particularly helpful or adds much to the conversation.
It appears that Nonlinear has reached out to several individuals, likely more than one, to request positive comments about their interactions. To maintain a balanced perspective and offer a more comprehensive view, it would be fair and valuable to share experiences from the other side of the spectrum. This would be especially beneficial for those who have only encountered positive interactions with Nonlinear and may benefit from a more well-rounded understanding.
Since Alice met Emerson at an EAG, I’d like to hear what the CEA’s response to this is. I am curious how this sort of thing could be prevented in the future. Perhaps if someone who works for or owns a company meets someone they want to recruit at an EAG, there should be some protections for the young person attending the EAG (for example, the company supplies the CEA with data about who they recruited, how much they plan to pay them, etc.). I think young people attending an EAG would assume that the more senior people attending, who may be potential employers, have been vetted and are acting in good faith. But if that isn’t the case (which clearly wasn’t here), then there is a serious problem. This is really concerning to me as someone who is currently in university, who knows young people who are eager to attend or have attended EAGs, and who could fall prey to people like this.
I don’t believe CEA actually has that many resources to deeply vet organizations.
If someone were interested in donating to them enough money for them to do more vetting, I wouldn’t be surprised if they would do more.
I’d expect that the funders would have done more vetting. That said, some of the EA funders now are pretty time-constrained and don’t do very deep vetting.
My guess is that this sort of thing could be prevented with increased vetting and mentorship/oversight. In some worlds, strong managers could find ways for Nonlinear to have done good work, while discouraging/preventing the bad parts of it.
But, this is pretty expensive, and I don’t think there’s a lot of enthusiasm/interest for expanding this area soon. In fairness, note that it is difficult to set up this infrastructure, and the results are often fairly amorphous (it’s hard to detect problems that aren’t happening) and long-term.
Part of the reason I think it was worth Ben/Lightcone prioritizing this investigation is as a retroactive version of “evaluations.”
Like, it is pretty expensive to “vet” things.
But, if your org has practices that lead to people getting hurt (whether intentionally or not), and it’s reasonably likely that those will eventually come to light, orgs are more likely to proactively put more effort into avoiding this sort of outcome.
That sounds a lot like what I picture as an “evaluation”?
I agree that spending time on evaluations/investigations like this is valuable.
Generally, I agree that the more (competent) evaluations/investigations are done, the less orgs will feel incentivized to do things that would look bad if revealed.
(I think we mainly agree, it’s just terminology here)
Thanks for the reply! I guess I thought that since the CEA already does vet people before they can attend EAG, that maybe this wouldn’t be that hard to do in practice. But I see that most people disagree with me and I appreciate your reply!
Yea. I think CEA does much less vetting than something like this would require. Ben put in hundreds of hours in this case. Maybe CEA has 10-60 minutes to decide on each organization that joins? Most of the time, simple questions like “were they funded by EA donors, and can a few EAs vouch for them?” suffice.
I think Nonlinear would seem pretty competitive with a quick review (you notice they were funded by EAs, the team is EA-centric, and they do projects that are used by EAs).
Yeah, that makes sense. Thanks for explaining.
Are there any conferences (regardless of field) that do this?
I have no idea because I have never gone to a conference. I would expect that at most professional conferences the senior attendees who are offering careers (maybe universities or hospitals offering research positions) would have a minimum level of professionalism in the employment opportunity they are offering the junior attendees, but I genuinely have no idea how these things work! My concern really stems from meeting a lot of highly capable, excited, intelligent, young people at my university group, and wanting to make sure that they are protected! I hope that comes across in my question. I appreciate Catherine’s response though, and I do think this is harder to do in practice than I considered.
Another idea I had is that talking to young attendees about what to look for in an employer might be a good idea, but maybe this is already done, or it has been considered and vetoed, and I don’t know!
I’m on CEA’s Community Health and Special Projects team, and I sometimes contribute to EAG and EAGx event admissions and speaker decisions. I can understand your concern, Lauren Maria. I’d really like for EA events to be places where attendees can have a high level of confidence in the other attendees (especially the attendees in positions of power). CEA does a small amount of vetting of speakers and organisations attending the career fairs. We also have our regular admissions process, where we sometimes choose to reject people from attending the conference if we have reasons to think their attendance would be bad for others (the most common reason is getting complaints of poor behaviour from members of the EA community). This hopefully reduces the risk, but people will still attend who could cause harm.
My main advice is to encourage community members to not implicitly trust others at EA events. Do your own due diligence, and talk it over with trusted friends, family, or mentors before making large decisions.
Do you have plans to exclude Nonlinear from the events in the near future?
Hey Morpheus. This comment provides a partial answer to your question.
Also: “First; the formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You might think this is a substantial legal risk, but basically it isn’t, as you can see here, the general range of fines for issues around not-having-a-license in Puerto Rico is in the range of $25 to $500, which just isn’t that bad.”
Nonlinear appears to grapple with discerning the confluence of legality, costs, and ethical considerations in this particular scenario. It may be tempting to view this action as bearing relatively low criminal risks, given that the primary concern is likely monetary fines. However, we must also inquire whether it constitutes a judicious norm or ethical practice to enlist multiple employees in undertaking illegal activities on behalf of the organization. For, despite the ostensibly trivial nature of the potential financial consequences, the greater cost may be in terms of fostering a secure working environment—one in which employees can thrive, experience validation, and be treated with the utmost respect.
The delineation of boundaries within utilitarian decision-making often proves intricate and context-dependent, as is characteristic of ethical deliberations. Yet, one might reasonably posit that, given the financial means available to Emerson, arranging the necessary licensing examination for an employee or obtaining recreational drugs personally should not pose an insurmountable challenge.
Central to this discourse is an examination of the organizational environment under consideration and how it presents itself to the global community.
48 Laws of Power sounds like quite the red flag of a book! It’s usually quite hard to know if someone begrudgingly takes on zero-sum worldviews for tactical reasons or if they’re predisposed / looking for an excuse to be cunning, but an announcement like this (in the form of just being excited about this book) seems like a clear surrender of anyone’s obligation to act cooperatively toward you.
48 Laws of Power, along with Robert Greene’s other books, are fairly popular, especially around some entrepreneur / hustle culture circles.
https://en.wikipedia.org/wiki/Robert_Greene_(American_author)
Many top books deal with controversial topics. For example, The Prince, or Lolita.
I’d be hesitant to update much on someone’s reading choices like this.
Yes! To be clear, reading (or many forms of recommending) is not the red flag; the curiosity, or a DADA-like view of the value proposition of books like that, makes sense to me. The specific way it comes across in the passage on the Adorian Deck saga definitely makes hiding behind “defensive cynicism” very weak and sound almost dishonest. The broader view is more charitable toward Emerson in this particular way (see this subthread).
This was twisted to make me seem like a villain. I recommended it as a book specifically to read to be able to defend against unethical people who use those tactics offensively—Defense Against the Dark Arts.
My comment was written while I was still mid-way through reading the OP. Earlier in the essay there’s an account of the Adorian Deck situation, then the excerpts from the book, which is as far as I got before I wrote this comment. Only later in the OP does the case that Emerson is interested in literature like this for DADA reasons become clearer and more defensible.
For commenting before I got to the end of the post, I apologize.
Thanks for updating this! This points at something that concerned me about the structure of the original post—Alice or Chloe accuse me of something, but (in the event it was actually covered in my one conversation with Ben) my response to it (or, rather, Ben’s paraphrase) might only be included 8,000 words later, and still likely missing important context I would want to add.
Regarding ‘use those tactics offensively’: did you intend to imply it’s sometimes okay to use these tactics defensively, and that you should learn how to do so?
There are so many things wrong with this post that I’m not entirely sure where to start. Here are a few key thoughts on this:
-EA preaches rationalism. As part of rationalism, to understand something truly, you need to investigate both sides of the argument. Yet the author specifically decided to only look at one side of the argument. How can that possibly be a rationalist approach to truth-seeking? If you’re going to write a defamation article about someone, especially in EA, please make sure to go about it with the logical rigor you would give any issue.
-I’ve been working with Kat and Nonlinear for years now and I heard about the hiring process, the employment issues, and the nasty separation. I can guarantee you from my perspective as a coach that a good number of the items mentioned here are abjectly false. I think the worst mistake Kat made was to not have a contract in writing with both of her employees (Chloe’s agreement was in writing) detailing the terms of their work engagement.
-I’m not seeing information collected from other Nonlinear employees, which makes me wonder why there’s a biased data sample here. Again, if you’re spending the amount of time and effort that was put into this post to defame someone, choose an appropriate data sample.
-Have you ever been through or seen people go through a divorce? Nasty splits happen all the time, and the anger can cloud retrospective judgment. Yet when we hear someone complain about how bad their ex was, we take it with a grain of salt and assume that personal prejudice is clouding their impression of the person (which is usually true). Why isn’t that factor taken into account?
-In general, I think it’s not a good idea to live with the people you work with. It destroys relationships. So it probably wasn’t a good position to start with. I’m not surprised it went sour—how often do people not have great relationships with their roommates? And when you compound that with a built-in hierarchy of employee and boss, it can make it more challenging. It’s possible Alice and Chloe didn’t know what they were getting themselves into. But that brings me back to Kat’s mistake of not getting it in writing for both of them. Their mistake does not give them an excuse for libel.
Honestly, I’m very disappointed in the author for writing a non-rigorous, slanderous accusation of an organization that does a whole lot of good, especially when I know firsthand that it’s false. It makes me lose faith in the integrity of the rationalist community.
This point, and similar arguments made by other commenters above, seems to not fully grasp:
1) These complaints were made over a year ago
2) Non-linear were given time to respond prior to publication (just not an indefinite amount of time)
3) Non-linear can respond by commenting themselves at any point
Would you be able to give some specific examples of claims you believe are false, and why they are false?
What are you accusing of being slanderous?