How do we prevent the practice of exclusively seeking and publishing negative information, without fact-checking, from becoming an acceptable norm?
Re: Checking that claims are true
Adding on as a former Nonlinear intern who has been aware of a “falling out” between Alice and Nonlinear for almost a year now:
To my knowledge, Nonlinear was given very few (practically no) opportunities to respond to the many claims made in “Sharing Information About Nonlinear” before they were posted, despite repeatedly telling Ben and some CEA employees, over several months, that counter-evidence was available.
I understand that the power asymmetry, high-trust environment, and ethical standards within EA make this complicated to resolve. However, my issue is that the vast majority of the claims made were easily verifiable or falsifiable. Things like payment or non-payment, delivery orders, messages, receipts, who stayed where, etc. all have paper trails. If it’s so trivially easy to verify, there is a responsibility to verify!
I’m not against Ben and Alice choosing to post this. I believe we should normalise people exercising their option to speak out publicly. The alternative is being silenced by massive power asymmetry.
What I am against is the way these allegations were made, which did not prioritise verifying the claims even when repeatedly presented with significant, factual counter-evidence.
Why was Nonlinear not given some chance to present counter-evidence? The initial investigation clearly took months to gather, yet Kat and Emerson were only presented with it a few days (two days, I think) before posting, and only after reaching out to Ben several times! Even granting Nonlinear a single day to submit an official refutation of the top 5-10 claims for review would have made a difference.[1] And that’s before factoring in the asymmetry between refuting allegations with evidence and simply making them.
I think the handling of this community issue was not healthy for EA/longtermism. Fewer people will read this post than read the initial allegations, and Nonlinear’s reputation has definitely been harmed. On top of that, future whistleblowers may now be less likely to be believed. I don’t see this as a win for anyone.
Personal Story: How unverified allegations cause harm to real people
Throughout this discussion, there was an undertone that over-weighting Alice’s claims justified the increased reputational risk to Nonlinear: Kat and Emerson are “better off” than Alice, so harming them is a more “acceptable” risk; they will still do fine, whereas Alice is new and less established in EA.
I’d like to say that these allegations don’t just affect Emerson and Kat. They affect the many independent AI Safety researchers Nonlinear helps fund.[2] They also affect Nonlinear’s other employees. They have personally affected me. I am from Southeast Asia, where it’s much harder to find work in EA/longtermism than in EA hubs. Nonlinear is the first (and currently only) EA org I’ve interned at.
Nonlinear had formally stopped hiring interns by the time I applied, due to the incidents mentioned above. I had contributed to the Superlinear bounty platform as a remote volunteer, without knowing it was owned by Nonlinear, or even what Nonlinear was. I had spent so much time trying to contribute to EA part-time that I wanted to make the experience easier for others.
When I was hired as an intern, I texted a friend: “What’s Nonlinear? Are they … like, a big deal?” My friend explained that having Nonlinear as a reference would help me gain admission to EA conferences and be taken seriously in EA job applications.
Now that Nonlinear’s reputation within EA has been seriously harmed, I’ve been very concerned about how this affects my ability to contribute within EA. Should I list Nonlinear/Kat as references and risk very negative associations, or omit them and risk being overlooked in favour of applicants who do have references from prominent EAs? It means a lot to me because, as a non-US/EU/UK citizen, I know I’m always applying at a significant disadvantage.[3] I will always have fewer opportunities than an EA born in London who attends a prestigious UK college with an active EA chapter and many EA internship options, and who doesn’t face additional visa requirements. And if I get rejected for a role, I often don’t get to know why.
I didn’t mention this before because I cared about whether Alice was actually abused. I had a hunch they were making false claims, but I didn’t want to invalidate victims who might be telling the truth. As of now, that seems … less likely.
These allegations do cause harm: to me, to other Nonlinear employees trying to contribute to EA, and to the people Nonlinear helps through our work.
In the future, please verify such claims more seriously. Thank you.
[1] The first time I asked Nonlinear about the allegations, it took me maybe 5-10 minutes to figure out that there were multiple misleading statements, because I was shown message logs.
[2] In fundraising, reputation matters. Serious, public allegations of abuse mean funders are (rightfully) hesitant, and less funding goes to researchers.
[3] If you are reading this and trying to get into AI Safety/longtermism from a non-EA hub, do reach out and I’ll try to reply when I can! We gotta support each other >:)