To riff off a particularly disturbing line in the article:
Anyone who, as a member of the AI safety community, has committed or is committing sexual assault is harming the advancement of AI safety, and this Forum poster suggests that an agentic option for those people would be to remove themselves from the community as soon as possible. (I mean, go find a non-AI job at Google or something.)
Whoever suggested to a survivor that they should consider death by suicide should also leave ASAP.
[Edit to add: My sentiment is not limited to sexual assault; many forms of sexual misconduct that do not involve assault warrant the same sentiment.]
I share this sentiment.
What you’re referring to in the last sentence sounds like evil that doesn’t even bother to hide.
But this other part maybe warrants a bit of engagement:
She says others in the community told her that allegations of misconduct harmed the advancement of AI safety.
If the allegations are true and serious, then I think it makes sense even just on deterrence grounds for people to have their pursuits harmed, no matter their entanglement with EA/AI safety or their ability to contribute to important causes. In addition, even if we went with the act-utilitarian logic of “how much good can this person do?”, I don’t buy that interpersonally callous, predatory individuals are good for a research community (no matter how smart or accomplished they seem). Finding out that someone does things that warrant their exclusion from the community (and damage its reputation) is really strong evidence that they weren’t serious enough about having a positive impact. One would have to be scarily good at mental gymnastics to think otherwise – to think that this isn’t a bad sign about someone’s commitment and orientation toward having an impact. (It’s already suspicious that most researchers in EA have worldviews that play to their strengths or make their own work seem particularly important. To some degree, biases in that area are probably unavoidable. Still, at the very least, we can try to select for people who are capable of putting in a half-decent effort to avoid these biases and get things right.)
Of course, sometimes particular behaviors seem unforgivable to some people but somewhat less bad to others. Therefore, I think it’s really important to be clear and precise about what an accusation involves. (I acknowledge that it can be tricky to give specifics while protecting the anonymity of accusers.) I can imagine circumstances where specific accusations would have significantly negative consequences on net – but not really if they are precise (also in the sense of not omitting important context) and truthful!