Note that L was the only example in your list that was specifically related to EA. I believe that accusation was false. See here for previous discussion.
I would also be interested in more clarification about how EA-relevant the case studies provided might be, to whatever extent this is possible without breaking confidentiality. For example:
We were pressured to sign non-disclosure agreements or “consent statements” in a manipulative “community process”.
This does not sound like the work of CEA’s Community Health team, but it would be an important update if it was, and it would be useful to clarify if it wasn’t, so people don’t jump to the wrong conclusions.
That being said, I think the AI community in the Bay Area is probably small enough that these cases may be personally relevant to individual EAs even if they are not institutionally relevant: it seems plausible that a potential victim who gets into AI work via EA might meet alleged abusers in cases A to K, even if no EA organizations or self-identified EAs are involved.
The situation with person L was deeply tragic. This comment explains some of the actions taken by CEA’s Community Health team as a result of their reports.
Even if most examples are unrelated to EA, if it’s true that the Silicon Valley AI community has zero accountability for bad behavior, that seems like it should concern us?
EDIT: I discuss a [high uncertainty] alternative hypothesis in this comment.
I think where it relates to EA is our worry about the future of complex life. If transformative superintelligence is developed in a morally bankrupt environment, will that create value-aligned AI?
I can’t comment on whether these cases involved EA, because I don’t know.
As you said, the Silicon Valley AI community is extremely small, which makes this relevant to the EA AI sphere, and AI safety more broadly.
I don’t see anything in the post linked in this comment showing, from legitimate sources, that L’s report was false.
OP stated that L’s accusations were dismissed by the EA community.