I would also be interested in more clarification about how EA relevant the case studies provided might be, to whatever extent this is possible without breaking confidentiality. For example:
We were pressured to sign non-disclosure agreements or “consent statements” in a manipulative “community process”.
This does not sound like the work of the CEA Community Health team, but it would be an important update if it were, and it would be useful to clarify if it weren't, so people don't jump to the wrong conclusions.
That being said, I think the AI community in the Bay Area is probably small enough that these cases may be personally relevant to individual EAs even if they are not institutionally relevant: it seems plausible that a potential victim who gets into AI work via EA might meet alleged abusers in cases A to K, even if no EA organizations or self-identified EAs are involved.
I can't comment on whether these cases were EA-involved, because I don't know.
As you said, the Silicon Valley AI community is extremely small, which makes this relevant to the EA AI sphere, and AI safety more broadly.