Thanks for coming back. Hm, in my mind, if all you are doing is handling immediate reports due to an acute issue (like the acute issue at your church), then yes, a non-EA contractor makes sense. However, if you want things like ongoing data collection and incident collection for ~the rest of time, the data does have to be collected within or near to the company/CEA, close enough that they and the surveyor can work together. It seems bad [and risky] to keep the other company on payroll forever and never actually be the owner of the data about your own community.
Additionally, I don’t trust non-EAs to build a survey that really gives respondents the proper choices to select. I think data-collection infrastructure such as a survey should be designed by someone who understands the “shape” and “many facets” of the EA community very well, so as not to miss things, because it is quite a varied community. In my mind, you need optional questions about work settings, social settings, conference settings, courses, workshops, and more. Each of these requires an understanding of what can go wrong in that particular setting, and throughout you will also want to include answer options for the correlations you are looking for. So I actually think, ironically, that data-collection infrastructure and analysis by non-EAs will have more gaps, and therefore more bias (unintended or intended), than infrastructure designed by EA data analysts and survey experts.
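To make that concrete, here’s a rough sketch (in Python, with entirely made-up setting names and question wordings; not anything CEA or RP has actually planned) of how optional, setting-specific question modules could be organised so respondents only answer about the contexts relevant to them:

```python
# Hypothetical sketch of setting-specific survey modules.
# All setting names and question wordings are illustrative placeholders,
# not a real instrument from CEA or Rethink Priorities.

SETTING_MODULES = {
    "work": [
        "Did the incident involve a manager, funder, or someone with power over your career?",
        "Was the organisation an EA org, an EA-adjacent org, or unrelated to EA?",
    ],
    "social": [
        "Did the incident occur at a group house, party, or other informal gathering?",
    ],
    "conference": [
        "Did the incident occur at an official session, an after-party, or a one-on-one meeting?",
    ],
    "course_or_workshop": [
        "Was the other person a facilitator, an organiser, or a fellow participant?",
    ],
}

def build_questionnaire(selected_settings):
    """Return only the modules a respondent opted into, keeping unselected settings optional."""
    return {s: SETTING_MODULES[s] for s in selected_settings if s in SETTING_MODULES}

# Example: a respondent who only wants to answer about conference and social settings.
print(build_questionnaire(["conference", "social"]))
```

The design choice I’m gesturing at is that each setting gets its own branch covering the failure modes specific to that setting, which is exactly the kind of structure I’d expect someone unfamiliar with the community to get wrong.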
That brings me to the middle option (somewhere between CEA and a non-EA contractor), which is what I understand CEA’s CH Team to be doing based on talks/emails with Catherine: commissioning independent data collection and analysis from Rethink Priorities. RP has a skilled and professional survey arm. It is not part of Effective Ventures (CEA’s parent organisation), so it is still an external investigation and bias should be minimized. If I understand correctly, the CEA/CH team will hand over their current data to RP [whoops, nvm, see Catherine’s comment below], and RP will build and promote a survey (and possibly other infrastructure for data collection), and finally do their own all-encompassing data analysis without the CH Team involved [possibly, but not decided yet]. That’s my rough understanding as of a conversation last month anyway.
I do find the question of how the data will be handled to be a bit tangential to this post, and I encourage people to comment there if concerned, though I’d actually just counsel patience instead. This is a very important problem to the Community Health Team, and I hope this separation (CHT/RP) is enough for people. Personally, the only bias I’d expect Rethink Priorities [and the CH Team] to have would be to work extra hard, because they’d care a lot about solving the problem as best they can. EAs know that “as best you can” requires the naked, unwarped truth, as close as you can get, so I don’t expect RP to be biased against finding the truth at all.
Now I find myself considering, “Well, what if RP isn’t separate enough for people, and they want a non-EA investigator, despite the risk that non-EAs won’t understand the culture well enough to investigate cultural problems?” And idk, maybe people will feel that way. But then I feel incredible concern and confusion: I would honestly wonder if there is any hope of building resilient trust between EAs and EA institutions at all. If some EA readers don’t trust other skilled EAs to try really hard (and competently) to find the truth and good solutions in our own community, idk what to say. It’s hard to imagine myself staying in EA if I thought that way. Hopefully no readers here think that; hopefully readers think the RP separation is enough, as I do. But idk, I’m just making my thoughts known.
Thanks Ivy and Jason for your thoughts on internal and external investigations of problems of sexual misconduct in EA.
There are a few different investigation-type things going on at the moment, and some of them aren’t fully scoped or planned, so it is a bit confusing. To clarify, this is where we are at right now:
Catherine, Anu and Lukasz from the Community Health team are investigating the experiences of women and gender minorities in EA. This includes:
Analysing existing data sources (in progress: Rethink Priorities has kindly given us some as-yet unpublished data from the 2022 Survey to help with this step)
We are considering gathering and analysing more data about the experiences of women and gender minorities in EA, and have talked with Rethink Priorities about whether and how they could help. Nothing has been decided yet. To clarify a statement in Ivy’s comment though, we’re not planning to hand over any information we have (e.g. survey data from EAG(x)s or information about sexual misconduct cases raised to our team) to Rethink Priorities as part of this process.
The EV board has commissioned an external investigation by an independent law firm into Owen’s behaviour and the Community Health team’s response.
The Community Health team are doing our own internal review into our handling of the complaints about Owen and our overall processes for dealing with complaints and concerns. More information about this here.
Any competent outside firm would gather input from stakeholders before releasing a survey. But I hear the broader concern, and note that some sort of internal-external hybrid is possible. The minimal level of outside involvement, to me, would have the outside organization serving as a data guardian, data pre-processor, and auditor of sorts. This is related to the two reasons I think outside involvement is important: external credibility and respondent assurance.
As for external credibility, I think media reports like this have the capacity to do significant harm to EA’s objectives. Longtermist EA remains, on the whole, more talent-constrained and influence-constrained than funding-constrained. The adverse effect on talent joining EA could be considerable. Social influence is also underrated; for example, technically solving AI safety might not actually accomplish much without the ability to socially pressure corporations to adopt effective (but profit-reducing) safety methods, or to convince governments to compel them to do so.
When the next article comes out down the road, here’s what I think EA would be best served by being able to say if possible:
(A) According to a study overseen by a respected independent investigator, the EA community’s rate of sexual misconduct is no greater than the base rate.
(B) We have best-in-class systems in place for preventing sexual misconduct and supporting survivors, designed in connection with outside experts. We recognize that sexual misconduct does occur, and we have robust systems for responding to reports and taking the steps we can to protect the community. There is independent oversight over the response system.
(C) Unfortunately, there isn’t that much we can do about problematic individuals who run in EA-adjacent circles but are unaffiliated with institutional EA.
(A) isn’t externally credible without some independent organization vouching for the analysis in some fashion. In my view, (B) requires at least some degree of external oversight to be externally credible after the Owen situation, but that’s another story. Interestingly, I think a lot of the potential responses are appropriate either as defensive measures under the “this is overblown reporting by hostile media outlets” hypothesis or as remedies under the “there is a significant problem here” hypothesis. I’d like to see at least funding and policy commitments on some of those initiatives in the near term, which would reduce the time pressure on other initiatives for which there is a good chance that further data gathering would substantially change the desirability, scope, layout, etc.
I think one has to balance the goal of external credibility against other goals. But moving the research to (say) RP as opposed to CEA wouldn’t move the external-credibility needle in any appreciable fashion.
The other element here is respondent assurance. Some respondents, especially those no longer associated with EA, may be more comfortable giving responses if the initial data collection itself and any necessary de-identification is done by an outside organization. (It’s plausible to me that the combination of responses in a raw survey response could be uniquely identifying.)
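To make the re-identification worry concrete, here’s a minimal sketch (assuming pandas, with invented column names and data) of the kind of check an outside data guardian could run before passing anything along, flagging answer combinations that single out one respondent:

```python
import pandas as pd

# Invented example data; the columns and values are purely illustrative.
responses = pd.DataFrame({
    "city":        ["Oxford", "Oxford", "Berkeley", "Berkeley", "Boston"],
    "cause_area":  ["biosecurity", "animal welfare", "AI safety", "AI safety", "global health"],
    "years_in_ea": [1, 4, 2, 2, 7],
})

QUASI_IDENTIFIERS = ["city", "cause_area", "years_in_ea"]

# k-anonymity-style check: how many respondents share each combination of quasi-identifiers?
group_sizes = responses.groupby(QUASI_IDENTIFIERS).size()

# Any combination held by a single person is potentially identifying,
# even though no individual column identifies anyone on its own.
unique_combinations = group_sizes[group_sizes == 1]
print(f"{len(unique_combinations)} of {len(group_sizes)} answer combinations belong to exactly one respondent")
```

The point is only that de-identification has to consider combinations of fields rather than individual fields, which is why I’d want that step handled by the outside organization rather than done informally.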
Ideally, you would want to maximize the number of survivors who would be willing to confidentially name the person who committed misconduct. This would allow the outside organization to do a few things that would address methodological concerns with the Time article. First, it could identify perpetrators who had committed misconduct against multiple survivors, avoiding the incorrect impression that perpetrators were more numerous than they were. Second, it could use pre-defined criteria to determine whether the perpetrator was actually an EA, again addressing one of the issues with the Time article. Otherwise, you end up with a numerator covering all instances in which someone reports misconduct by someone they identified as an EA . . . but a denominator developed using narrower criteria, leading to an inflated figure. It would likely be legally safer for CEA to turn over its event-ban list to the outside organization under an NDA for very limited purposes than it would be to turn it over to RP. That would help address another criticism of the Time article, that it failed to reflect CEA’s response to various incidents.
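As a toy illustration of that numerator/denominator mismatch (all numbers invented):

```python
# Invented numbers, purely to illustrate the mismatch described above.
reports_naming_anyone_called_ea = 30      # broad numerator: anyone the respondent identified as an EA
reports_meeting_strict_criteria = 18      # numerator using the same strict criteria as the denominator
people_meeting_strict_ea_criteria = 2000  # narrow denominator: strictly defined EA community members

inflated_rate = reports_naming_anyone_called_ea / people_meeting_strict_ea_criteria
consistent_rate = reports_meeting_strict_criteria / people_meeting_strict_ea_criteria

print(f"Mismatched definitions: {inflated_rate:.1%}")   # 1.5%
print(f"Consistent definitions: {consistent_rate:.1%}")  # 0.9%
```

The broad numerator and narrow denominator answer different questions, so the resulting rate overstates the misconduct attributable to the strictly defined group.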
Contingent on budget and maybe early data gathering, I would consider polling men too about things like attitudes associated with rape culture. Surveying or focus-grouping people about deviant beliefs and behaviors (I’m using “deviant” here as sociologists do), not to mention their own harassment or misconduct, is extremely challenging to begin with. You need an independent investigator with ironclad promises of confidentiality to have a chance at that kind of research. But then again, it’s been almost 20 years since my somewhat limited graduate training in social science research methods, so I could be wrong on this.