I also think many commenters are missing a likely iceberg effect here. The base rate of survivors reporting sexual assault to any kind of authority or external watchdog is low. Thus, an assumption that the journalists at Time and Bloomberg identified all, most, or even a sizable fraction of survivors is not warranted on available information.
We would expect the journalists to significantly underidentify potential cases because:
Some survivors choose to tell no one, only professional supporters like therapists, or only people they can trust to keep the info confidential. Journalists will almost never find these survivors even with tons of resources.
Some survivors could probably be identified by a more extensive journalistic investigation, but journalism isn’t a cash cow anymore. The news org has to balance the value of additional investigation that it internalizes against the cost of a deeper investigation. (This also explains why the stories in news articles are likely skewed much more heavily toward already-public stories than the true mix of all stories is.)
There are also many reasons a survivor known to a journalist may decide not to agree to be a source, like:
Deciding it is best for their mental well-being not to reopen past trauma by being interviewed about it;
Not wanting the story of what happened to them broadcast in public, even anonymously, and possibly dissected on this Forum among other places;
Feeling that sharing their story would harm EA’s object-level work, and deciding not to do so on that basis;
Concern that their anonymity could be unmasked;
Concern that the abuser could recognize them and retaliate, even if the general public can’t;
Other reasons.
Thus, my prediction of the actual scope of a problem vs. how many people have come forward is something vaguely like an S curve.
An analogous inference from my field of work would be dealing with people caught drunk/drink driving. A very low percentage of episodes come to the authorities’ attention. On the first arrest, it’s plausible that the driver made an isolated poor choice and has only driven drunk once or a few times. On the second arrest, it’s rather unlikely that the problem is isolated, but it’s plausible that it hasn’t happened several dozen times. On the third arrest . . . absent a reason to think the base rate of detection is way off for this person, there is an extremely high probability the person is drunk/drink driving an awful lot.
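To put rough numbers on that intuition, here is a minimal sketch, assuming a purely illustrative per-episode detection rate of 1 in 100 (not a real statistic); it shows how three arrests are far more consistent with hundreds of episodes than with a handful.

```python
from math import comb

def p_at_least_k_detections(n_episodes: int, p_detect: float, k: int = 3) -> float:
    """Probability of being caught at least k times across n independent episodes,
    each detected with probability p_detect (a simple binomial model)."""
    p_fewer_than_k = sum(
        comb(n_episodes, i) * p_detect**i * (1 - p_detect) ** (n_episodes - i)
        for i in range(k)
    )
    return 1 - p_fewer_than_k

P_DETECT = 0.01  # hypothetical: 1 in 100 episodes comes to the authorities' attention

for n in (3, 10, 50, 200, 500):
    print(f"{n:>3} episodes -> P(3+ arrests) = {p_at_least_k_detections(n, P_DETECT):.4f}")
```

Under those made-up numbers, three arrests are essentially impossible if there were only a handful of episodes, and only become likely once the episode count runs into the hundreds; that is the shape of the inference I’m making above.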
So based on the number of stories journalists found in which survivors were willing to speak to the journalists, what is the true number of stories and perpetrators? I don’t have a good estimate, in part because I don’t know how many of the stories in the news articles involve the same perpetrators.
I think good answers could be obtained by professional, independent, neutral researchers . . . but that won’t be quick and won’t be cheap. So someone would have to be willing to pay and pre-commit to publishing certain types of de-identified data.
[Edit: If you want a visual analogy about discovery, but one that doesn’t overweight any one perspective, might I suggest the parable of the blind men and the elephant? https://en.wikipedia.org/wiki/Blind_men_and_an_elephant ]
First of all, it’s a bit patronizing that you imply that people who aren’t updating and hand-wringing over the Bloomberg piece haven’t considered the iceberg effect and uncounted victims. The iceberg effect has been mentioned many times in previous discussions, and to any of us who care about sexual misconduct it was already an obvious possibility.
Second, the opinions of those of us whose experiences in the EA community have been no worse than anywhere else (in fact, some of us think it is better than other places!) also matter. Frankly, I’m tired of current positive reports from women being discounted while salacious reports (even if very old) get all the publicity. So it’s a bit of a tangent, but I’ll say it here: I’m a woman, I enjoy the EA community, and I support how gender-related experiences are handled when they are reported. [I’ve been all the way down my side of the iceberg, and I have not experienced anything in EA that implies things are worse here than in other “communities”. I say this not to discredit the reports women do put forward, but to balance the narrative.]
Third, CEA is moving forward on getting data: https://forum.effectivealtruism.org/posts/mEkRrDweNSdNdrmvx/plans-for-investigating-and-improving-the-experience-of
Next, please, people, try to keep other hypotheses in mind and stop jumping the gun about EA until that community-wide data comes out. I never wanted to post this and risk being mistakenly seen as minimizing victims’ experiences, but now that the iceberg effect is being discussed, some of you need to read this piece as the other possible side of the coin (the opposite hypothesis) before you draw conclusions about EA or even rationality. Read here: https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/
While we wait for better community-wide data over the coming months, please keep the following in mind:
(1) The reports in the Bloomberg piece are from years ago.
(2) They have much more to do with the rationality community than with EA (they are different! If you remember one line from this comment, remember that!!).
(3) Just because they are new to you doesn’t mean they are new to the rest of us. Relatedly, hearing about these reports only now doesn’t mean EAs don’t care, don’t want to fix issues, or aren’t trying hard to make out the base of the iceberg 🧐.
(4) The journalist did not do a good job of providing details the community should care about, such as how these cases have already been handled, some over five years ago.
(5) Relatedly, it was not the journalist’s goal to help the EA community decide whether it is safe. So don’t read the piece as if the journalist had you in mind and provided everything someone in your position, with your goals of engaging with the EA community, might need. You will have to decide safety for yourself, but realize you were not the Bloomberg journalist’s intended audience. Their goal was to inform the non-rationalist public about parts of rationalist history. You have to fill in the gaps for yourself, as an EA, and be diligent in ways the journalist was not.
General heuristics about reality, like the “iceberg effect,” are fine to mention, but they are also conjecture. I hope they are not enough to fully convince anyone that the EA community has a problem right now, today. Especially because, again, the Bloomberg piece is not even really about EA!
I needed to walk away from this thread due to some unrelated stressful drama at work, which seems to have resolved earlier this week. So I took today off to recover from it. :) I wanted to return to this in part to point out what I think are some potential cruxes, since I expect some of the same cruxes will continue to come up in further discussions of these topics down the road.
1. I think we may have different assumptions or beliefs about the credibility of internal data-gathering versus independent data-gathering. Although the review into handling of the Owen situation is being handled by an outside firm, I don’t believe the broader inquiry you linked is.
I generally don’t update significantly on internal reporting by an organization which has an incentive to paint a rosy picture of things. That isn’t anti-CEA animus; I feel the same way about religious groups, professional sports leagues, and any number of other organizations/movements.
In contrast, an outside professional firm would bring much more credibility to assessing the situation. If you want to get as close to ground truth as possible, you don’t want someone with an incentive to sell more newspapers or someone hostile to EA, but you also don’t want those researching and writing the report to be favorably inclined toward EA either. If the truth is the goal, those involved shouldn’t be influenced, even unconsciously, by the potential effect of the report on EA. This counts double after the situation with Owen wasn’t managed well.
Conditional on the news articles being inaccurate and/or overstated, an internal review is a much weaker shield with which to defend EA against misrepresentations in the public sphere because the public has to decide how much to trust inside researchers/writers. An outside firm also allows people to come forward who do not want to reveal their identities to any EA organization, and brings specialized expertise in data collection on sensitive topics that is unlikely to be available in-house.
As I see it, the standard practice in situations like this is to bring in a professional, independent, and neutral third party to figure out what is going on. For example, when there were allegations of sexual misconduct in the Antarctic research community, the responsible agencies brought in an independent firm to conduct surveys, do interviews, and the like. The report is here.
Likewise, one of the churches in the group of 15-20 churches I attend discovered a sexual predator in its midst. Everyone who attended any of the 15-20 churches was given the contact information for an independent investigative firm and urged to share any information or concerns about other possible misconduct anywhere in the group. The investigative firm promised that no personally-identifiable information would be released to the church group without the reporter’s permission (although declining permission would sharply limit the action the church could take against any wrongdoer). The group committed, in advance, to releasing the independent investigative report with redactions only to protect the identities of survivors and witnesses. Those steps built credibility with me that the group of churches was taking this seriously and that the public report would be a full and accurate reflection on what the investigators found.
2. Based on crux 1, I suspect people may be trying to answer different questions based on this article. If one expects to significantly update on the CEA data gathering, a main question is whether there is enough information to warrant taking significant actions now, on incomplete information, rather than waiting for information that would assist in making a more accurate decision. If one doesn’t expect to significantly update on that data gathering, a main question is whether there is enough information to warrant pursuing independent information gathering. The quantum of evidence needed seems significantly higher for the first question than for the second. (Either formulation is consistent with taking actions now that should be undertaken no matter what additional data comes in, or actions where the EV is much greater than the costs.)
Thanks for coming back. Hm, in my mind, if all you are doing is handling immediate reports due to an acute issue (like the acute issue at your church), then yes, a non-EA contractor makes sense. However, if you want things like ongoing data collection and incident tracking for ~the rest of time, the data does have to be collected within or near to the company/CEA, close enough that they and the surveyor can work together.
It seems bad [and risky] to keep the other company on payroll forever and never actually be the owner of the data about your own community.

Additionally, I don’t trust non-EAs to build a survey that really gives respondents the proper choices to select. I think data-collection infrastructure such as a survey should be designed by someone who understands the “shape” and “many facets” of the EA community very well, so as not to miss things, because it is quite a varied community. In my mind, you need optional questions about work settings, social settings, conference settings, courses, workshops, and more. Each of these requires an understanding of what can go wrong in that particular setting, and throughout you will also want to include the correlations you are looking for that people can select. So I actually think, ironically, that data-collection infrastructure and analysis by non-EAs will have more gaps, and therefore more bias (unintended or intended), than infrastructure designed by EA data analysts and survey experts.
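Just to illustrate the kind of setting-specific branching I have in mind (a rough sketch only; the module names and question wording below are mine, not anything CEA or RP has planned):

```python
# Hypothetical setting-specific question modules; a respondent only sees the modules
# for the settings they select, so no one wades through irrelevant questions.
SETTING_MODULES = {
    "workplace": [
        "Did the situation involve someone with hiring or funding power over you?",
        "Was it reported to the organization, and if so, what happened?",
    ],
    "conference": [
        "Did it occur at the official event, an afterparty, or shared housing?",
        "Did you know how to reach the event's community contact person?",
    ],
    "social / group house": [
        "Was it connected to a group house or other co-living arrangement?",
    ],
    "course / workshop": [
        "Did it involve a facilitator, instructor, or organizer?",
    ],
}

def questions_for(selected_settings: list[str]) -> list[str]:
    """Assemble only the modules relevant to the settings a respondent selects."""
    return [q for s in selected_settings for q in SETTING_MODULES.get(s, [])]
```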
That brings me to the middle option (somewhere between CEA and a non-EA contractor), which is what I understand CEA’s CH Team to be doing, based on talks/emails with Catherine: commissioning independent data collection and analysis from Rethink Priorities. RP has a skilled, professional survey arm. It is not part of Effective Ventures (CEA’s parent organisation), so it is still an external investigation and bias should be minimized. If I understand correctly, the CEA/CH team will hand over their current data to RP [whoops, nvm, see Catherine’s comment below], and RP will build and promote a survey (and possibly other infrastructure for data collection), and finally do their own all-encompassing data analysis without the CH Team involved [possibly, but not decided yet]. That’s my rough understanding as of a conversation last month, anyway.

I do find the question of how the data will be handled to be a bit tangential to this post, and I encourage people to comment there if concerned. Though I’d actually just urge patience instead. This is a very important problem to the Community Health Team, and I hope this separation (CHT/RP) is enough for people. Personally, the only bias I’d expect Rethink Priorities [and the CH Team] to have would be to work extra hard, because they’d care a lot about solving the problem as best they can. EAs know that “as best they can” requires the naked, unwarped truth, as close to it as you can get, so I don’t expect RP to be biased against finding the truth at all.
Now I find myself considering, “Well, what if RP isn’t separate enough for people, and they want a non-EA investigator, despite the risk that non-EAs won’t understand the culture well enough to investigate cultural problems?” And idk, maybe people will feel that way. But then I feel incredible concern and confusion: I would honestly wonder whether there is any hope of building resilient trust between EAs and EA institutions at all. If some EA readers don’t trust other skilled EAs to try really hard (and competently) to find the truth and good solutions in our own community, idk what to say. It’s hard to imagine myself staying in EA if I thought that way. Hopefully no readers here do think that, and hopefully readers think the RP separation is enough, as I do, but idk, just making my thoughts known.
Thanks Ivy and Jason for your thoughts on internal and external investigations of problems of sexual misconduct in EA.
There are a few different investigation-type things going on at the moment, and some of them aren’t fully scoped or planned yet, so it is a bit confusing. To clarify, this is where we are at right now:
Catherine, Anu and Lukasz from the Community Health team are investigating the experiences of women and gender minorities in EA.
Analysing existing data sources (in progress—Rethink Priorities has kindly given us some (as yet) unpublished data from the 2022 Survey to help with this step)
We are considering gathering and analysing more data about the experiences of women and gender minorities in EA, and have talked with Rethink Priorities about whether and how they could help. Nothing has been decided yet. To clarify a statement in Ivy’s comment though, we’re not planning to hand over any information we have (e.g. survey data from EAG(x)s or information about sexual misconduct cases raised to our team) to Rethink Priorities as part of this process.
The EV board has commissioned an external investigation by an independent law firm into Owen’s behaviour and the Community Health team’s response.
The Community Health team are doing our own internal review into our handling of the complaints about Owen and our overall processes for dealing with complaints and concerns. More information about this here.
Any competent outside firm would gather input from stakeholders before releasing a survey. But I hear the broader concern, and note that some sort of internal-external hybrid is possible. The minimal level of outside involvement, to me, would involve serving as a data guardian, data pre-processor, and auditor-of-sorts. This is related to the two reasons I think outside involvement is important: external credibility, and respondent assurance.
As far as external credibility, I think media reports like this have the capacity to do significant harm to EA’s objectives. Longtermist EA remains, on the whole, more talent-constrained and influence-constrained than funding-constrained. The adverse effect on talent joining EA could be considerable. Social influence is underrated; for example, technically solving AI safety might not actually accomplish much without the ability to socially pressure corporations to adopt effective (but profit-reducing) safety methods or to convince governments to compel them to do so.
When the next article comes out down the road, here’s what I think EA would be best served by being able to say if possible:
(A) According to a study overseen by a respected independent investigator, the EA community’s rate of sexual misconduct is at most no greater than the base rate.
(B) We have best-in-class systems in place for preventing sexual misconduct and supporting survivors, designed in connection with outside experts. We recognize that sexual misconduct does occur, and we have robust systems for responding to reports and taking the steps we can to protect the community. There is independent oversight over the response system.
(C) Unfortunately, there isn’t that much we can do about problematic individuals who run in EA-adjacent circles but are unaffiliated with institutional EA.
(A) isn’t externally credible without some independent organization vouching for the analysis in some fashion. In my view, (B) requires at least some degree of external oversight to be externally credible after the Owen situation, but that’s another story. Interestingly, I think a lot of the potential responses are appropriate either as defensive measures under the “this is overblown reporting by hostile media outlets” hypothesis or under the “there is a significant problem here” hypothesis. I’d like to see at least funding and policy commitments on some of those initiatives in the near term, which would reduce the time pressure on other initiatives for which there is a good chance that further data gathering would substantially change the desirability, scope, layout, etc.
I think one has to balance the goal of external credibility against other goals. But moving the research to (say) RP as opposed to CEA wouldn’t move the external-credibility needle in any appreciable fashion.
The other element here is respondent assurance. Some respondents, especially those no longer associated with EA, may be more comfortable giving responses if the initial data collection itself and any necessary de-identification is done by an outside organization. (It’s plausible to me that the combination of responses in a raw survey response could be uniquely identifying.)
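As a concrete sketch of the kind of de-identification screen an outside data guardian could run (the column names and the threshold k = 5 are hypothetical, not anything that has been proposed): flag any combination of quasi-identifiers shared by fewer than k respondents, since that is where re-identification risk concentrates.

```python
import pandas as pd

# Hypothetical quasi-identifier columns; the real list would depend on the survey design.
QUASI_IDENTIFIERS = ["year_joined", "city", "primary_cause_area", "gender"]
K = 5  # minimum group size before a combination is treated as safe to release

def flag_risky_rows(responses: pd.DataFrame, k: int = K) -> pd.DataFrame:
    """Return responses whose quasi-identifier combination is shared by fewer than
    k respondents (a basic k-anonymity screen run before any data is shared)."""
    group_sizes = responses.groupby(QUASI_IDENTIFIERS)[QUASI_IDENTIFIERS[0]].transform("size")
    return responses[group_sizes < k]

# Usage sketch: rows flagged here would be suppressed or generalized
# (e.g. city -> region) before any dataset went back to an EA organization.
# risky = flag_risky_rows(pd.read_csv("survey_responses.csv"))
```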
Ideally, you would want to maximize the number of survivors who would be willing to confidentially name the person who committed misconduct. This would allow the outside organization to do a few things that would address methodological concerns in the Time article. First, it could identify perpetrators who had committed misconduct against multiple survivors, avoiding the incorrect impression that perpetrators were more numerous than they actually were. Second, it could use pre-defined criteria to determine whether the perpetrator was actually an EA, again addressing one of the issues with the Time article. Otherwise, you end up with a numerator covering all instances in which someone reports misconduct by someone they identified as an EA . . . but a denominator built with narrower criteria, leading to an inflated figure. It would likely be legally safer for CEA to turn over its event-ban list to the outside organization under an NDA for very limited purposes than it would be to turn it over to RP. That would help address another criticism of the Time article: that it failed to address CEA’s response to various incidents.
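A toy calculation, with numbers invented purely to show the mechanism rather than to estimate anything real, makes the mismatch problem concrete:

```python
# All figures below are invented solely to illustrate how mismatched criteria inflate a rate.
reports_broad = 30      # reports naming anyone the reporter considered "an EA"
reports_narrow = 12     # subset whose subject meets a pre-defined EA criterion

population_broad = 20_000   # everyone loosely in EA-adjacent circles
population_narrow = 5_000   # people meeting the same pre-defined criterion

inflated = reports_broad / population_narrow      # broad numerator over narrow denominator
consistent_broad = reports_broad / population_broad
consistent_narrow = reports_narrow / population_narrow

print(f"mismatched criteria: {inflated:.3%}")          # 0.600%
print(f"consistent (broad):  {consistent_broad:.3%}")  # 0.150%
print(f"consistent (narrow): {consistent_narrow:.3%}") # 0.240%
```

The absolute numbers are meaningless; the point is that applying either definition consistently gives a materially lower figure than mixing them.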
Contingent on budget and maybe early data gathering, I would consider polling men too about things like attitudes associated with rape culture. Surveying or focus-grouping people about deviant beliefs and behaviors (I’m using “deviant” here as sociologists do), not to mention their own harassment or misconduct, is extremely challenging to begin with. You need an independent investigator with ironclad promises of confidentiality to have a chance at that kind of research. But then again, it’s been almost 20 years since my somewhat limited graduate training in social science research methods, so I could be wrong on this.
I realized I missed the bit where you talk about how we might not need such intense data to respond now. Yes, I agree with that. I personally expect that most community builders/leaders are already brainstorming ideas, and even implementing them, to make their spaces better for women. I also expect that most EA men will be much more careful moving forward to avoid saying or doing things which can cause discomfort for women. We will see what comes of it. Actually I’m working on a piece about actions individuals can take now… maybe I will DM it to ya with no pressure at all o.o