Here’s a concrete suggestion: less emphasis on a person’s impact in “interpersonal harm” cases.
Two of the seven rows in the table in the community health team’s post on this topic are about making sure that we can still get the value of the perpetrator’s work.
This seems to me to be the sort of naive consequentialist reasoning that people find so off-putting about EA. When a perpetrator commits a wrongdoing, the question to ask isn’t “do they do work that we value and want more of?”. The questions are “did they commit a wrong? What sort of wrong? What is the appropriate response to that wrongdoing?”.
(And note that my link is to an official statement from the team explicitly dedicated to helping protect the community. Imagine how much more prevalent the attitude of “focus on impact” is in some less cautious corners of the community.)
Looking at the context of that table, it presents a list of difficult tradeoffs where both sides are valuable and the team isn’t sure it has the right balance. This seems pretty correct to me? It is a difficult tradeoff, both sides ARE valuable, and they may not have the right balance.
My view is that this is false. Whether or not someone’s work is useful should play no role in determining the appropriate reaction to predatory behaviour, so there’s just no tradeoff we should be reflecting on here. I don’t think this is a difficult question. I don’t think that (for example) the talent bottleneck is relevant to how EA should respond to predatory behaviour: if people act in a predatory way, that should be acted on even if it makes it harder to find talent. The tradeoff is simple because one side of it should be entirely ignored.
I’m sure that many of the readers of this forum will disagree with me about this. But my view is that the community will never robustly act on predatory behaviour while it continues to treat impact as one of the relevant factors in determining how to respond to such behaviour.
(I also think this is an example of how, despite some people’s protestations, EA does in fact engage in a sort of means-end reasoning that is in violation of common sense morality and that does involve a failure of what many people would see as integrity).
I think it’s important to be clear about this viewpoint, but I worry that in doing so I’ll sound like I’m attacking Neel. So I want to be clear that this is not the case: I don’t know Neel, but I imagine he’s an excellent and lovely human being. I just happen to think he’s wrong on this specific issue, and that the fact that many EAs hold this view has had serious negative effects.
ETA: Even with all of that said, I do agree that the full post from the community health team contains much more detail than I summarised in my brief reference, and I think people should not judge the full contents of the post based on my comment (instead, I would encourage people to read the post itself).
It’s perhaps worth noting that I think there’s a pretty strong consequentialist case against considering impact in these cases. I think doing so has reputational costs, I think it encourages future wrongdoing, and I think it discourages valuable contributions to the community from those who are driven away. (This is just the point that consequentialist EAs are making when they argue against being “naive” consequentialists).
But I will leave someone else to make this case in detail if they wish to, because I think that this is not the point. I personally find it disturbing that I would have to make a case in impact terms in order to encourage robust action against perpetrators, and I don’t feel comfortable doing so in detail.
I think the balance I’d strike here is maybe as follows: we always respect nonintervention requests by victims. That is, if the victim says “I was harmed by X, but I think the consequences of me reporting this should not include consequence Y”, then we avoid intervening in ways that will cause Y. This is good practice generally, because you never want to disincentivize reporting by making it so that reporting has consequences the victim doesn’t want.

Usually the unwanted consequences in question are things like “I’m afraid of backlash if someone tells X that I’m the one who reported them” or “I’m just saying this to help you establish a pattern of bad behavior by X, but I don’t want to be involved in this, so don’t do anything based on my report alone.” But this sort of nonintervention request might also be made by victims whose view is “I think X is doing really impactful work, and I want my report to at most limit their engagement with EA in certain contexts (e.g., situations where they have significant influence over young EAs), not to limit their involvement in EA generally.” In other words, leave impact considerations to the victim’s own choice.
I’m not sure this is the right balance. I wrote it with one specific real example from my own life in mind, and I don’t know how well it generalizes. But it does seem to me that any less victim-friendly position would probably be worse even from a completely consequentialist perspective, because of the likelihood of driving victims away from EA.
because of the likelihood of driving victims away from EA.
And, after a while, also people who aren’t yet victims but know how the community will act (or fail to act) if they become victims, so they just opt out preemptively.
This is a valid consideration. However, one could argue that if we give victims the option to opt out of the specific consequence that might have been crucial in preventing future wrongdoing by the same person or others, then perpetrators will think they can carry on with their behavior, especially if the victim opts the perpetrator out of all serious consequences. It could also be the case that victims who are psychologically affected by what happened to them are not able to make an informed judgment about consequences at that very moment; everyone has their own timeframe for processing the wrong that was done to them.
Hmm, I can see where you’re coming from, but this seems hard to argue in absolutes. There are situations where the evidence is murky as to whether the predatory behaviour actually happened, or where the behaviour could be seen as predatory in one cultural context but not in others. I’m reluctant to say that a factor just does not matter, though it seems reasonable to argue that EAs overweight it.
This will be my last message in this thread, because I find this conversation upsetting every time it happens (and every time it becomes clear that nothing will change). I find it really distressing that a bunch of lovely and caring people can come together and create a community that can be so unfriendly to the victims of assault and harassment.
And I find it upsetting that these lovely and caring people can fall into serious moral failure, in the way that this is a serious moral failure from my perspective on morality (I say this while also accepting that this reflects not evilness but rather a disagreement about morality, such that the lovely, caring people really do continue to be lovely and caring and they simply disagree with me about a substantive question).
To reply to your specific comments, I certainly agree that there is room for nuance: situations can be unclear and there can be clashes of cultural norms. Navigating the moral world is difficult and we certainly need to pay attention to nuances to navigate it well.
Yet as far as I’m concerned, it remains the case that someone’s contributions via their work are irrelevant to assessing how we should respond to their serious wrongdoing. It’s possible to accept the existence of nuance without thinking that all nuances matter. I do not think that this nuance matters.
(I’m happy to stick to discussing serious cases of wrongdoing and simply set aside the more marginal cases. I think it would represent such a huge step forwards if EA could come to robustly act on serious wrongdoing, so I don’t want to get distracted by trying to figure out the appropriate reaction to the less crucial cases.)
I cannot provide an argument for this of the form that Oliver would like, not least because his comment suggests he might prefer an argument that is ultimately consequentialist in nature even if several layers removed, but I think this is fundamentally the wrong approach.
Everyone accepts some moral claims as fundamental. I take it as a fundamental moral claim that when a perpetrator commits a serious wrong against someone it is the nature of the wrong (and perhaps the views of the person wronged, per Jenny’s comment) that determine the appropriate response. I don’t expect that everyone reading this comment will agree with this, and I don’t believe it’s always possible to argue someone into a moral view (I think at some fundamental level, we end up having to accept irreconcilable disagreements, as much as that frustrates the EA urge to be able to use reason to settle all matters).
(At this point, we could push into hypothetical scenarios like, “what if you were literally certain that if we reacted appropriately to the wrongdoing then everyone would be tortured forever?”. Would the consequences still be irrelevant? Perhaps not, but the fact of the matter is that we do not live in a hypothetical world. I will say this much: I think that the nature of the wrongdoing is the vastly dominating factor in determining how to respond to that wrongdoing. In realistic cases, it is powerful enough that we don’t need to reflect on the other considerations that carry less weight in this context.)
I’ve said I don’t expect to convince the consequentialists reading this to accept my view. What’s the point then? Perhaps I simply hope to make clear just how crucial an issue of moral conscience this is for some people. And perhaps I hope that this might at least push EA to consider a compromise that is more responsive to this matter of conscience.
I’m sorry you’ve found this conversation upsetting, and think it’s entirely reasonable to not want to continue it, so I’ll leave things here. I appreciate the openness, and you still being willing to express this opinion despite expecting to find the conversation upsetting!
I think you could try to argue (but you do have to argue) that the harm from this kind of behavior is much more important than the contributions from the same people, especially when the behavior is minor. Or you could try to argue that there is a moral Schelling fence here that suggests some kind of deontological rule we shouldn’t cross, not because we know what happens when we cross it, but because it sure is a pretty universal rule (which, to be clear, I don’t think applies in this case, though there is an interesting argument to be made here). Or you could argue that there is some group of experts on this topic with a good track record that we should defer to, even if we don’t understand their reasoning.
But I do think at the end this is a position that has to be argued against (and I think there are interesting arguments to be made), and I don’t think this comment succeeds at that. I think it contains snippets of considerations, but I don’t like the degree to which it tries to frame its position as obvious, while mostly only hinting at underlying arguments.
Just to be more concrete, what would you say is an example of a behaviour that you think does not warrant action, because “the harm from this kind of behaviour is not much more important than the contributions from the same people”?
And where would you personally draw the line? i.e., what does the most harmful example look like that still does not warrant action, because the harm is not much more important than the contributions?
One downside of having the bad-ness of, say, sexual violence[1] be mitigated by the perpetrator’s perceived impact when deciding the appropriate action (how is the community health team actually measuring this? How good someone’s forum posts are? Whether they work at an EA org? Whether they are “EA leadership”?) is that it plausibly leads to different standards for bad behaviour. By the community health team’s own standards, taking someone’s potential impact into account as a mitigating factor seems like it could increase the risk of harm to members of the community (by not taking sufficient action, with perceived impact as the justification), while being more unfair to people who are accused of wrongdoing. To be clear, I’m basing this on the forum post, not on any non-public information.
Additionally, a common theme about basically every sexual violence scandal that I’ve read about is that there were (often multiple) warnings beforehand that were not taken seriously.
If there is a major sexual violence scandal in EA in the future, it will be pretty damning if warnings and concerns were clearly raised, but the community health team chose not to act because they decided it wasn’t worth the tradeoff against the person’s or people’s impact.
Another point is that people who are considered impactful are likely to be somewhat correlated with people who have gained respect and power in the EA space, have seniority or leadership roles etc. Given the role that abuse of power plays in sexual violence, we should be especially cautious of considerations that might indirectly favour those who have power.
More weakly, even if you hold the view that it is in fact the community health team’s role to “take the talent bottleneck seriously; don’t hamper hiring / projects too much” when responding to, say, a sexual violence allegation, it seems easy to overvalue the bad-ness of immediate action against the person’s impact, and to undervalue the bad-ness of many more people opting not to get involved, or distancing themselves from the EA movement, because they perceive it to be an unsafe place for women, with unreliable ways of holding perpetrators accountable.
That being said, I think the community health team has an incredibly difficult job, and while they play an important role in mediating community norms and dynamics (and thus carry a corresponding amount of responsibility), it’s always easier to make critical comments than to make the difficult decisions they face. I’m grateful they exist, and don’t want my comment to come across as an attack on the community health team or its members!
Thanks for raising this, I think I wasn’t clear enough in the post cited.
To clarify—that line in the table is referring specifically to sharing research, not all kinds of participation in the community. I meant it about things like “should people still be able to post their research on the EA Forum, or receive a grant to do research, if they’ve treated other people badly?” I find that a genuinely hard question. I don’t want to ignore the past or enable more harm. But I also don’t want to suppress content that would be useful to other EAs (and to the world) because of the person who produced it.
I see that as a pretty different question from “Should they attend conferences?” and other things more relevant to their participation in the community side of EA.
1.) Clearly this is better than the alternative where the same considerations are applied to other ways of participating in the community.
2.) My issue isn’t particularly with the community health team, but with a general attitude that I’ve often encountered among EAs in more informal discussions. Sadly, informal discussions are hard to provide concrete evidence of, so I pointed to an example that I take to be less egregious, though still, I think, on the wrong side of things. I am more concerned by the general attitude held by some EAs I’ve spoken to than by two specific lines of a specific post.
3.) People are banned from the forum for being rude in relatively minor ways. Now imagine a hypothetical case where someone is accused of serious wrongdoing, and further is specifically accused of carrying out some elements of that wrongdoing via online social networks. It would seem weird to ban the first person for minor rudeness, but give the second access to a platform that can allow them to build status and communicate with people via just the sort of medium they allegedly used to carry out previous wrongdoing. Yet I think this is a plausible outcome of the current policies on when to ban people and how to react to interpersonal harm.
4.) I agree that it’s a different question; I still don’t think it’s a difficult one. For a start, I think it’s a little odd to conceive of this as “suppressing” content. People can still post content in lots of other places, and indeed other people can share it on the EA forum if they want to. Further, I don’t think you can separate out enabling harm from posting to the forum, given that forum posts can confer status to people and status can help people to commit harm. So I think that the current policy just does enable harm. I think enabling this harm is the wrong call.
5.) I also think we could run the consequentialist case here, pointing to the fact that other people might not contribute to EA because they find the EA attitude to these cases concerning and don’t feel safe or comfortable in the community.
All of that said, I think it’s important to say again, per point 1, that I do agree that the issue is much less concerning when it doesn’t involve real world contact between people, and that I appreciate you taking the time to reply.
On a tangent, I also want to flag that this exemplifies the importance of transparent policies and rationales in orgs relating to the community. Without Julia Wise’s post on her approach (which was effectively secret for a long time before), it would be impossible to have this discussion. I believe publishing that post was a result of community pressure for transparency, and we should continue pressing for that kind of transparency in other areas of EA.
making sure that we can still get the value of perpetrator’s work.
The standard recommendation I’ve always heard is basically in the family of tradeoffs, but says that you never really land on the side of preserving the perpetrator’s contributions once you factor in the victim’s contributions and higher-order effects from networks/feedback loops.
I don’t understand what’s going on here. Sometimes someone is a bit rude and causes a tiny bit of interpersonal harm. Sometimes someone smells bad. Sometimes someone has a slightly bad temper. Of course I care about being able to benefit from the contributions of those people, many great scientists and thinkers in history had problems of this type.
How is it possible to “never land on the side of preserving the perpetrator’s contributions” without specifying the severity of the things going on? Of course there will be many levels of severity where you have to make difficult tradeoffs here, this seems so obvious that I don’t understand what is going on in this thread.
I think the heuristic I mentioned is designed for sexual assault, and I wouldn’t expect it to be the right move for less severe values of interpersonal harm.
Realizing now that I did the very thing that annoys me about these discussions: making statements tuned for severe and obvious cases that have implications for less severe or obvious cases, without being clear about it, leaving the reader to wonder whether they ought to round the less obvious cases up into a more obvious case. Sorry about that.
In context, I definitely read this as about median/modal allegations of harm that are reported to the CEA CH team. I expect them to be substantially more severe than the examples you listed.
The modal thing that gets reported to community health is something like “This person did a thing that made me / my friend kind of uncomfortable, and I’d like you to notice if other people report more problems from them.”
Huh, I actually think a lot of relatively minor pieces of harm get reported to the CEA CH team, where probably nobody involved would want the other party to just be completely excluded from the community, or give no care to their ability to continue contributing.
A lot of the things I talk to the CH team about are things like “this person seemed kind of salesy when I interfaced with them, and I would want someone to keep track of whether other people feel the same, and maybe watch out for some bigger pattern”.
I’m trying to get a model of what you’re saying here:
A lot of the things I talk to the CH team about are things like “this person seemed kind of salesy when I interfaced with them, and I would want someone to keep track of whether other people feel the same, and maybe watch out for some bigger pattern”.
Is the CH team (terrible initials BTW) initiating contact with you, about this low urgency, low danger work? Or are you initiating contact with them?
In either case, it’s not clear what this is saying or how it’s negative or positive.
For example, in a Bayesian model sort of sense, I don’t see how this gives information on the CH team being ineffective or effective, or the EA community being bad or good.
(To be honest, IMO keeping track of these small things seems very favorable. It seems consistent with the CH team being involved in the community, and like it gives depth/competence/context when there is a much more major issue. It also seems like a class of nuanced, quiet, conscientious work that has long-term benefits for everyone but is less visible, compared to other ways of doing this work, like big splashy announcements (as negative examples, think of the dysfunction of institutions in The Wire).)
What I’m trying to get at is that you are one of the most respected people here and have good insights, so if you have a model of how things should improve, or of whether EA institutions are low wattage or high wattage, on the CH team or otherwise, it would be good to hear.
Is the CH team (terrible initials BTW) initiating contact with you, about this low urgency, low danger work? Or are you initiating contact with them?
I have some recurring meetings with Nicole (though we sure have been skipping a lot of them in recent months) where I tend to bring these things up.
In either case, it’s not clear what this is saying or how it’s negative or positive.
Sorry, I am just responding to Linch’s statement that the median/modal piece of harm that gets reported to the CH team is probably quite severe (whereas I think the majority are pretty minor, and one of the primary jobs of the CH team is to figure out how to aggregate lots of weak points of evidence that might point to some kind of large distributed harm).
(To be honest, IMO keeping track of these small things seems very favorable. It seems consistent with the CH team being involved in the community, and like it gives depth/competence/context when there is a much more major issue. It also seems like a class of nuanced, quiet, conscientious work that has long-term benefits for everyone but is less visible, compared to other ways of doing this work, like big splashy announcements (as negative examples, think of the dysfunction of institutions in The Wire).)
Yep, this seems right to me. I am glad the CH team is filling this function. I think there are better ways of going about it than they historically have, and I have some criticisms, but I am overall happy that an institution like this exists (and indeed think that something nearby that could have aggregated more evidence on Sam’s dishonesty could have maybe done something about the FTX situation).
Here’s a concrete suggestion: less emphasis on a person’s impact in “interpersonal harm” cases.
2 of the 7 rows in the table on the community health team post on this topic are about making sure that we can still get the value of perpetrator’s work.
This seems to me to be the sort of naive consequentialist reasoning that people find so offputting about EA. It seems to me that when a perpetrator commits a wrongdoing, the question to ask isn’t “do they do work that we value and want more of”. The questions are “did they commit a wrong? what sort of wrong? what is the appropriate response to that wrongdoing?”.
(And note that my link is to an official statement from the team explicitly dedicated to helping protect the community. Imagine how much more prevalent the attitude of “focus on impact” is in some less cautious corners of the community.)
Looking at the context of that table, that is a list of difficult tradeoffs where both sides are valuable and they’re not sure they have the right balance. This seems pretty correct to me? It is a difficult tradeoff, both sides ARE valuable, and they may not have the right balance.
My view is that this is false. Whether or not someone’s work is useful should play no role in determining the appropriate reaction to predatory behaviour, so there’s just no tradeoff we should be reflecting on here. I don’t think this is a difficult question. I don’t think that (for example) the talent bottleneck is relevant to how EA should respond to predatory behaviour: if people act in a predatory way, that should be acted on even if it makes it harder to find talent. The tradeoff is simple because one side of it should be entirely ignored.
I’m sure that many of the readers of this forum will disagree with me about this. But my view is that the community will never robustly act on predatory behaviour while it continues to treat impact as one of the relevant factors in determining how to respond to such behaviour.
(I also think this is an example of how, despite some people’s protestations, EA does in fact engage in a sort of means-end reasoning that is in violation of common sense morality and that does involve a failure of what many people would see as integrity).
I think it’s important to be clear about this viewpoint, but I do worry that in doing so it will sound like I’m attacking Neel. So I want to be clear that this is not the case; I don’t know Neel but I imagine he’s an excellent and lovely human being. I just happen to think he’s wrong about this specific issue, and I happen to think that the fact that many EAs hold this view has had serious and negative effects.
ETA: Even with all of that said, I do agree that the full post from the community health team contains much more detail than I summarised in my brief reference, and I think people should not judge the full contents of the post based on my comment (instead, I would encourage people to read the post itself).
It’s perhaps worth noting that I think there’s a pretty strong consequentialist case against considering impact in these cases. I think doing so has reputational costs, I think it encourages future wrongdoing, and I think it discourages valuable contributions to the community from those who are driven away. (This is just the point that consequentialist EAs are making when they argue against being “naive” consequentialists).
But I will leave someone else to make this case in detail if they wish to, because I think that this is not the point. I personally find it disturbing that I would have to make a case in impact terms in order to encourage robust action against perpetrators, and I don’t feel comfortable doing so in detail.
I think maybe that the balance I’d strike here is as follows: we always respect nonintervention requests by victims. That is if the victim says “I was harmed by X, but I think the consequences of me reporting this should not include consequence Y” then we avoid intervening in ways that will cause Y. This is a good practice generally, because you never want to disincentivize people from reporting by making it so that them reporting has consequences they don’t want. Usually the sorts of unwanted consequences in question are things like “I’m afraid of backlash if someone tells X that I’m the one who reported them” or “I’m just saying this to help you establish a pattern of bad behavior by X, but I don’t want to be involved in this so don’t do anything about it just based on my report.” But this sort of nonintervention request might also be made by victims whose point of view is “I think X is doing really impactful work, and I want my report to at most limit their engagement with EA in certain contexts (e.g., situations where they have significant influence over young EAs), not to limit their involvement in EA generally.” In other words, leave impact considerations to the victim’s own choice.
I’m not sure this the right balance. I wrote it with one specific real example from my own life in mind, and I don’t know how well it generalizes. But it does seem to me like any less victim-friendly positions than that would probably indeed be worse even from a completely consequentialist perspective, because of the likelihood of driving victims away from EA.
And, after a while, also people who aren’t yet victims but know how the community will act (or fall to act) if they become ones, so they just opt out preemptively.
This is a valid consideration, however, one could argue that if we were to give victims the option to opt out of the specific consequence that might have been crucial in preventing future wrongdoings by the same person or other people, then perpetrators would think they can still carry on with their behavior. Especially if the victim decides to opt the perpetrator out of all serious consequences. It also could be the case that victims that are affected by what happened to them psychologically might not be able to make an informed judgment of consequences at that very moment, as we know everyone has their own time frame of processing the wrongdoing that was done to them.
Hmm, I can see where you’re coming from, but this seems hard to argue in absolutes. There’s situations where it’s unclear and the evidence is murky re whether the predatory behaviour actually happened, or where the behaviour could maybe be seen as predatory in a certain light and cultural context but not in others. I’m reluctant to say that a factor just does not matter, though it seems reasonable to argue that EAs overweight it.
This will be my last message in this thread, because I find this conversation upsetting every time it happens (and every time it becomes clear that nothing will change). I find it really distressing that a bunch of lovely and caring people can come together and create a community that can be so unfriendly to the victims of assault and harassment.
And I find it upsetting that these lovely and caring people can fall into serious moral failure, in the way that this is a serious moral failure from my perspective on morality (I say this while also accepting that this reflects not evilness but rather a disagreement about morality, such that the lovely, caring people really do continue to be lovely and caring and they simply disagree with me about a substantive question).
To reply to your specific comments, I certainly agree that there is room for nuance: situations can be unclear and there can be clashes of cultural norms. Navigating the moral world is difficult and we certainly need to pay attention to nuances to navigate it well.
Yet as far as I’m concerned, it remains the case that someone’s contributions via their work are irrelevant to assessing how we should respond to their serious wrongdoing. It’s possible to accept the existence of nuance without thinking that all nuances matter. I do not think that this nuance matters.
(I’m happy to stick to discussing serious cases of wrongdoing and simply set aside the more marginal cases. I think it would represent such a huge step forwards if EA could come to robustly act on serious wrongdoing, so I don’t want to get distracted by trying to figure out the appropriate reaction to the less crucial cases.)
I cannot provide an argument for this of the form that Oliver would like, not least because his comment suggests he might prefer an argument that is ultimately consequentialist in nature even if at some layers removed, but I think this is the fundamentally wrong approach.
Everyone accepts some moral claims as fundamental. I take it as a fundamental moral claim that when a perpetrator commits a serious wrong against someone it is the nature of the wrong (and perhaps the views of the person wronged, per Jenny’s comment) that determine the appropriate response. I don’t expect that everyone reading this comment will agree with this, and I don’t believe it’s always possible to argue someone into a moral view (I think at some fundamental level, we end up having to accept irreconcilable disagreements, as much as that frustrates the EA urge to be able to use reason to settle all matters).
(At this point, we could push into hypothetical scenarios like, “what if you were literally certain that if we reacted appropriately to the wrongdoing then everyone would be tortured forever?”. Would the consequences still be irrelevant? Perhaps not, but the fact of the matter is that we do not live in a hypothetical world. I will say this much: I think that the nature of the wrongdoing is the vastly dominating factor in determining how to respond to that wrongdoing. In realistic cases, it is powerful enough that we don’t need to reflect on the other considerations that carry less weight in this context.)
I’ve said I don’t expect to convince the consequentialists reading this to accept my view. What’s the point then? Perhaps I simply hope to make clear just how crucial an issue of moral conscience this is for some people. And perhaps I hope that this might at least push EA to consider a compromise that is more responsive to this matter of conscience.
I’m sorry you’ve found this conversation upsetting, and I think it’s entirely reasonable not to want to continue it, so I’ll leave things here. I appreciate the openness, and your willingness to express this opinion despite expecting to find the conversation upsetting!
I think you could try to argue (but you do have to argue) that the harm from this kind of behavior is much more important than the contributions from the same people, especially when the behavior is minor. Or you could try to argue that there is a moral Schelling fence here that suggests some kind of deontological rule that we shouldn’t cross, not because we know what happens when we cross it, but because it sure is a pretty universal rule (which, to be clear, I don’t think applies in this case, though I think there is an interesting argument to be made here). Or you could argue that there is some group of experts on this topic with a good track record whom we should defer to, even if we don’t understand their reasoning.
But I do think at the end this is a position that has to be argued against (and I think there are interesting arguments to be made), and I don’t think this comment succeeds at that. I think it contains snippets of considerations, but I don’t like the degree to which it tries to frame its position as obvious, while mostly only hinting at underlying arguments.
Just to be more concrete, what would you say is an example of a behaviour that you think does not warrant action, because “the harm from this kind of behaviour is not much more important than the contributions from the same people”?
And where would you personally draw the line? I.e., what does the most harmful example look like that still does not warrant action, because the harm is not much more important than the contributions?
While I agree that both sides are valuable, I agree with the anon here—I don’t think these tradeoffs are particularly relevant to a community health team investigating interpersonal harm cases with the goal of “reduc[ing] risk of harm to members of the community while being fair to people who are accused of wrongdoing”.
One downside of having the bad-ness of, say, sexual violence[1] be mitigated by the perpetrator’s perceived impact (how is the community health team actually measuring this? how good someone’s forum posts are? or whether they work at an EA org? or whether they are “EA leadership”?) when considering what the appropriate action should be (if this is happening) is that it plausibly leads to different standards for bad behaviour. By the community health team’s own standards, taking someone’s potential impact into account as a mitigating factor seems like it could increase the risk of harm to members of the community (by not taking sufficient action, with the justification of perceived impact), while being more unfair to people who are accused of wrongdoing. To be clear, I’m basing this off the forum post, not any non-public information.
Additionally, a common theme about basically every sexual violence scandal that I’ve read about is that there were (often multiple) warnings beforehand that were not taken seriously.
If there is a major sexual violence scandal in EA in the future, it will be pretty damning if the warnings and concerns were clearly raised, but the community health team chose not to act because they decided it wasn’t worth the tradeoff against the person/people’s impact.
Another point is that people who are considered impactful are likely to be somewhat correlated with people who have gained respect and power in the EA space, have seniority or leadership roles etc. Given the role that abuse of power plays in sexual violence, we should be especially cautious of considerations that might indirectly favour those who have power.
More weakly, even if you hold the view that it is in fact the community health team’s role to “take the talent bottleneck seriously; don’t hamper hiring / projects too much” when responding to, say, a sexual violence allegation, it seems like it would be easy to overvalue the bad-ness of the immediate action against the person’s impact, and to undervalue the bad-ness of many more people opting not to get involved, or distancing themselves from the EA movement because they perceive it to be an unsafe place for women, with unreliable ways of holding perpetrators accountable.
That being said, I think the community health team has an incredibly difficult job, and while they play an important role in mediating community norms and dynamics (and thus carry a corresponding amount of responsibility), it’s always easier to make comments of a critical nature than to make the difficult decisions they have to make. I’m grateful they exist, and don’t want my comment to come across like an attack on the community health team or its individuals!
(commenting in personal capacity etc)
used as an umbrella term to include things like verbal harassment. See definition here.
Thanks for raising this, I think I wasn’t clear enough in the post cited.
To clarify—that line in the table is referring specifically to sharing research, not all kinds of participation in the community. I meant it about things like “should people still be able to post their research on the EA Forum, or receive a grant to do research, if they’ve treated other people badly?” I find that a genuinely hard question. I don’t want to ignore the past or enable more harm. But I also don’t want to suppress content that would be useful to other EAs (and to the world) because of the person who produced it.
I see that as a pretty different question from “Should they attend conferences?” and other things more relevant to their participation in the community side of EA.
A few brief comments.
1.) Clearly this is better than the alternative where the same considerations are applied to other ways of participating in the community.
2.) My issue isn’t particularly with the community health team, but with a general attitude that I’ve often encountered among EAs in more informal discussions. Sadly, informal discussions are hard to provide concrete evidence of, so I pointed to an example that I take to be less egregious, though I still think on the wrong side of things here. I am more concerned by the general attitude that is held by some EAs I’ve spoken to than two specific lines of a specific post.
3.) People are banned from the forum for being rude in relatively minor ways. And yet let’s imagine a hypothetical case where someone is accused of serious wrongdoing, and is further specifically accused of carrying out some elements of that wrongdoing via online social networks. It would seem weird to ban the first person for minor rudeness, but give the second person access to a platform that can allow them to build status and communicate with people via just the sort of medium that they allegedly used to carry out previous wrongdoing. Yet I think this is a plausible outcome of the current policies on when to ban people and how to react to interpersonal harm.
4.) I agree that it’s a different question; I still don’t think it’s a difficult one. For a start, I think it’s a little odd to conceive of this as “suppressing” content. People can still post content in lots of other places, and indeed other people can share it on the EA forum if they want to. Further, I don’t think you can separate out enabling harm from posting to the forum, given that forum posts can confer status to people and status can help people to commit harm. So I think that the current policy just does enable harm. I think enabling this harm is the wrong call.
5.) I also think we could run the consequentialist case here, pointing to the fact that other people might not contribute to EA because they find the EA attitude to these cases concerning and don’t feel safe or comfortable in the community.
All of that said, I think it’s important to say again, per point 1, that I do agree that the issue is much less concerning when it doesn’t involve real world contact between people, and that I appreciate you taking the time to reply.
I strongly agree with this.
On a tangent, I also want to flag that this exemplifies the importance of transparent policies and rationales in orgs relating to the community. Without Julia Wise’s post on her approach, which was effectively secret for a long time before, it would be impossible to have this discussion. I believe publishing that post was a result of community pressure for transparency, and that we should continue pressing for that kind of transparency in other areas of EA.
The standard recommendation I’ve always heard is basically in the family of tradeoffs, but says that you never really land on the side of preserving the perpetrator’s contributions once you factor in the victim’s contributions and the higher-order effects from networks/feedback loops.
I don’t understand what’s going on here. Sometimes someone is a bit rude and causes a tiny bit of interpersonal harm. Sometimes someone smells bad. Sometimes someone has a slightly bad temper. Of course I care about being able to benefit from the contributions of those people, many great scientists and thinkers in history had problems of this type.
How is it possible to “never land on the side of preserving the perpetrator’s contributions” without specifying the severity of the things going on? Of course there will be many levels of severity where you have to make difficult tradeoffs here, this seems so obvious that I don’t understand what is going on in this thread.
I think the heuristic I mentioned is designed for sexual assault, and I wouldn’t expect it to be the right move for less severe values of interpersonal harm.
Realizing now that I did the very thing that annoys me about these discussions: make statements tuned for severe and obvious cases that have implications about less severe or obvious cases, but not being clear about it, leaving the reader to wonder if they ought to round up the less obvious cases into a more obvious case. Sorry about that.
In context, I definitely read this as about median/modal allegations of harm that are reported to the CEA CH team. I expect them to be substantially more severe than the examples you listed.
The modal thing that gets reported to community health is something like “This person did a thing that made me / my friend kind of uncomfortable, and I’d like you to notice if other people report more problems from them.”
Thanks, this is helpful!
Huh, I actually think a lot of relatively minor pieces of harm get reported to the CEA CH team, where probably nobody involved would want the other party to be completely excluded from the community, or their ability to continue contributing to be disregarded.
A lot of the things I talk to the CH team about are things like “this person seemed kind of salesy when I interfaced with them, and I would want someone to keep track of whether other people feel the same, and maybe watch out for some bigger pattern”.
I’m trying to get a model of what you’re saying here:
Is the CH team (terrible initials, BTW) initiating contact with you about this low-urgency, low-danger work? Or are you initiating contact with them?
In either case, it’s not clear what this is saying or how it’s negative or positive.
For example, in a Bayesian model sort of sense, I don’t see how this gives information on the CH team being ineffective or effective, or the EA community being bad or good.
(To be honest, IMO keeping track of these small things seems very favorable. It seems consistent with the CH team being involved in the community. This seems like it gives depth/competence/context when there is a much more major issue. It also seems like a class of nuanced, quiet, conscientious work that has long-term benefits for everyone but is less visible, compared to other ways of doing this work, like big splashy announcements (as negative examples, think of the dysfunction of institutions in The Wire).)
What I’m trying to get at is that you are one of the most respected people here and have good insights, so if you have a model of how things should improve, or of whether EA institutions are low-wattage or high-wattage, on the CH team or otherwise, it would be good to hear.
I have some recurring meetings with Nicole (though we sure have been skipping a lot of them in recent months) where I tend to bring these things up.
Sorry, I am just responding to Linch’s statement that the median/modal piece of harm that gets reported to the CH team is probably quite severe (whereas I think the majority are pretty minor, and one of the primary jobs of the CH team is to figure out how to aggregate lots of weak points of evidence that might point to some kind of large distributed harm).
Yep, this seems right to me. I am glad the CH team is filling this function. I think there are better ways of going about it than they historically have, and I have some criticisms, but I am overall happy that an institution like this exists (and indeed think that something nearby that could have aggregated more evidence on Sam’s dishonesty could have maybe done something about the FTX situation).