Looking at the context of that table, it is a list of difficult tradeoffs where both sides are valuable and they’re not sure they have the right balance. This seems pretty correct to me? It is a difficult tradeoff, both sides ARE valuable, and they may not have the right balance.
My view is that this is false. Whether or not someone’s work is useful should play no role in determining the appropriate reaction to predatory behaviour, so there’s just no tradeoff we should be reflecting on here. I don’t think this is a difficult question. I don’t think that (for example) the talent bottleneck is relevant to how EA should respond to predatory behaviour: if people act in a predatory way, that should be acted on even if it makes it harder to find talent. The tradeoff is simple because one side of it should be entirely ignored.
I’m sure that many of the readers of this forum will disagree with me about this. But my view is that the community will never robustly act on predatory behaviour while it continues to treat impact as one of the relevant factors in determining how to respond to such behaviour.
(I also think this is an example of how, despite some people’s protestations, EA does in fact engage in a sort of means-end reasoning that is in violation of common sense morality and that does involve a failure of what many people would see as integrity).
I think it’s important to be clear about this viewpoint, but I do worry that in doing so it will sound like I’m attacking Neel. So I want to be clear that this is not the case; I don’t know Neel but I imagine he’s an excellent and lovely human being. I just happen to think he’s wrong about this specific issue, and I happen to think that the fact that many EAs hold this view has had serious and negative effects.
ETA: Even with all of that said, I do agree that the full post from the community health team contains much more detail than I summarised in my brief reference, and I think people should not judge the full contents of the post based on my comment (instead, I would encourage people to read the post itself).
It’s perhaps worth noting that I think there’s a pretty strong consequentialist case against considering impact in these cases. I think doing so has reputational costs, I think it encourages future wrongdoing, and I think it discourages valuable contributions to the community from those who are driven away. (This is just the point that consequentialist EAs are making when they argue against being “naive” consequentialists).
But I will leave someone else to make this case in detail if they wish to, because I think that this is not the point. I personally find it disturbing that I would have to make a case in impact terms in order to encourage robust action against perpetrators, and I don’t feel comfortable doing so in detail.
I think the balance I’d strike here is roughly as follows: we always respect nonintervention requests by victims. That is, if the victim says “I was harmed by X, but I think the consequences of my reporting this should not include consequence Y”, then we avoid intervening in ways that will cause Y. This is good practice generally, because you never want to disincentivize people from reporting by making their reports have consequences they don’t want.

Usually the unwanted consequences in question are things like “I’m afraid of backlash if someone tells X that I’m the one who reported them” or “I’m just saying this to help you establish a pattern of bad behavior by X, but I don’t want to be involved in this, so don’t do anything about it based on my report alone.” But this sort of nonintervention request might also be made by victims whose point of view is “I think X is doing really impactful work, and I want my report to at most limit their engagement with EA in certain contexts (e.g., situations where they have significant influence over young EAs), not to limit their involvement in EA generally.” In other words, leave impact considerations to the victim’s own choice.
I’m not sure this is the right balance. I wrote it with one specific real example from my own life in mind, and I don’t know how well it generalizes. But it does seem to me like any less victim-friendly position than that would probably be worse even from a completely consequentialist perspective, because of the likelihood of driving victims away from EA.
And, after a while, also people who aren’t yet victims but know how the community will act (or fail to act) if they become victims, so they just opt out preemptively.
This is a valid consideration. However, one could argue that if we gave victims the option to opt out of the specific consequence that might have been crucial in preventing future wrongdoing by the same person or by others, then perpetrators would think they can carry on with their behavior, especially if the victim opts the perpetrator out of all serious consequences. It could also be the case that victims who are psychologically affected by what happened to them are not able to make an informed judgment about consequences at that very moment; everyone has their own time frame for processing a wrong that was done to them.
Hmm, I can see where you’re coming from, but this seems hard to argue in absolutes. There are situations where it’s unclear and the evidence is murky as to whether the predatory behaviour actually happened, or where the behaviour could be seen as predatory in a certain light and cultural context but not in others. I’m reluctant to say that a factor just does not matter, though it seems reasonable to argue that EAs overweight it.
This will be my last message in this thread, because I find this conversation upsetting every time it happens (and every time it becomes clear that nothing will change). I find it really distressing that a bunch of lovely and caring people can come together and create a community that can be so unfriendly to the victims of assault and harassment.
And I find it upsetting that these lovely and caring people can fall into serious moral failure, in the way that this is a serious moral failure from my perspective on morality (I say this while also accepting that this reflects not evilness but rather a disagreement about morality, such that the lovely, caring people really do continue to be lovely and caring and they simply disagree with me about a substantive question).
To reply to your specific comments, I certainly agree that there is room for nuance: situations can be unclear and there can be clashes of cultural norms. Navigating the moral world is difficult and we certainly need to pay attention to nuances to navigate it well.
Yet as far as I’m concerned, it remains the case that someone’s contributions via their work are irrelevant to assessing how we should respond to their serious wrongdoing. It’s possible to accept the existence of nuance without thinking that all nuances matter. I do not think that this nuance matters.
(I’m happy to stick to discussing serious cases of wrongdoing and simply set aside the more marginal cases. I think it would represent such a huge step forwards if EA could come to robustly act on serious wrongdoing, so I don’t want to get distracted by trying to figure out the appropriate reaction to the less crucial cases.)
I cannot provide an argument for this of the form that Oliver would like, not least because his comment suggests he might prefer an argument that is ultimately consequentialist in nature, even if at several layers removed, but I think this is fundamentally the wrong approach.
Everyone accepts some moral claims as fundamental. I take it as a fundamental moral claim that when a perpetrator commits a serious wrong against someone, it is the nature of the wrong (and perhaps the views of the person wronged, per Jenny’s comment) that determines the appropriate response. I don’t expect that everyone reading this comment will agree with this, and I don’t believe it’s always possible to argue someone into a moral view (I think at some fundamental level we end up having to accept irreconcilable disagreements, as much as that frustrates the EA urge to be able to use reason to settle all matters).
(At this point, we could push into hypothetical scenarios like, “what if you were literally certain that if we reacted appropriately to the wrongdoing then everyone would be tortured forever?”. Would the consequences still be irrelevant? Perhaps not, but the fact of the matter is that we do not live in a hypothetical world. I will say this much: I think that the nature of the wrongdoing is the vastly dominating factor in determining how to respond to that wrongdoing. In realistic cases, it is powerful enough that we don’t need to reflect on the other considerations that carry less weight in this context.)
I’ve said I don’t expect to convince the consequentialists reading this to accept my view. What’s the point then? Perhaps I simply hope to make clear just how crucial an issue of moral conscience this is for some people. And perhaps I hope that this might at least push EA to consider a compromise that is more responsive to this matter of conscience.
I’m sorry you’ve found this conversation upsetting, and I think it’s entirely reasonable not to want to continue it, so I’ll leave things here. I appreciate the openness, and your willingness to express this opinion despite expecting to find the conversation upsetting!
I think you could try to argue (but you do have to argue) that the harm from this kind of behavior is much more important than the contributions from the same people, especially when the behavior is minor. Or you could try to argue that there is a moral Schelling fence here that suggests some kind of deontological rule we shouldn’t cross, not because we know what happens when we cross it, but because it sure is a pretty universal rule (which, to be clear, I don’t think applies in this case, though I think there is an interesting argument to be made here). Or you could argue that there is some group of experts on this topic with a good track record that we should defer to, even if we don’t understand their reasoning.
But I do think at the end this is a position that has to be argued against (and I think there are interesting arguments to be made), and I don’t think this comment succeeds at that. I think it contains snippets of considerations, but I don’t like the degree to which it tries to frame its position as obvious, while mostly only hinting at underlying arguments.
Just to be more concrete, what would you say is an example of a behaviour that you think does not warrant action, because “the harm from this kind of behaviour is not much more important than the contributions from the same people”?
And where would you personally draw the line? I.e., what does the most harmful example look like that still does not warrant action, because the harm is not much more important than the contributions?
While I agree that both sides are valuable, I agree with the anon here—I don’t think these tradeoffs are particularly relevant to a community health team investigating interpersonal harm cases with the goal of “reduc[ing] risk of harm to members of the community while being fair to people who are accused of wrongdoing”.
One downside of having the bad-ness of, say, sexual violence[1] be mitigated by the perpetrator’s perceived impact (how is the community health team actually measuring this? how good someone’s forum posts are? or whether they work at an EA org? or whether they are “EA leadership”?) when considering what the appropriate action should be (if this is happening) is that it plausibly leads to different standards for bad behaviour. By the community health team’s own standards, taking someone’s potential impact into account as a mitigating factor seems like it could increase the risk of harm to members of the community (by not taking sufficient action, with the justification of perceived impact), while being more unfair to people who are accused of wrongdoing. To be clear, I’m basing this off the forum post, not any non-public information.
Additionally, a common theme about basically every sexual violence scandal that I’ve read about is that there were (often multiple) warnings beforehand that were not taken seriously.
If there is a major sexual violence scandal in EA in the future, it will be pretty damning if the warnings and concerns were clearly raised, but the community health team chose not to act because they decided it wasn’t worth the tradeoff against the person/people’s impact.
Another point is that being considered impactful is likely to be correlated with having gained respect and power in the EA space, holding seniority or leadership roles, etc. Given the role that abuse of power plays in sexual violence, we should be especially cautious of considerations that might indirectly favour those who have power.
More weakly, even if you hold the view that it is in fact the community health team’s role to “take the talent bottleneck seriously; don’t hamper hiring / projects too much” when responding to, say, a sexual violence allegation, it seems like it would be easy to overweight the cost of taking immediate action against the person (relative to their impact), and to underweight the cost of many more people opting not to get involved, or distancing themselves from the EA movement, because they perceive it to be an unsafe place for women with unreliable ways of holding perpetrators accountable.
That being said, I think the community health team has an incredibly difficult job, and while they play an important role in mediating community norms and dynamics (and thus have a corresponding amount of responsibility), it’s always easier to make comments of a critical nature than to make the difficult decisions they have to make. I’m grateful they exist, and don’t want my comment to come across as an attack on the community health team or its individuals!
(commenting in personal capacity etc)
[1] Used as an umbrella term that includes things like verbal harassment. See definition here.