I think collecting data is a great idea, and I’m really glad this is happening. Thank you for doing this! Because one of your goals is to “better [understand] the experiences of women and gender minorities in the EA community,” I wanted to relay one reaction I had to the Community Health Team’s website.
I found some of the language off-putting because it seems to suggest that instances of (e.g.) sexual misconduct will be assessed primarily in terms of their impact on EA, rather than on the people involved. Here’s an example:
“Our goal is to help the EA community and related communities have more positive impact, because we want a radically better world. A healthy community is a means to that end.”
My basic reaction is: it is important to prevent sexual harassment (etc) because harassment is bad for the people who experience it, regardless of whether it affects the EA community’s ability to have a positive impact.
This language is potentially alienating in and of itself, but also risks contributing to biased reporting by suggesting that the Community Health Team’s response to the same kind of behavior might depend, for instance, on the perceived importance to EA of the parties involved. People are often already reluctant to disclose bad experiences, and I worry that framing the Community Health Team’s work in this way will compound this, particularly in cases where accusations are being made against more established members of the community.
I read this with the knowledge that “we don’t do smartass trolley problem calculations when it comes to shit like this, it never helps” is something reasonably well ingrained in the community, but this might be a good moment to make this clear to people who may be newer.
That this is reasonably well-ingrained in the community is less clear to me, especially post-FTX. If the Community Health Team does see their goal as simply “support the community by supporting community members,” why not just plainly state that?
I’d actually love the Community Health Team to clarify:
1. Holding fixed the facts of a case, would the Community Health Team endorse a policy of considering the value of the accused/their work to EA when deciding how forcefully to respond? For example, if someone did something bad at an EAG, would “how valuable is this person’s work to the community?” be considered when deciding whether to ban them from future EAGs?
2. If the Community Health Team does endorse (1), how much weight does the “value to the community” criterion get relative to other criteria in determining a response?
3. If the Community Health Team does not endorse (1), are there any policies or procedures on the books to prevent (1) from happening?
This is especially important to get clarity on, since most people’s priors about how a community or community health team makes these decisions are based on their experiences in other communities they are part of, such as their universities, workplaces, and social groups. If the Community Health Team’s values or weights in this area differ from those of non-EA communities, it is absolutely essential for people to know this.
I would go so far as to say that, depending on the difference in values and in approaches to sexual harassment (etc.) policy, not offering clarity here could be considered deceptive, because it prevents people from making their own decisions based on how they value their personal safety and well-being.
I appreciate your attention to the language here. Having personal experience of not being believed or supported (outside of EA), I know how challenging it can be to try to keep going, let alone consider relative impact. I was quick to endorse the spirit of the overall message (which was, at least in part, informed by my knowledge of those involved) and should have noted my own reservations with some of the language.
I agree that language is very off-putting. A healthy community should not be a means to an end.
Suppose, hypothetically, that every individual EA would be just as effective, do just as much good, without an EA community as with one. In that case, how many resources should CEA and other EA orgs devote to community building? My answer is exactly 0. That implies that the EA community is a means to an end, the end of making EAs more effective.
That said, I wouldn’t necessarily generalize to other communities. And I agree that assessing a particular case of alleged wrongdoing should not depend on the perceived value of the accused’s contributions to EA causes, and I do not read CEA’s language as implying otherwise.
I agree that meta work as a whole can only be justified from an EA framework on consequentialist grounds—any other conclusion would result in partiality, holding the interests of EAs as more weighty than the interests of others.
However, I would argue that certain non-consequentialist moral duties come into play conditioned on certain choices. For example, if CEA decides to hold conferences, that creates a duty to take reasonable steps to prevent and address harassment and other misconduct at the conference. If an EA organization chooses to give someone power, and the person uses that power to further harassment (or to retaliate against a survivor), then the EA organization has a duty to take appropriate action.
Likewise, I don’t have a specific moral duty to dogs currently sitting in shelters. But having adopted my dog, I now have moral duties relating to her well-being. If I choose to drive and negligently run over someone with my car, I have a moral duty to compensate them for the harm I caused. I cannot get out of those moral duties by observing that my money would be more effectively spent on bednets than on basic care for my dog or on compensating the accident victim.
So if—for example—CEA knows that someone is a sufficiently bad actor, its obligation to promote a healthy community by banning that person from CEA events is not only based on consequentialist logic. It is based on CEA’s obligation to take reasonable steps to protect people at its events.
Why not? In consequentialism/utilitarian philosophy basically everything except utility itself is a means to an end.
I think it would be a bad idea for the Community Health Team to view their goal as promoting the EA community’s ends, rather than the well-being of community members. Here is a non-exhaustive list of reasons why:
The Community Health Team can likely best promote the ends of the EA community by promoting the well-being of community members. I suspect doing more involved EV calculations will lead to worse community health, and thus a less impactful EA community. (I think the TIME story provides some evidence for this.)
Harassment is intrinsically bad (i.e., it is an end we should avoid).
Treating instances of harassment as bad only (or primarily) for instrumental reasons risks compounding harms experienced by victims of harassment. It is bad enough to be harassed, but worse to know that the people you are supposed to be able to turn to for support will do an EV calculation to decide what to do about it (even if they believe you).
If I know that reporting bad behavior to the Community Health Team may prompt them to, e.g., assess the accused’s and my relative contributions to EA, then I may be less inclined to report. Thus, instrumentalizing community health may undermine community health.
Suggesting that harassment primarily matters because it may make the community less impactful is alienating to people. (I have strong consequentialist leanings, and still feel alienated by this language.)
If the Community Health Team thinks that repercussions should be contingent upon, e.g., the value of the research the accused party is doing, then this renders it difficult to create clear standards of conduct. For instance, this makes it harder to create rules like: “If someone does X, the punishment will be Y” because Y will depend on who the “someone” is. In the absence of clear standards of conduct, there will be more harmful behavior.
It’s intuitively unjust to make the consequences of bad behavior contingent upon someone’s perceived value to the community. The Community Health Team plays a role that is sort of analogous to a university’s disciplinary committee, and most people think it’s very bad when a university gives a lighter punishment to someone who commits rape because they are a star athlete, or their dad is a major donor, etc. The language the Community Health Team uses on their website (and here) feels worryingly close to this.
I’m not fully sold on utilitarianism myself, but it seems like your main argument here is that harassment/negative community norms are intrinsically bad (ends to avoid in themselves), which again goes against a strictly consequentialist framework.
I broadly agree with you, but I think this is one of those messy areas where EA’s strong commitment to utilitarian reasoning makes things complicated. As you say, from a utilitarian perspective it’s better not to treat community health instrumentally, because doing so will lead to less trust. However, if the community health team is truly utilitarian, then they would have strong reason to treat the community instrumentally but simply keep that part of their reasoning a secret.
Building trust in a utilitarian community seems extremely difficult for this reason. For instance, see Singer’s paper on secrecy in utilitarianism:
https://betonit.substack.com/p/singer-and-the-noble-lie
First of all, because you can’t actually predict and quantify the aggregate effect of choices regarding community health on the movement’s impact. You’re better off taking it as a rule of thumb that people need to feel safe in the community, no matter what.
Second, because not everyone here is a utilitarian, and even those who partly are also want to feel safe in their own lives.
Having a healthy community is better than having an unhealthy community, all else being equal, because people being harmed is bad. This is a consequence we care about under consequentialism, even if it had zero effect on the other things we care about.
As it happens, a healthy community almost certainly has a positive effect on the other things we care about as well. But emphasizing this aspect makes it look like we don’t also care about the first thing.
Sure, but then you need to make a case for why you would prioritise this over anything else that you think has good consequences. I think the community health statement tries to make that argument (though it’s not fully specified), whereas a statement like “we want to prevent x because x is bad” doesn’t really help me understand why they want to prioritise x.
Okay, I feel like we need to rewind a bit. The problem is that people who have experienced behaviour like harassment are getting the impression from that document that the Community Health Team might ignore their complaint depending on how “effective” the bad actor in question is, based on some naive EV calculation.
Now I’m assuming this impression is mistaken, in which case literally all they need to do is update the document to make it clear they don’t tolerate bad behaviour, whoever it comes from. This costs $0.
I don’t think that impression would be unfounded. In Julia Wise’s post from last August, she mentioned these trade-offs (among others):
Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally
Don’t let people use EA to gain social status that they’ll use to do more bad stuff
Take the talent bottleneck seriously; don’t hamper hiring / projects too much
Take culture seriously; don’t create a culture where people can predictably get away with bad stuff if they’re also producing impact
This means, on the one hand, that the team is well aware of the potential consequences of using naive impact calculations to decide on their actions. On the other hand, it means that the impact on the work accused people are doing is certainly taken into account in whatever policy they adopt for handling complaints.
More generally, it seems that the team does think of their end goal as making the most positive impact (which fits what other CEA higher-ups have said about the goals of the org as a whole), and creating a safe community is indeed just a means to that end.
This all makes me somewhat distrustful of the Community Health team.