I agree that language is very off-putting. A healthy community should not be a means to an end.
Suppose, hypothetically, that every individual EA would be just as effective, do just as much good, without an EA community as with one. In that case, how many resources should CEA and other EA orgs devote to community building? My answer is exactly 0. That implies that the EA community is a means to an end, the end of making EAs more effective.
That said, I wouldn’t necessarily generalize to other communities. And I agree that assessing a particular case of alleged wrongdoing should not depend on the perceived value of the accused’s contributions to EA causes, and I do not read CEA’s language as implying otherwise.
I agree that meta work as a whole can only be justified from an EA framework on consequentialist grounds—any other conclusion would result in partiality, holding the interests of EAs as more weighty than the interests of others.
However, I would argue that certain non-consequentialist moral duties come into play conditioned on certain choices. For example, if CEA decides to hold conferences, that creates a duty to take reasonable steps to prevent and address harassment and other misconduct at the conference. If an EA organization chooses to give someone power, and the person uses that power to further harassment (or to retaliate against a survivor), then the EA organization has a duty to take appropriate action.
Likewise, I don’t have a specific moral duty to dogs currently sitting in shelters. But having adopted my dog, I now have moral duties relating to her well-being. If I choose to drive and negligently run over someone with my car, I have a moral duty to compensate them for the harm I caused. I cannot get out of those moral duties by observing that my money would be more effectively spent on bednets than on basic care for my dog or on compensating the accident victim.
So if—for example—CEA knows that someone is a sufficiently bad actor, its obligation to promote a healthy community by banning that person from CEA events is not only based on consequentialist logic. It is based on CEA’s obligation to take reasonable steps to protect people at its events.
Why not? In consequentialist/utilitarian philosophy, basically everything except utility itself is a means to an end.
I think it would be a bad idea for the Community Health Team to view their goal as promoting the EA community’s ends, rather than the well-being of community members. Here is a non-exhaustive list of reasons why:
The Community Health Team can likely best promote the ends of the EA community by promoting the well-being of community members. I suspect doing more involved EV calculations will lead to worse community health, and thus a less impactful EA community. (I think the TIME story provides some evidence for this.)
Harassment is intrinsically bad (i.e., it is an end we should avoid).
Treating instances of harassment as bad only (or primarily) for instrumental reasons risks compounding harms experienced by victims of harassment. It is bad enough to be harassed, but worse to know that the people you are supposed to be able to turn to for support will do an EV calculation to decide what to do about it (even if they believe you).
If I know that reporting bad behavior to the Community Health Team may prompt them to, e.g., assess the accused’s and my relative contributions to EA, then I may be less inclined to report. Thus, instrumentalizing community health may undermine community health.
Suggesting that harassment primarily matters because it may make the community less impactful is alienating to people. (I have strong consequentialist leanings, and still feel alienated by this language.)
If the Community Health Team thinks that repercussions should be contingent upon, e.g., the value of the research the accused party is doing, then this renders it difficult to create clear standards of conduct. For instance, this makes it harder to create rules like: “If someone does X, the punishment will be Y” because Y will depend on who the “someone” is. In the absence of clear standards of conduct, there will be more harmful behavior.
It’s intuitively unjust to make the consequences of bad behavior contingent upon someone’s perceived value to the community. The Community Health Team plays a role that is sort of analogous to a university’s disciplinary committee, and most people think it’s very bad when a university gives a lighter punishment to someone who commits rape because they are a star athlete, or their dad is a major donor, etc. The language the Community Health Team uses on their website (and here) feels worryingly close to this.
I’m not fully sold on utilitarianism myself, but it seems like your main argument here is that avoiding harassment and negative community norms is an end in itself, which again cuts against a strictly consequentialist framework.
I broadly agree with you, but I think this is one of those messy areas where EAs’ strong commitment to utilitarian reasoning makes things complicated. As you say, from a utilitarian perspective it’s better not to treat community health instrumentally, because doing so will lead to less trust. However, if the Community Health team were truly utilitarian, they would have strong reason to treat the community instrumentally but simply keep that part of their reasoning a secret.
Building trust in a utilitarian community seems extremely difficult for this reason. For instance, see Singer’s paper on secrecy in utilitarianism:
https://betonit.substack.com/p/singer-and-the-noble-lie
First of all, because you can’t actually predict and quantify the aggregate effect of choices regarding community health on the movement’s impact. You’re better off taking it as a rule of thumb that people need to feel safe in the community, no matter what.
Second, because not everyone here is a utilitarian, and even those who are partly utilitarian also want to feel safe in their own lives.
Having a healthy community is better than having an unhealthy community, all else being equal, because people being harmed is bad. This is a consequence we care about under consequentialism, even if it had zero effect on the other things we care about.
As it happens, a healthy community almost certainly has a positive effect on the other things we care about as well. But emphasizing this aspect makes it look as though we don’t also care about the first thing.
Sure, but then you need to make a case for why you would prioritise this over anything else that you think has good consequences. I think the community health statement tries to make that argument (though it’s not fully specified), whereas a statement like “we want to prevent x because x is bad” doesn’t really help me understand why they want to prioritise x.
Okay, I feel like we need to rewind a bit. The problem is that people who have experienced behaviour like harassment are getting the impression from that document that the Community Health team might ignore their complaint depending on how “effective” the bad actor in question is, based on some naive EV calculation.
Now I’m assuming this impression is mistaken, in which case literally all they need to do is update the document to make it clear they don’t tolerate bad behaviour, whoever it comes from. This costs $0.
I don’t think that impression would be unfounded. In Julia Wise’s post from last August, she mentioned these trade-offs (among others):
Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally
Don’t let people use EA to gain social status that they’ll use to do more bad stuff
Take the talent bottleneck seriously; don’t hamper hiring / projects too much
Take culture seriously; don’t create a culture where people can predictably get away with bad stuff if they’re also producing impact
This means, on the one hand, that the team is well aware of the potential consequences of doing naive impact calculations to decide on their actions. On the other hand, it means that the impact of any complaint-handling policy, in terms of the work accused people are doing, is certainly taken into account.
More generally, it seems that the team does think of its end goal as making the most positive impact (which fits what other CEA higher-ups have said about the goals of the org as a whole), and creating a safe community is indeed just a means to that end.
This all makes me somewhat distrustful of the Community Health team.