I think it would be a bad idea for the Community Health Team to view their goal as promoting the EA community’s ends, rather than the well-being of community members. Here is a non-exhaustive list of reasons why:
1. The Community Health Team can likely best promote the ends of the EA community by promoting the well-being of community members. I suspect doing more involved EV calculations will lead to worse community health, and thus a less impactful EA community. (I think the TIME story provides some evidence for this.)
2. Harassment is intrinsically bad (i.e., it is an end we should avoid).
3. Treating instances of harassment as bad only (or primarily) for instrumental reasons risks compounding the harms experienced by victims of harassment. It is bad enough to be harassed, but worse to know that the people you are supposed to be able to turn to for support will do an EV calculation to decide what to do about it (even if they believe you).
4. If I know that reporting bad behavior to the Community Health Team may prompt them to, e.g., assess the accused's and my relative contributions to EA, then I may be less inclined to report. Thus, instrumentalizing community health may undermine community health.
5. Suggesting that harassment primarily matters because it may make the community less impactful is alienating to people. (I have strong consequentialist leanings, and still feel alienated by this language.)
6. If the Community Health Team thinks that repercussions should be contingent upon, e.g., the value of the research the accused party is doing, then this renders it difficult to create clear standards of conduct. For instance, this makes it harder to create rules like "If someone does X, the punishment will be Y," because Y will depend on who the "someone" is. In the absence of clear standards of conduct, there will be more harmful behavior.
7. It's intuitively unjust to make the consequences of bad behavior contingent upon someone's perceived value to the community. The Community Health Team plays a role roughly analogous to a university's disciplinary committee, and most people think it's very bad when a university gives a lighter punishment to someone who commits rape because they are a star athlete, or their dad is a major donor, etc. The language the Community Health Team uses on their website (and here) feels worryingly close to this.
I’m not fully sold on utilitarianism myself, but it seems like your main argument here is that harassment and negative community norms are intrinsically bad, i.e., ends to avoid in themselves, which again cuts against a strictly consequentialist framework.
I broadly agree with you, but I think this is one of those messy areas where EA’s strong commitment to utilitarian reasoning makes things complicated. As you say, from a utilitarian perspective it’s better not to treat community health instrumentally, because doing so would erode trust. However, if the Community Health Team were truly utilitarian, they would have strong reason to treat the community instrumentally while simply keeping that part of their reasoning a secret.
Building trust in a utilitarian community seems extremely difficult for this reason. For instance, see Singer’s paper on secrecy in utilitarianism:
https://betonit.substack.com/p/singer-and-the-noble-lie