I’m not fully sold on utilitarianism myself, but your main argument here seems to be that avoiding harassment and bad community norms is an end to pursue in itself, which again cuts against a strictly consequentialist framework.
I broadly agree with you, but I think this is one of those messy areas where EA’s strong commitment to utilitarian reasoning makes things complicated. As you say, from a utilitarian perspective it’s better not to treat community health instrumentally, because doing so would erode trust. However, a truly utilitarian community health team would have strong reason to treat the community instrumentally anyway and simply keep that part of its reasoning secret.
Building trust in a utilitarian community seems extremely difficult for this reason. For instance, see this discussion of Singer’s paper on secrecy in utilitarianism:
https://betonit.substack.com/p/singer-and-the-noble-lie