I agree that meta work as a whole can only be justified from an EA framework on consequentialist grounds—any other conclusion would result in partiality, holding the interests of EAs as more weighty than the interests of others.
However, I would argue that certain non-consequentialist moral duties come into play once certain choices are made. For example, if CEA decides to hold conferences, that decision creates a duty to take reasonable steps to prevent and address harassment and other misconduct at those conferences. If an EA organization chooses to give someone power, and that person uses the power to further harassment (or to retaliate against a survivor), then the organization has a duty to take appropriate action.
Likewise, I don’t have a specific moral duty to dogs currently sitting in shelters. But having adopted my dog, I now have moral duties relating to her well-being. If I choose to drive and negligently run over someone with my car, I have a moral duty to compensate them for the harm I caused. I cannot get out of those moral duties by observing that my money would be more effectively spent on bednets than on basic care for my dog or on compensating the accident victim.
So if—for example—CEA knows that someone is a sufficiently bad actor, its obligation to promote a healthy community by banning that person from CEA events is not only based on consequentialist logic. It is based on CEA’s obligation to take reasonable steps to protect people at its events.