I came across Toby Ord’s “moral trade” proposal recently. As far as I can tell (correct me if I’m wrong), it’s intended to let effective altruists with different values cooperate and reach a mutually beneficial outcome—for example, if I think that Thing A is good, and you think that it’s bad, we can avoid wasting our money by agreeing not to donate to charities that primarily promote or oppose Thing A. This seems most applicable to animal rights/ethics and population ethics, where there’s little consensus on how valuable particular outcomes are.
My question is: how would moral trade work on a large scale when it involves agreeing not to do something? (I’m particularly interested in the case of population changes, where EAs with different population-ethical views might have incompatible goals.) It seems like “cheating” would be quite hard to prevent, since it’s hard to detect secret donations.
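To make my worry concrete, here’s a toy model (my own illustration, not from Ord’s paper) of the situation as a payoff matrix. The numbers are assumptions chosen for simplicity: I value promoting Thing A at +1 “util” per dollar, you value it at −1, opposing advocacy donations cancel out, and there’s a hypothetical shared cause we both value at +0.5 utils per dollar.

```python
def payoff(my_donation_to_A, your_donation_against_A, my_shared, your_shared):
    """Return (my utils, your utils) for a given set of donations.

    Assumed values: I weight Thing A at +1/dollar, you at -1/dollar,
    and we both weight the shared cause at +0.5/dollar.
    """
    net_A = my_donation_to_A - your_donation_against_A  # opposing advocacy cancels
    shared = my_shared + your_shared
    mine = net_A * 1.0 + shared * 0.5
    yours = net_A * -1.0 + shared * 0.5
    return mine, yours

# No trade: we each spend $100 fighting over Thing A.
print(payoff(100, 100, 0, 0))    # (0.0, 0.0) -- both budgets wasted

# Moral trade: both budgets redirected to the shared cause.
print(payoff(0, 0, 100, 100))    # (100.0, 100.0) -- both strictly better off

# Cheating: you honor the deal, I secretly donate to Thing A anyway.
print(payoff(100, 0, 0, 100))    # (150.0, -50.0) -- defecting pays for me
```

The last line is the structure of my question: under these assumed payoffs the trade is a prisoner’s dilemma, so each party gains by secretly defecting while the other complies—which is exactly why undetectable donations seem like a problem at scale.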