I am wondering if assigning “moral credit” for offset purposes is too complex to do with an algorithm and instead requires context-specific application of judgment. A few possible examples:
Let’s assume that most of the individuals who voted for Prop 12 consume animal products regulated by the measure, and that Prop 12 causes an increase in the cost of those products. By voting yes, these individuals effectively voted to take money out of their own pockets to pay for the increase in animal welfare. While I’m fine adjusting “moral credit” based on the risk undertaken, I’m uneasy with a system that gives donors orders of magnitude more moral credit than others who voluntarily bear costs to achieve the objective.
I also wouldn’t give the Supreme Court any collective “moral credit” for voting to uphold Prop 12, of a sort that would entitle at least the Justices in the majority to eat meat without offsetting. This holds despite the counterfactual value of their votes and what I imagine each Justice’s Shapley value would be.
Moreover, every election cycle, the voters could repeal Prop 12. Getting a repeal measure on the ballot shouldn’t be too difficult, and there are moneyed interests who would happily bear those costs. If they do not, it is likely because they decided that the voters would shoot them down. So for the subsequent election cycle, there are at least two necessary conditions for Prop 12’s benefits to persist to Cycle 2: it got passed at the beginning of Cycle 1, and it didn’t get repealed at the start of Cycle 2. It’s true that nobody really did anything during Cycle 1 to protect Prop 12, but it’s also true that the voters at the end of Cycle 1 can be judged willing to continue bearing Prop 12’s costs in Cycle 2 to continue its benefits. It seems odd to attribute all of the benefits accruing in Cycle 2 to Cycle 1 activity. But how to split the moral credit here?
Motivated reasoning is always a risk, and any moral-credit-granting analysis is more likely to be underinclusive in identifying contributors (and thus to over-grant the available moral credit to the influences that were identified) than the reverse. In some or even many cases, it may be necessary to apply a downward adjustment even to min(counterfactual value, Shapley value) to account for these factors.
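To make the min(counterfactual value, Shapley value) rule concrete, here is a toy computation. The setup (five voters, a quota of three, a measure worth one unit of good) is my own illustrative assumption, not anything from the post; the point it shows is that in a symmetric vote that passes comfortably, every individual’s counterfactual value is zero, so the min rule grants no one any credit even though each Shapley value is positive.

```python
from itertools import permutations
from fractions import Fraction

# Hypothetical toy model: 5 voters, a measure worth 1 unit of good
# that passes if at least 3 vote yes, and all 5 in fact voted yes.
voters = list(range(5))
QUOTA = 3

def v(coalition):
    """Worth of a coalition of yes-voters: 1 if the measure passes."""
    return 1 if len(coalition) >= QUOTA else 0

# Counterfactual value of each voter: what their absence would have changed.
counterfactual = {i: v(set(voters)) - v(set(voters) - {i}) for i in voters}

# Exact Shapley values: average marginal contribution over all orderings.
shapley = {i: Fraction(0) for i in voters}
orders = list(permutations(voters))
for order in orders:
    coalition = set()
    for i in order:
        before = v(coalition)
        coalition.add(i)
        shapley[i] += v(coalition) - before
shapley = {i: s / len(orders) for i, s in shapley.items()}

credit = {i: min(Fraction(counterfactual[i]), shapley[i]) for i in voters}
# counterfactual: all 0 (the measure passes without any single voter)
# shapley: all 1/5 (symmetry), but credit = min(...) is 0 for everyone
```

So in this toy model the rule already zeroes out every voter’s credit when the vote isn’t close, which is part of why a further across-the-board adjustment feels so blunt.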
Thanks for this comment. It felt awkward to include all veto players in the Shapley value calculation while writing the post, and now I can see why. For offsetting, we’re interested in making every single individual weakly better off in expectation compared to the counterfactual where you don’t exist, don’t move your body, etc., so that no one can complain about your existence. So instances of doing harm can only be offset by doing good. Meanwhile, the Shapley value doesn’t distinguish between doing and allowing; it assigns credit to everyone who could have prevented an outcome, even if they haven’t done any good.
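The doing/allowing point can be seen numerically in a toy three-player game (the players and characteristic function here are my own assumptions for illustration): two donors fund a reform, and a veto player merely declines to block it. The Shapley value hands the veto player the largest share of the credit, even though they did no positive good.

```python
from itertools import permutations
from fractions import Fraction

# Hypothetical game: donors D1 and D2 fund a reform worth 1; veto player V
# merely declines to block it. The reform happens iff V is on board and
# at least one donor funds it.
players = ["D1", "D2", "V"]

def v(coalition):
    return 1 if "V" in coalition and ({"D1", "D2"} & coalition) else 0

# Exact Shapley values: average marginal contribution over all 6 orderings.
shapley = {p: Fraction(0) for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = set()
    for p in order:
        before = v(coalition)
        coalition.add(p)
        shapley[p] += v(coalition) - before
shapley = {p: s / len(orders) for p, s in shapley.items()}
# shapley == {"D1": 1/6, "D2": 1/6, "V": 2/3}
```

The veto player, who only refrained from preventing the outcome, gets two-thirds of the credit; the donors who actually did the good split the remaining third. That is exactly the doing/allowing blind spot described above.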