I believe we should think in terms of marginal effectiveness rather than offsetting particular harms we (individually or as a community) cause (see the author’s “you will have contributed in a small way to this failure” argument). If you want to offset harm that you have done, there’s little reason to do so by donating to Salinas rather than doing good in a more effective manner.
I have no involvement in the Oregon race, but I disagree with this particular line of reasoning. Even setting aside traditional non-consequentialist arguments for compensating for harm (which I happen to believe in, and which I think are perfectly fine for EAs to act upon while still being EAs), this line of reasoning only works if one adopts causal decision theory.
If we instead adopt functional decision theory, then there are much stronger reasons to consistently act as a harm-compensating agent. In particular, it can disincentivize harmful strategic behavior by others who try to influence you by simulating what you might do in the future. If you cannot be simulated to harm some party without also compensating them later, then you cannot be influenced by others into doing so. It also enables cooperation with others who can now trust that you will compensate them for harm (necessary even for everyday economic interactions).
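To make the deterrence mechanism concrete, here is a minimal toy model (my own illustration, with made-up payoff numbers and hypothetical names) of a manipulator who simulates your policy before deciding whether to induce you to harm a third party:

```python
# Toy model of the FDT deterrence argument. All numbers are arbitrary
# assumptions chosen only to illustrate the incentive structure.

HARM_GAIN = 4    # what the manipulator gains if the harm occurs
MANIP_COST = 1   # the manipulator's cost of attempting the manipulation

def manipulator_payoff(you_always_compensate: bool) -> int:
    """The manipulator's payoff, given its simulation of your policy."""
    if you_always_compensate:
        # Your compensation undoes the harm, so inducing it buys nothing.
        return -MANIP_COST
    return HARM_GAIN - MANIP_COST

def manipulator_attacks(you_always_compensate: bool) -> bool:
    """The manipulator only acts if its expected payoff is positive."""
    return manipulator_payoff(you_always_compensate) > 0

# A CDT agent, deciding after the fact, sees compensation as a pure cost.
# An FDT agent chooses the *policy*, knowing the simulation depends on it:
print(manipulator_attacks(you_always_compensate=False))  # True: manipulated
print(manipulator_attacks(you_always_compensate=True))   # False: deterred
```

The point of the sketch is only that the manipulator's decision is a function of your policy, not of your individual post-hoc acts, which is why the policy-level commitment matters under FDT.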
I think one could disagree as to whether FDT applies in this case (and also disagree with FDT in general), but I want to push back against the general argument that we should always be marginal thinkers, without consideration for the history of past events.
(S/O to particlemania for having first explained this argument to me. There’s also an argument to be made that conventional morality evolved FDT-like characteristics precisely to solve these strategic problems, but I won’t get into that here.)
I don’t buy that CDT vs FDT matters here? It seems like you’ll do better to always try to do what’s best (and appropriately take into account how actors may try to influence you) than to focus on compensating for harm. And perfect altruists (at least) are able to cooperate without compensating one another’s harms. And it’s not like there’s potential cooperation with Salinas here—donating to her won’t affect her actions. And there are some cases where you should act differently if you thought you were being simulated, but those seem to be the exception for general harm-offsetting decisions.
(I probably can’t continue a discussion on this now, sorry, but if there’s something explaining this argument in more detail I’d try to read it.)
P.S. thinking in terms of contractualism, I think rational agents would prefer good-maximizing over harm-compensation policies, e.g. from behind a veil of ignorance.