I don’t buy that CDT vs. FDT matters here? It seems like you’ll do better by always trying to do what’s best (while appropriately accounting for how other actors may try to influence you) than by focusing on compensating for harm. And perfect altruists (at least) are able to cooperate without compensating one another’s harms. Nor is there potential cooperation with Salinas here: donating to her won’t affect her actions. And while there are some cases where you should act differently if you thought you were being simulated, those seem to be the exception for typical harm-offsetting decisions.
(I probably can’t continue a discussion on this now, sorry, but if there’s something explaining this argument in more detail I’d try to read it.)
P.S. Thinking in terms of contractualism, I think rational agents, e.g. from behind a veil of ignorance, would prefer good-maximizing policies over harm-compensation policies.