Someone raised the point that if EAs try to offset the harm SBF caused, this creates a moral hazard of the form “people may be more willing to cause harm in the name of EA in the future, expecting other EAs to offset that harm”.
I think that’s a stronger objection to “offset all of SBF’s harms” (which I don’t endorse anyway) than to “collectively (at the community level, not the individual-org level) give back the amount EA received”, though it may shift my view once I’ve chewed on it a bit more. At a glance, I don’t expect this concern to be a dominant factor.
Offsetting their harms out of the budget for the things they care most about should be bad by their own lights, though, unless they’re naive consequentialists (which would-be bad actors may disproportionately be). This policy should make them care more about the harms, since it makes causing harm clearly counterproductive by their own lights, whereas it’s easy to discount harms to FTX customers relative to longtermist (or EA generally) donations.
Agreed, though I think the primary reason the EA community should collectively give back money that was stolen and given to us is “it’s the right thing to do”.
This is related to incentives, and there are complicated ways in which being a high-integrity, broadly honorable community finds much of its justification in game-theoretic, logical-decision-theory-ish (LDT-ish) arguments, but I think EAs are empirically better at reasoning about “what’s the right thing to do?” than at explicit LDT reasoning.
I think that’s fair, but even granting “it’s the right thing to do”, there’s still a question of whom this burden should fall on and how to distribute it fairly. Even for cross-cause funders, if they model themselves as multiple cause-specific agents or as representing sections of the EA community, there’s still the issue of being fair to those agents or community sections. I think tracking the counterfactual without FTX/Alameda (or just without their bad actions) would be the most accurate way to capture this, perhaps combined with some pooled EA community fund.
One way to think about it is to ask who benefited counterfactually, whether directly or indirectly (through counterfactually shifted budgets), from FTX funds, and treat those individuals as owing a debt equal to the counterfactual funding received. Some individuals who actually received FTX funding might not have benefited counterfactually at all. Then funders can decide which debts to take on, which might happen by cause or worldview; e.g. global health people might not feel they should take on longtermists’ debts, though they might contribute anyway toward a community pool.
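To make the accounting concrete, here’s a minimal sketch in Python. All org names, causes, and dollar figures are hypothetical, and I’m assuming “counterfactual benefit” can be summarized as funding actually received minus estimated funding in the no-FTX world; real estimates would of course be much messier.

```python
from collections import defaultdict

# (recipient, cause, actual funding, estimated funding without FTX/Alameda)
# All figures are hypothetical, purely for illustration.
recipients = [
    ("org_a", "longtermism", 10.0, 2.0),   # large counterfactual benefit
    ("org_b", "global_health", 5.0, 5.0),  # received FTX money, but would have been funded anyway
    ("org_c", "longtermism", 3.0, 1.0),
]

# Debt = max(0, actual - counterfactual): a recipient who would have been
# funded anyway owes nothing under this scheme.
debts = {name: max(0.0, actual - without)
         for name, cause, actual, without in recipients}

# Funders can then take on debts by cause or worldview, plus optionally
# contribute to a pooled community fund across causes.
debt_by_cause = defaultdict(float)
for name, cause, actual, without in recipients:
    debt_by_cause[cause] += debts[name]

print(debts)                # {'org_a': 8.0, 'org_b': 0.0, 'org_c': 2.0}
print(dict(debt_by_cause))  # {'longtermism': 10.0, 'global_health': 0.0}
```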
(Also, I don’t think LDT specifically needs to come into it. Are the different decision theories going to disagree dramatically here? At minimum, I’d expect them all to recognize some deterrence value here, while perhaps giving it different relative weight.)