To me, there are at least two important differences here from the classic insurance moral hazard scenario:
In the classic scenario, moral hazard exists primarily because the act of insurance is transferring risk from the insured to the insurer. Where the insured/fraudster is insolvent and owes more than he can possibly pay, the effect of insurance here is to transfer risk from third parties (e.g., depositors) to the insurer.
That is generally going to be the case here because the “insurance” would only cover EA donations, not all victims of the fraud.
For example, it makes no practical difference for moral hazard purposes if SBF ends up owing $5B or $5.2B to his victims. As long as the sum is more than the person could ever pay, the deterrent effect created by risk of loss should be the same.[1]
In the classic scenario, the insured doesn’t care about the insurer’s interests. In contrast, someone who cared enough to donate big sums to EA probably does care that the monies to repay fraud-tinged donations are going to come out of future EA budgets.[2]
From the point of view of an act-utilitarian fraudster, the EV of fraudulent donations looks something like:

EV = (odds of getting away with it) * (benefits from donations)
     - (odds of getting caught) * (EA reputational damage - funds EA will get to keep)

A disgorgement policy ensures the "funds EA will get to keep" term is $0, and can increase the "odds of getting caught" (by extending the window in which funds will be returned if the fraud is detected only after some delay).
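As a toy sketch of that formula (all probabilities and dollar amounts below are hypothetical, chosen only for illustration):

```python
# Toy model of the act-utilitarian fraudster's EV, following the formula above.
# Every number here is a hypothetical illustration, not an estimate.

def fraud_ev(p_caught, donation_benefit, reputational_damage, funds_kept):
    """EV = P(escape) * benefit - P(caught) * (reputational damage - funds kept)."""
    return (1 - p_caught) * donation_benefit - p_caught * (reputational_damage - funds_kept)

# Without a disgorgement policy, EA keeps some donated funds even if the fraud
# is caught, which softens the downside of the "getting caught" branch.
no_policy = fraud_ev(p_caught=0.5, donation_benefit=100,
                     reputational_damage=80, funds_kept=30)

# A credible disgorgement policy drives the "funds EA will get to keep" term to $0.
with_policy = fraud_ev(p_caught=0.5, donation_benefit=100,
                       reputational_damage=80, funds_kept=0)

assert with_policy < no_policy  # some deterrent effect, however modest
```

The size of the gap depends entirely on the invented numbers; the only point is that zeroing the retained-funds term weakly lowers the EV of attempting fraud.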
Of course, that’s not going to perfectly deter anyone, but the claim was “at least some deterrent effect.” Ensuring that the “getting caught” scenario has as little upside as possible for the would-be fraudster doesn’t strike me as weird altruistic game theory stuff. Trying to ensure that the criminal accrues no net benefit from his crimes is garden-variety penological theory.
And it would be easy to get subrogation rights from the repaid victims if desired, such that the EA “insurers” could probably collect the marginal $200MM from the fraudster anyway if, for some reason, he were able to pay off the $5B.
I agree that insurance from an insurance firm would create a moral hazard risk. But it’s unlikely non-EA insurance could be obtained for a risk like this at a reasonable price.
I’m aware of this line of argument. I just don’t buy it, and find the thinking around this topic somewhat suspicious. For starters, you shouldn’t model potential fraudsters as pure act utilitarians, but as people with a broad range of motivations (or at least your uncertainty distribution should include a broad range of motivations).
Which consequences might worry someone caught committing fraud in which some of the fraudulent money went to charity?
They might go through the criminal justice system and face criminal penalties
They might get sued, and face civil penalties
They might suffer reputational damage, e.g., have mean things written about them on the internet
Their friends and/or family might disown them, making their direct social lives worse
Crazed people with nothing to lose might be violent towards the fraudster and/or their family
Their freedom to act might be restricted, reducing their future abilities to accomplish their goals (altruistic or otherwise).
The charitable funds they donated might be returned (with or without penalties), making it harder for the fraudster’s altruistic goals to be accomplished.
Given this wide array of motivations, we should observe that while some of them (or maybe just one) push in favor of a credible commitment to return money having a deterrence effect, from the perspective of most of these motivations such an insurance scheme is neutral or even welcome to the fraudster. So it’s far from obvious how this all nets out, and I’m confused why others haven’t brought up the obvious counterarguments before.
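To make the “netting out” worry concrete, here is a deliberately crude sketch that treats total deterrence as a sum over penalty channels; every weight is invented for illustration, and the channel names are just shorthand for the list above.

```python
# Crude deterrence model: total deterrence as a sum of independent penalty
# channels. All weights are hypothetical; the point is only that a refund
# commitment directly moves just one channel, so the net effect is ambiguous.

channels = {
    "criminal penalties": 40,
    "civil liability": 20,
    "reputational harm": 15,
    "social ostracism": 10,
    "restricted freedom to act": 10,
    "donations returned": 0,  # no credible refund commitment yet
}
baseline = sum(channels.values())

# A credible commitment adds deterrence through the refund channel...
channels["donations returned"] = 5
# ...but if the scheme also reassures a would-be fraudster (e.g., some victims
# get partly made whole, softening the fallout), other channels could weaken.
channels["reputational harm"] -= 5

with_commitment = sum(channels.values())
# Here the two effects cancel exactly, but only because the weights were
# invented; different weights would make the commitment net out as either
# deterrence or encouragement.
```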
As another quick example, the two positions
We should return all money for optics reasons, as the money is worth less than the PR hit and
We should credibly commit to always return donated grift money + interest when grift is caught, as this will create a strong deterrence for future would-be ‘aligned’ grifters
may well be internally consistent positions by themselves. But they’re close to mutually exclusive[1], and again it’s surprising that nobody else ever brought this up.
Generally, I find it rather suspicious that nobody else brought up the extremely obvious counterarguments before I did. I think I might be misunderstanding something; perhaps there’s a social game people are playing where the correct move is rational irrationality and for some reason I didn’t “get the memo.”[2] (I feel this way increasingly often about EA discussions.)
Edit: to spell it out further, it seems like you are modeling SBF, or another theoretical fraudster, as genuinely interested in helping EA-related causes through donations. If you want to disincentivize a fraudster who wants to help the EA movement, you should theoretically precommit to doing something that would harm EA, not something that would help EA. So precommitting to returning donations doesn’t make sense if you also, separately, think that the optics of the precommitment make it the best choice for EA.[3] You are effectively telling the fraudster: “If I find out you’ve done fraud, I’ll do the thing you want me to do anyway.” For the precommitment to make sense, you have to additionally assume that the fraudster disagrees with you about returning funds being net-positive given the circumstances, or has motives other than “helping EA-related causes.”
An even stronger precommitment than not returning money (and therefore getting the bad optics) would be to attempt to destroy the movement via infighting after such a case of grift has been revealed; one way to make the precommitment credible is of course to publicly do something similar in earlier cases.
The claim was “at least some deterrent effect,” not “strong deterrence.” I don’t have to model a 100% act utilitarian to support the claim I actually made.
I am not convinced that partial “insurance” would diminish the other reasons not to commit fraud. In my own field (law), arguing for a lesser sentence because a charity acted to cover a fraction of your victims’ losses[1] is just going to anger the sentencing judge. And as explained in my footnote above, the “insurers” could buy subrogation rights from the victims to the extent of the repayment and stand in their shoes to ensure civil consequences.
“Crazed people with nothing to lose” are unlikely to be meaningfully less crazed because the fraudster’s bankruptcy estate recovered (e.g.) 75% rather than 70% of their losses. Same for other social consequences. At some point, the legal, social, and other consequences for scamming your victims don’t meaningfully increase with further increases in the amount that your primary victims ultimately lose.
Although I haven’t studied this, I suspect that the base rate of charity-giving fraudsters who give all—or even most—of the amounts they steal from their victims to charity is pretty low.
[2] If someone who knows what’s up did get the memo, feel free to pass it along and I will delete my comments on this topic.