Even if they weren’t infinitely advantageous, it seems like you’d have to be unrealistically sure that you can get away with shadiness without bad consequences before risking it. If the downsides of getting caught are bad enough, you can never be sufficiently confident in practice. And if the downside risk of some action isn’t quite as devastating as “maybe the entire EA movement has its reputation ruined,” then it might still be the better move to come clean right away. For instance, if you’re only 0.5 billion in the hole out of 30 billion total assets (say), and you’ve conducted your business with integrity up to that point, why not admit that you fucked up and ask for a bailout? The fact that you come clean should lend you credibility and goodwill, which would mitigate the damage. Doubling down, on the other hand, makes things a lot worse. Gambling to win back multiple billions really doesn’t seem wise, because if it were risk-free to make billions then a lot more people would be billionaires…
In any case, faced with the choice of whether to precommit to always act with integrity, the pro-integrity arguments don’t need to be “infinitely strong.” The relevant question is “is the precommitment better in EV or not?” (given the range of circumstances you expect in your future). And the answer here seems to be “yes.” (Somewhat separately, I think people tend to underestimate how powerful and motivating it can be to have leadership with high integrity – it opens doors that would otherwise stay closed.)
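To make the EV comparison concrete, here’s a toy calculation in the spirit of the bailout scenario above. All the numbers (the probability of the gamble paying off, the sizes of the losses) are made up purely for illustration; the point is only that a modest certain loss beats a coin flip on total collapse:

```python
# Toy EV comparison for the "0.5B hole out of 30B" scenario.
# All numbers are illustrative assumptions, not claims about any real case.

p_win = 0.5            # assumed chance the risky gamble recovers the hole
hole = 0.5             # billions currently in the hole
collapse = 30.0        # billions (plus reputation) lost if the gamble fails

# Coming clean: eat the 0.5B loss, keep the rest plus most of your goodwill.
ev_come_clean = -hole

# Doubling down: if the gamble works you're back to even (0);
# if it fails, everything collapses.
ev_double_down = p_win * 0.0 + (1 - p_win) * (-collapse)

print(ev_come_clean)    # -0.5
print(ev_double_down)   # -15.0
print(ev_come_clean > ev_double_down)  # True
```

Even with generous 50/50 odds on the gamble, the expected loss from doubling down is thirty times larger than from coming clean, before even counting the credibility benefits of honesty.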
You might say, “That’s a false dilemma; the choice sounds artificially narrow. What if I can make a sophisticated precommitment that says I’ll act with integrity under almost all circumstances, except when the value at stake is (e.g.) 100 billion and I’m ultra-sure I can get away with it?” Okay, decent argument, but I don’t think it goes through. If you were a perfect utilitarian robot with infinitely malleable psychology and perfect rationality, maybe then it would. You’d have some kind of psychological “backdoor” programmed in that activates “deceitful mode” whenever you find yourself in a situation where you can get away with >100 billion in profits. The problem, in practice, is: when do you notice that it’s a good time to activate “deceitful mode”? To know when to activate it, you have to think hypothetically-deceitful thoughts even earlier than the point of actually triggering the backdoor. Moreover, you have to take actions to preserve your ability to be a successful deceiver later on. (E.g., people who deceive others tend to have a habit of not proactively sharing much information about their motives and “reasons for acting,” while high-integrity people do the opposite. This is a real tradeoff – so which side do you pick?) These things aren’t cost-free! (Not even for perfect utilitarian robots, and certainly not for humans, whose cognition can’t be partly shut off at will.)

In reality, the situation is this: either you train your psychology, your “inner elephant in the brain,” to have integrity to the very best of your abilities (which is hard enough already!), or you don’t. Retaining the ability to turn into a liar and deceitful manipulator “later on” doesn’t come cost-free; it changes you. If you’re planning to do it when 100 billion is at stake, that will shape how you approach other issues, too.
(See also my comment in this comment section for more reasons why I don’t think it’s psychologically plausible for people to simultaneously be great liars and deceivers but also act perfectly as though they have high integrity.)