I think the Utilitarian arguments you presented are quite strong, such as precommitting to certain principles being very advantageous, but surely they’re not infinitely advantageous, right? A few billion is quite a lot.
TL;DR (because I got long-winded): If you ever find yourself planning to commit some morally horrible act in the name of a good outcome, stop. Those kinds of choices aren’t made in the real world; they’re a thought exercise (and normally a really stupid one, too).
Long version:
Sorry that you got downvoted hard; keep in mind that knee-jerk reactions are probably pretty strong right now. While the disagree-votes are justified, the downvotes probably aren’t (I’m assuming this is a legit question).
I’m constantly looking to learn more about ethics, philosophy, etc., and I was recently introduced to this website: What is Utilitarianism? | Utilitarianism.net, which I really liked. There are a few things I disagree with or feel could have been explored further, but overall I think it’s good.
To restate and make sure I understand where you’re coming from: I think you’re framing the current objections like a trolley problem, or its more advanced version, the transplant case (addressed in 8. Objections to Utilitarianism and Responses – Utilitarianism.net, second paragraph under “General Ways of Responding to Objections to Utilitarianism”). If I were to reword your objection, I would put it something like this:
“In sufficiently large situations, the ideal of precommitment would be swamped by the potential utility gains from defecting.”
This maps onto the second response commonly used in defense of the utilitarian framework, “debunk the moral intuition” (paragraph 5 in the same chapter and section).
I believe, and I think most of us believe, that this isn’t the appropriate response to this situation, because here the moral intuition is correct. Any misbehavior on this scale results in a weaker economic system, harms thousands if not millions of people, and erodes trust in society itself.
A response you might be tempted to give would be something like “but what if the stakes were even higher?”
And I agree: it would be pretty ridiculous if, after the Avengers saved NYC from the Chitauri invasion, someone tried to sue the Hulk for using their car to crush an alien or something. We would all agree with you there; the illegal action (crushing a car) is justified by the alternative (aliens killing us all).
The problem with that kind of scale, however, is that if you ever find yourself in a situation where you think, “I’m the only one who can save everyone, and all it takes is [insert thing that no one else wants me to do],” stop what you’re doing and do what the people around you tell you to do.
If you think you’re Jesus, you’re probably not Jesus (or, in this case, the Hulk).
That’s why the discussions of corrupted hardware and the unilateralist’s curse (links provided by OP) are so important.
For more discussion on this, see Elements and Types of Utilitarianism – Utilitarianism.net, under “Multi-level Utilitarianism Versus Single-level Utilitarianism.”
One must-read section says: “In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians. Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis.”
I would encourage you to read that whole section (and the one that follows it, if you think much of rule utilitarianism), as I think one of the most common gaps in people’s understanding of utilitarianism is the single-level vs. multi-level distinction.
…is what you tell yourself before you get exposed for committing massive fraud, costing far more billions than you ended up with.
If SBF did commit fraud, it looks like he did it to keep Alameda from going bankrupt. If that’s the case, he ended up destroying billions of dollars of potential donations from FTX. “Take a risk that might let you earn a billion dollars illegally” and “Make 0 dollars” are not your only options here! You could have taken not-illegal risks that might have won big instead. Those tend to have higher EV.
Even if they weren’t infinitely advantageous, it seems like you’d have to be unrealistically sure that you could get away with shadiness with no bad consequences before risking it. If the downsides of getting caught are bad enough, then you can never be sufficiently confident in practice. And if the downside risk of some action isn’t quite as devastating as “maybe the entire EA movement has its reputation ruined,” then it might still be the better move to come clean right away. For instance, if you’re only 0.5 billion in the hole out of 30 billion in total assets (say), and you’ve conducted your business with integrity up to that point, why not admit that you fucked up and ask for a bailout? Coming clean should lend you credibility and goodwill, which would mitigate the damage. Doubling down, on the other hand, makes things a lot worse. Gambling to win back multiple billions really doesn’t seem wise, because if it were risk-free to make billions, a lot more people would be billionaires…
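To make that comparison concrete, here is a minimal expected-value sketch in Python. The 0.5 billion hole and 30 billion in assets come from the paragraph above; every probability and penalty below is a made-up illustrative assumption, not a claim about the actual case.

```python
# Minimal EV sketch. The 0.5B hole / 30B assets are from the comment above;
# all probabilities and penalties are hypothetical, purely for illustration.
hole = 0.5      # $B shortfall
assets = 30.0   # $B total assets

# Option A: come clean and ask for a bailout.
p_bailout = 0.6          # assumed chance that accumulated goodwill gets you bailed out
no_bailout_cost = 2.0    # assumed extra reputational / wind-down cost if no bailout comes
ev_come_clean = (p_bailout * (assets - hole)
                 + (1 - p_bailout) * (assets - hole - no_bailout_cost))

# Option B: gamble (illegally) to fill the hole.
p_win = 0.5              # assumed chance the gamble pays off
p_exposed_anyway = 0.3   # assumed chance the fraud surfaces even if the gamble works
ev_gamble = p_win * (1 - p_exposed_anyway) * assets  # lose essentially everything otherwise

print(f"come clean EV ~ {ev_come_clean:.1f} $B, gamble EV ~ {ev_gamble:.1f} $B")
# ~28.7 vs ~10.5; and this ignores the harm to customers and to EA's reputation
# if the gamble blows up, which only makes Option B look worse.
```

You can push the assumed numbers around, of course, but the gamble only starts to look good under implausibly rosy assumptions about getting away with it.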
In any case, faced with the choice of whether to precommit to always act with integrity, it’s not necessary for the pro-integrity arguments to be “infinitely strong.” The relevant question is “is the precommitment better in EV or not?” (given the range of circumstances you expect in your future). And the answer here seems to be “yes.” (Somewhat separately, I think people tend to underestimate how powerful and motivating it can be to have leadership with high integrity – it opens doors that would otherwise stay closed.)
You might say, “That’s a false dilemma; that choice sounds artificially narrow. What if I can make a sophisticated precommitment that says I’ll act with integrity under almost all circumstances, except if the value at stake is (e.g.) 100 billion and I’m ultra-sure I can get away with it?” Okay, decent argument, but I don’t think it goes through. If you were a perfect utilitarian robot with infinitely malleable psychology and perfect rationality, maybe then it would. Maybe you’d have some kind of psychological “backdoor” programmed in where you activate “deceitful mode” if you ever find yourself in a situation where you can get away with >100 billion in profits. The problem in practice, though, is: when do you notice whether it’s a good time to activate “deceitful mode”? To know when to activate it, you have to think hypothetically-deceitful thoughts even earlier than the point of actually triggering the backdoor. Moreover, you have to take actions to preserve your ability to be a successful deceiver later on. (E.g., people who deceive others tend to have a habit of not proactively sharing much information about their motives and “reasons for acting,” while high-integrity people do the opposite. This is a real tradeoff – so which side do you pick?) These things aren’t cost-free! (Not even for perfect utilitarian robots, and certainly not for humans, where parts of our cognition cannot be shut off at will.) In reality, the situation is like this: either you train your psychology, your “inner elephant in the brain,” to have integrity to the very best of your abilities (it’s already hard enough!), or you do not. Retaining the ability to turn into a liar and deceitful manipulator “later on” doesn’t come cost-free; it changes you. If you’re planning to do it when 100 billion are at stake, that’ll affect how you approach other issues, too. (See also my other comment in this comment section for more reasons why I don’t think it’s psychologically plausible for people to simultaneously be great liars and deceivers and also act exactly as though they have high integrity.)
I think this post is my favourite for laying out why a really convincing utilitarian argument for something that common sense says is very bad shouldn’t move you. From memory, Eliezer says something like: ~Thinking there’s a really good utilitarian argument doesn’t mean the ends justify the means; it just means your flawed brain, with its weird motivations, feels like there’s a really good utilitarian argument. Your uncertainty in that always dominates and leaves room for common-sense arguments, even when you feel really extra super sure. Common-sense morality rules like “the ends shouldn’t justify the means” arose because people in practice are very miscalibrated about when the ends actually justify the means, so we should take the outside view and assume we are too.~
(By miscalibrated, I mean I think I could defend a claim like: “90% of the time, when people think the ends definitely justify the means and this clashes with common-sense morality, they are wrong.”)
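To see why the outside view dominates here, a toy Bayes calculation in Python (every number is a hypothetical assumption of mine, not something from the post):

```python
# Toy outside-view calculation; all numbers are hypothetical assumptions.
# Base rate: among cases where "the ends justify the means" clashes with
# common-sense morality, suppose only 10% are genuinely justified
# (the "90% wrong" claim above).
base_rate_right = 0.10

# Corrupted hardware: convincing-feeling arguments show up almost as often
# in the wrong cases as in the right ones.
p_convincing_given_right = 0.9   # assumed
p_convincing_given_wrong = 0.7   # assumed

# P(actually justified | the argument feels really convincing)
posterior = (base_rate_right * p_convincing_given_right) / (
    base_rate_right * p_convincing_given_right
    + (1 - base_rate_right) * p_convincing_given_wrong
)
print(f"P(ends really justify the means) ~ {posterior:.2f}")  # ~0.12
```

In other words, feeling very sure barely moves you off the base rate, which is (I think) the post’s point.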
I might be butchering the post, though, so you should definitely read it:
https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans
I agree that the arguments people are making here don’t seem very scope-sensitive, and I’m not about to make a very scope-sensitive argument either. However, it’s worth considering the possibility that the damage to the community and to public trust in EA could be greater than the billions gained. How many potential EAs do we lose out on? How much harder does it become to engage politically and with institutions? We’ve been having a hard time spending money well, at least within longtermism, so the extra donations (including the extra donations that would have come in had they not been caught) were plausibly worth much less per dollar than the marginal grants. Poor public perception will make using our resources effectively harder going forward.
To the people voting ‘disagree’, what OP said above is clearly true. Perhaps people are taking it to imply that the utilitarian course of action here is correct, but I see no such implication.
I think a better forum norm would be for someone to comment spelling out the perceived implication and why they disagree with it, and have other people upvote that.