The first half of your essay (your method of only deceiving when it would still make sense to deceive if people knew you were such a deceiver) looks entirely disjoint from the second half. In what way do the graphs, the reasons for being honest, etc., support this particular mindset that you have chosen? They just give complicated consequentialist reasons for being honest, which seems to be what you were trying to avoid in the first place.
I don’t think the graph makes anything clearer. Are we assuming that you’re holding the benefits of deceit fixed? Because that changes a lot of things. We can’t decide whether or not deceit is a good idea without knowing the expected value of deceit.
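Concretely, the kind of dependence I mean is something like the toy sketch below; the assumptions (the benefit accrues whether or not you’re caught, the cost is paid only on discovery) and the made-up numbers are mine, not the post’s.

```python
# Toy expected-value calculation for a single act of deceit (my assumptions, not
# the post's): the benefit accrues either way, the cost is paid only if discovered.
def expected_value_of_deceit(benefit, p_discovery, cost_of_discovery):
    return benefit - p_discovery * cost_of_discovery

# The same point in the probability/cost plane flips sign depending on the benefit,
# so the graph alone can't tell us whether the deceit is worth it:
print(expected_value_of_deceit(2, 0.3, 20))   # small benefit  -> -4.0
print(expected_value_of_deceit(10, 0.3, 20))  # larger benefit ->  4.0
```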
Why are you marking the typical thought experiment as having a very low cost of discovery? I would think that many typical thought experiments could have a very high cost of discovery: they could involve serious transgressions where large amounts of money, national secrets, lives, etc. are at stake, and where you might be seen as very immoral for not being honest despite the greater good of your actions. So the cost of discovery would be high, yet the probability of discovery would be zero in such a thought experiment. On the other hand, there could be plenty of instances in our lives where we are likely to be discovered yet the cost of discovery is low. For instance, Wikipedia canvassing, or something along those lines.
So I don’t see what this line is doing in the two-dimensional space of possibilities. Why do you assume that all instances of deceit take place along this line?
Maybe you’re saying that if you hold almost everything constant, then people’s reaction to somebody else’s deceit depends on how likely they were to be discovered? But it’s not clear that it’s a large factor. For one thing, people’s emotional attitudes to something like this are complex dispositions, not clear functions, and we’re contradictory and flawed reasoners. For another, I can’t even tell whether we care about someone’s expectation of being discovered in our judgements of those who have committed deceit. Yes, it technically makes sense to deter people more strongly from concealable behavior, but that rationale only holds on a utilitarian principle of punishment, which is far from a close approximation of people’s emotional response to deceit. It’s not a factor in retributive accounts of punishment, nor does it figure into accounts of moral blameworthiness as far as I know.
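(To spell out the deterrence logic I’m gesturing at: on the utilitarian account, keeping the expected penalty constant as behavior gets easier to conceal means scaling the penalty up by the inverse of the detection probability, roughly as in the toy sketch below. The numbers are mine and purely illustrative.)

```python
# Toy deterrence calculation (purely illustrative): to keep the *expected* penalty
# for a transgression constant as it becomes easier to conceal, the penalty applied
# on discovery has to grow like 1 / p_detection.
def penalty_on_discovery(target_expected_penalty, p_detection):
    return target_expected_penalty / p_detection

for p in (0.9, 0.5, 0.1, 0.01):
    print(p, penalty_on_discovery(10, p))  # ~11.1, 20, 100, 1000
```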
I don’t see how you even arrived at the shape of the line. You draw it as upward-sloping, but in your bullet points you give reasons to believe that it would be downward-sloping. You seem to think that these bullets make it more hyperbolic than linear, but I don’t see how you arrived at that conclusion from the bullet points, which quite clearly imply that the line would just slope downward rather than upward. You assume that the bullet points modulate the interior of the line but not the end points, which is just weird to me.
Also, let me clarify how a thought experiment works. It’s not supposed to provide a guide to effective behavior in iterated games or anything like that. A thought experiment works as a philosophical investigation of an underlying principle. The philosophical investigation will leave us with a general principle about ethical value. Then we’ll look at empirical information to figure out how to pursue that goal. Usually, however, people don’t use thought experiments to argue that consequentialists should lie. The argument for being deceitful would just be that it’s what consequentialism demands, so if consequentialism is true, then we ought to lie (sometimes). It doesn’t take a special argument from thought experiments to establish that. So let’s say we agree that we should do whatever produces the best consequences. We’ll conduct an empirical investigation of when and how lying produces the best consequences. To a large extent, that will depend on the expected benefit of lying. And it seems unlikely to me that you will be able to find a universal rule for summarizing the right way to behave.
You draw it as upward-sloping, but in your bullet points you give reasons to believe that it would be downward-sloping.
The y axis is the net cost of being a jerk, which is (presumably) higher if people are more likely to notice. In particular, it’s not the cost of being perceived as a jerk, which (I argue) should be downward-sloping.
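To make that distinction concrete: even if the cost conditional on being noticed falls as the probability of being noticed rises, the net (expected) cost can still rise. A toy illustration, with made-up numbers:

```python
# Net cost of being a jerk vs. cost *if noticed* (made-up numbers, purely
# illustrative). Even if the cost-once-noticed falls as p_notice rises, the net
# cost p_notice * cost_if_noticed can still slope upward in p_notice.
def net_cost(p_notice, cost_if_noticed):
    return p_notice * cost_if_noticed

for p_notice, cost_if_noticed in [(0.1, 100), (0.5, 60), (0.9, 40)]:
    print(p_notice, cost_if_noticed, net_cost(p_notice, cost_if_noticed))
# cost_if_noticed falls (100 -> 60 -> 40) while net cost rises (10 -> 30 -> 36)
```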
(It seems like your other confusions about the graphs come from the same miscommunication, sorry about that.)
Also, let me clarify how a thought experiment works.
This is a post about how I think people ought to act in plausible situations. Thought experiments can cast light on that question to the extent they bear relevant similarities to plausible situations. The relationship between thought experiments and plausible situations becomes relevant if we are trying to make inferences about what we should do in plausible situations.
I agree that there are other philosophical questions that this post does not speak to.
And it seems unlikely to me that you will be able to find a universal rule for summarizing the right way to behave.
I agree that we won’t be able to find universal rules. I tried to give a few arguments for why the correct behavior is less sensitive to context than you might expect, such that a simple approximation can be more robust than you would think. (I don’t seem to have communicated this successfully to you, which is OK. If these aspects of the post are also confusing to others then I may revise them in an attempt to clarify.)
The y axis is the net cost of being a jerk, which is (presumably) higher if people are more likely to notice.
Okay, well the problem here is that this assumes people have transparent knowledge of the probability of being discovered. In reality we can’t infer well at all how likely someone thought they were to get caught. I think we often see rule-breakers as irrational people who just assume that they won’t get caught. So I take issue with the approach of taking the amount of disapproval you will get from being a jerk and whittling it down to such a narrow function based on a few ad hoc principles.
I’d suggest a more basic view of psychology and sociology. Trust is hard to build, and once someone violates trust, the shadow of doing so stays with them for a long time. If you do something shady once and then apologize and make amends for it, you can be forgiven (e.g. GiveWell), but if you do shady things repeatedly while also apologizing repeatedly, then you’re hosed (e.g. Glebgate). So you get two strikes, essentially. Therefore, definitely don’t break people’s trust, but then again, if you already have a reputation for doing so, it’s not as big a deal to keep it up.
But whichever way you explain it, you’re still just doing the consequentialist calculus. And you still have to think through unusual individual situations case by case. Moreover, you’ve still done nothing to actually support the proposed rule in the first half of the post.
This is a post about how I think people ought to act in plausible situations. Thought experiments can cast light on that question to the extent they bear relevant similarities to plausible situations. The relationship between thought experiments and plausible situations becomes relevant if we are trying to make inferences about what we should do in plausible situations.
Ok, but you’re not actually answering the philosophical issue, and people don’t seem to think by way of thought experiments in their applied ethical reasoning, so it’s a bit of an odd way of discussing it. You could just as easily ignore the idea of the thought experiment and simply say “here’s what the consequences of honesty and deceit are.”