> You draw it as upward-sloping, but in your bullet points you give reasons to believe that it would be downward-sloping.
The y axis is the cost of being a jerk, which is (presumably) higher if people are more likely to notice. In particular, it’s not the cost of being perceived as a jerk, which (I argue) should be downward-sloping.
(It seems like your other confusions about the graphs come from the same miscommunication, sorry about that.)
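For concreteness, the distinction between the two curves can be sketched as a toy expected-value model (my own illustration; the penalty value is an arbitrary assumption, not something from the post):

```python
# Toy model of the y axis under discussion (an illustration, not the
# post's actual model). The expected cost of being a jerk rises with
# the probability that anyone notices -- i.e. the curve slopes upward.
def cost_of_being_a_jerk(p_noticed: float, penalty_if_noticed: float = 10.0) -> float:
    """Expected cost of acting like a jerk, given the chance of being noticed."""
    return p_noticed * penalty_if_noticed
```

The separate quantity, the cost conditional on actually being perceived as a jerk, is just `penalty_if_noticed` here, and nothing in this sketch forces it to slope the same way.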
> Also, let me clarify how a thought experiment works.
This is a post about how I think people ought to act in plausible situations. Thought experiments can cast light on that question to the extent that they bear relevant similarities to plausible situations, and that relationship is what matters when we are trying to infer what we should do in those situations.
I agree that there are other philosophical questions that this post does not speak to.
> And it seems unlikely to me that you will be able to find a universal rule for summarizing the right way to behave.
I agree that we won’t be able to find universal rules. I tried to give a few arguments for why the correct behavior is less sensitive to context than you might expect, such that a simple approximation can be more robust than you would think. (I don’t seem to have successfully communicated to you, which is OK. If these aspects of the post are also confusing to others then I may revise them in an attempt to clarify.)
> The y axis is the net cost of being a jerk, which is (presumably) higher if people are more likely to notice.
Okay, well the problem here is that this assumes people have transparent knowledge of the probability of being discovered. In reality, we can’t reliably infer how likely someone thought it was that they would get caught; we often see rule-breakers as irrational people who simply assume they won’t be caught. So I take issue with taking the amount of disapproval you will get for being a jerk and whittling it down to such a narrow function based on a few ad hoc principles.
I’d suggest a more basic view of the psychology and sociology. Trust is hard to build, and once someone violates it, the shadow of doing so stays with them for a long time. If you do one shady thing, then apologize and make amends, you can be forgiven (e.g. GiveWell); but if you do shady things repeatedly while also apologizing repeatedly, you’re hosed (e.g. Glebgate). So you get two strikes, essentially. Therefore, definitely don’t break trust; but then again, if you already have the reputation for it, it’s not as big a deal to keep it up.
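The “two strikes” dynamic above can be sketched as a crude state model (my own toy illustration; the thresholds are assumptions, not claims from the comment):

```python
# Crude "two strikes" reputation model: one trust violation with amends
# is forgivable; repeated violations are not, even with repeated apologies.
def standing(violations: int, made_amends: bool) -> str:
    if violations == 0:
        return "trusted"
    if violations == 1 and made_amends:
        return "forgiven"  # roughly the GiveWell case described above
    return "hosed"         # roughly the Glebgate case described above
```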
But whichever way you explain it, you’re still just doing the consequentialist calculus, and you still have to think through unusual individual situations case by case. Moreover, you’ve still done nothing to actually support the rule proposed in the first half of the post.
> This is a post about how I think people ought to act in plausible situations. Thought experiments can cast light on that question to the extent that they bear relevant similarities to plausible situations, and that relationship is what matters when we are trying to infer what we should do in those situations.
OK, but you’re not actually answering the philosophical issue, and people don’t seem to reason by way of thought experiment in their applied ethical reasoning, so it’s a bit of an odd way of discussing it. You could just as easily drop the thought experiment and simply say “here’s what the consequences of honesty and deceit are.”