The y axis is the net cost of being a jerk, which is (presumably) higher if people are more likely to notice.
Okay, well the problem here is that it assumes people have transparent knowledge of the probability of being discovered. In reality, we can’t reliably infer how likely someone thought it was that they’d get caught. I think we often see rule breakers as irrational people who simply assume they won’t get caught. So I take issue with the approach of taking the amount of disapproval you’ll get for being a jerk and whittling it down to such a narrow function based on a few ad hoc principles.
I’d suggest a more basic view of psychology and sociology. Trust is hard to build, and once someone violates it, the shadow of that violation stays with them for a long time. If you do one shady thing, then apologize and make amends, you can be forgiven (e.g. GiveWell); but if you do shady things repeatedly while also apologizing repeatedly, you’re hosed (e.g. Glebgate). So you get two strikes, essentially. Therefore, definitely don’t break people’s trust; but then again, if you already have a reputation for it, continuing to do so costs you less.
But whichever way you explain it, you’re still just doing the consequentialist calculus. And you still have to reason through individual situations that are unusual. Moreover, you’ve still done nothing to actually support the rule proposed in the first half of the post.
This is a post about how I think people ought to act in plausible situations. Thought experiments can cast light on that question to the extent that they bear relevant similarities to plausible situations, so the relationship between the two matters when we try to draw practical conclusions from them.
Ok, but you’re not actually answering the philosophical issue, and people don’t seem to reason by way of thought experiment in their applied ethical thinking, so it’s a bit of an odd way of framing the discussion. You could just as easily drop the thought experiment and simply say, “here’s what the consequences of honesty and deceit are.”