This post is a good example of the risks of tying yourself in knots with consequentialist reasoning. There are a lot of potential consequences of leaving a review beyond "it makes people less likely to eat at this particular restaurant, and they might eat at a non-vegan restaurant instead". You get into this some, but three plausible effects of artificially inflated reviews would be:
Non-vegans looking for high-quality food go to the restaurant, get vegan food, think "even highly rated vegan food is terrible", don't become vegan.
Actually good vegan restaurants have trouble distinguishing themselves, because "helpful" vegans rate every vegan restaurant five stars regardless of quality, and so the normal forces that push up the quality of food don't work as well. Now the food tastes bad and fewer people are willing to sustain the sacrifice of being vegan.
People notice this and think "if vegans are lying to us about how good the food is, are they also lying to us about the health impacts?" Overall trust in vegans (and utilitarians) decreases.
We need a morality for human beings with limited ability to know the impacts of their actions, and reasoning through the full impact of every decision is not possible. You'll generally do a lot more to make the world better if you take a more "rule utilitarian" approach, at least in low-stakes situations like restaurant reviewing. Promoting truth and accurate information is almost always the right thing to do.
[EDIT: expanded this into a post]
Ah yes, I thought of other things like your first point, but your other two points bring up good, longer-term effects that I was not thinking about enough.
But I suppose one thing I didn't make clear here is how tailored this was to just representing, without distortion, how I thought about this scenario. I've engaged with the different forms of utilitarianism, and I've engaged with other schools of thought as well, and when doing this in an academic setting I generally come away unconvinced by many of them (rule utilitarianism and related approaches included). So absent the sort of framework you mention in the first part of your second paragraph, it's hard for me to choose any one thing and stick with it.
But perhaps you would reply: "Sure, you might not find rule utilitarianism totally convincing when you sit down and look at the arguments, but it seems like you don't find anything totally convincing, and you are still an actor making decisions out in the world. Further, as this post evidences, you're using frameworks I'd argue are worse, like some flavor of classical utilitarianism here, which shows that despite whatever intellectual commitments you may have, you're still endorsing an approach. So what I'm saying is: maybe try to employ rule utilitarianism the way you deployed classical utilitarianism here, as the temporary voice of the consequentialist-amenable side of yourself, because it might help you avoid some of these tricky knots, and some bad longer-term decisions (since your current framework biases against noticing these). And who knows, maybe with this change you'll find a bit less tension between the deontologist and the consequentialist inside you."
Does that seem like the sort of principle you would endorse?
That principle sounds about right! I do endorse thinking very hard about consequences sometimes, though, when you're deciding the things likely to have the most impact, like what your career should be.