This post is a good example of the risks of tying yourself in knots with consequentialist reasoning. There are a lot of potential consequences of leaving a review beyond “it makes people less likely to eat at this particular restaurant, and they might eat at a non-vegan restaurant instead”. You get into this some, but three plausible effects of artificially inflated reviews would be:
Non-vegans looking for high-quality food go to the restaurant, get vegan food, think “even highly rated vegan food is terrible”, don’t become vegan.
Actually good vegan restaurants have trouble distinguishing themselves, because “helpful” vegans rate everywhere five stars regardless of quality, and so the normal forces that push up the quality of food don’t work as well. Now the food tastes bad and fewer people are willing to sustain the sacrifice of being vegan.
People notice this and think “if vegans are lying to us about how good the food is, are they also lying to us about the health impacts?” Overall trust in vegans (and utilitarians) decreases.
We need a morality for human beings with limited ability to know the impacts of their actions, and reasoning through the full impact of every decision is not possible. You’ll generally do a lot more to make the world better if you take a more “rule utilitarian” approach, at least in low stakes situations like restaurant reviewing. Promoting truth and accurate information is almost always the right thing to do.
Ah yes, I'd thought of things like your first point, but your other two points raise good longer-term effects that I wasn't thinking about enough.
But I suppose one thing I didn't make clear here is how tailored this was to just representing, without distortion, how I thought about this scenario. I've engaged with the different forms of utilitarianism, and with other schools of thought as well, and when doing this in an academic setting I generally come away unconvinced by many of them (rule utilitarianism and related approaches included). So absent the sort of framework you mention in the first part of your second paragraph, it's hard for me to choose any one thing and stick with it.
But perhaps you would reply: "Sure, you might not find rule utilitarianism totally convincing when you sit down and look at the arguments, but it seems like you don't find anything totally convincing, and you're still an actor making decisions out in the world. Further, as this post evidences, you're using frameworks I'd argue are worse, like some flavor of classical utilitarianism here, which shows that despite whatever intellectual commitments you may have, you're still endorsing an approach. So what I'm saying is: maybe try employing rule utilitarianism the way you deployed classical utilitarianism here, as the temporary voice of the consequentialist-amenable side of yourself. It might help you avoid some of these tricky knots, and some bad longer-term decisions (since your current framework biases against noticing them). And who knows, maybe with this change you'll find a bit less tension between the deontologist and the consequentialist inside you."
Does that seem like the sort of principle you would endorse?
That principle sounds about right! I do endorse thinking very hard about consequences sometimes, though, when you’re deciding things likely to have the most impact, like what your career should be in.
[EDIT: expanded this into a post]