Ah yes, I had thought of things like your first point, but your other two points raise good, longer-term considerations that I wasn't thinking enough about.
But I suppose one thing I didn't make clear here is how tailored this was to just representing, without distortion, how I thought about this scenario. I've engaged with the different forms of utilitarianism, and with other schools of thought as well, and when doing this in an academic setting I generally come away unconvinced by many of them (rule utilitarianism and related approaches included). So absent the sort of framework you mention in the first part of your second paragraph, it's hard for me to choose any one thing and stick with it.
But perhaps you would reply: "Sure, you might not find rule utilitarianism totally convincing when you sit down and look at the arguments, but it seems like you don't find anything totally convincing, and you are still an actor making decisions out in the world. Further, as this post evidences, you're using frameworks I'd argue are worse, like some flavor of classical utilitarianism here, which shows that despite whatever intellectual commitments you may have, you're still endorsing an approach. So what I'm saying is: maybe try to employ rule utilitarianism the way you deployed classical utilitarianism here, as the temporary voice of the consequentialist-amenable side of yourself. It might help you avoid some of these tricky knots and some bad longer-term decisions (since your current framework biases against noticing them). And who knows, maybe with this change you'll find a bit less tension between the deontologist and the consequentialist inside you."
Does that seem like the sort of principle you would endorse?
That principle sounds about right! I do endorse thinking very hard about consequences sometimes, though, when you're deciding the things likely to have the most impact, like what your career should be.