No worries. It is interesting, though, that you think my comment is a great example when it was meant to be a rebuttal. What I’m trying to say is that I wouldn’t really identify as a ‘utilitarian’ myself, so I don’t think I have a vested interest in this debate. Nonetheless, I don’t think utilitarianism ‘breaks down’ in this scenario, as you seem to be suggesting. I think very poorly formulated versions do, but those are not commonly defended, and with some adjustments utilitarianism can accommodate most of our intuitions very well (including the ones that are relevant here). I’m also not sure what the basis is for the suggestion that utilitarianism works worse when a situation is more unique and there is more context to factor in.
To reiterate, I think the right move is (progressive) adjustment of a theory, plus moral uncertainty (where relevant), both of which seem significantly more rational than particularism. It’s very unclear to me how we could know that it’s ‘impossible or unworkable’ to find a system that would guide our thinking in all situations; indeed, some versions of moral uncertainty already seem to do this pretty well. I would also object to classifying moral uncertainty as an ‘ad-hoc patch’. It wasn’t initially developed to better accommodate our intuitions, but simply because, as a matter of fact, we find ourselves uncertain about which moral theory is correct (or ‘preferable’), just as with empirical uncertainty.
I think it was a good example (I changed the wording from ‘great’ to ‘good’) because my point was about the role of abstract and formal theories of ethics in general, rather than being restricted to utilitarianism itself, and your response was defending abstract theories as the ultimate foundation for ethics. The point (which I am likely communicating badly to someone with different beliefs) is that formal systems have limits and are imperfectly applied by flawed humans with limited time, information, etc. It is all well and good to talk about making adjustments to theories to refine them, and indeed philosophers should do so, but applying them to real life is necessarily an imperfect process.