I think that even the association between functional agency and preferences in a morally valuable sense is an open philosophical question that I am not happy taking as a given.
Regardless, it seems like our underlying crux is that we assign utility to different things. I somewhat object to your claim that your version of this is utilitarianism, while notions of assigning utility that privilege things humans value are not.
I agree that our main point of disagreement seems to be about what we ultimately care about.
For what it’s worth, I didn’t mean to suggest in my post that my moral perspective is inherently superior to others. For example, my argument is fully compatible with someone being a deontologist. My goal was simply to articulate what I saw standard impartial utilitarianism as saying in this context, and to point out how many people’s arguments for AI pause don’t seem to track what standard impartial utilitarianism actually says. However, this only matters insofar as one adheres to that specific moral framework.
As a matter of terminology, I do think that the way I’m using the words “impartial utilitarianism” aligns more strongly with common usage in academic philosophy, given the emphasis that many utilitarians have placed on antispeciesist principles. However, even if you think I’m wrong on the grounds of terminology, I don’t think this disagreement subtracts much from the substance of my post as I’m simply talking about the implications of a common moral theory (regardless of whatever we choose to call it).
Thanks for clarifying. In that case, I think that we broadly agree.