Aaaaahhhh, that’s it, “preference utilitarianism” is the concept I was missing! Or rather, I assumed that any utilitarianism is preference utilitarianism, in that it leaves the definition of what’s “good” or “bad” to the agents involved. Apparently that’s not the case?
Only now I’m even more confused. What is the “welfare” you’re referring to, if it isn’t the achievement of an agent’s goals? Saying things like “joy” or “happiness” or “maximum utility” doesn’t really clarify anything when we’re talking about non-human agents. How do you define utility in non-preference utilitarianism?