For two objections to global desire theory — a theory that would, in effect, take for granted answers to questions much like the indifference and creation tests in order to compare a life to the neutral level — see "Life Satisfaction and its Discontents" by Michael Plant:
The first problem is automaximisation. Suppose you want to have maximally high well-being. According to subjectivism, you get to decide what makes your life go well. Therefore, if you decide that your life is going well then, hey presto, it really is. This result is absurd: how our lives go cannot be entirely determined by cognitive whimsy.
The second problem is too few subjects. Many entities, such as non-human animals or cognitively disabled humans, are not capable of making judgements about how their life is going overall. According to subjectivism, such entities are not welfare-subjects, that is, they cannot have well-being. Hence, if you set your pet dog on fire, that would not be bad for him because, on this view, nothing can be good (or bad) for him. Subjectivism therefore unacceptably entails there are too few welfare subjects.
Of course, if automaximisation is a problem, then so is directly manipulating brains. You could still have subjects make cognitive judgements weighing up the good and bad in their lives, but you would get to choose the weights for them, and perhaps even what they count as good or bad. This is basically wireheading.