If yefreitor is saying what I planned to say, the simpler version is just "there's nothing 'irrational' about having a utility function that says 'no experience matters every Tuesday.'" It certainly wouldn't seem to be a good instrumental value, but if that's your terminal value function, that's what it is.
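For concreteness, here's a minimal sketch (my own illustration, not anything from Parfit or the thread) of such a utility function. It is a perfectly well-defined function over outcomes; it just weights Tuesday experiences at zero:

```python
# Toy "Future-Tuesday-Indifferent" utility function: sum the valence of
# all experiences, except those occurring on a Tuesday, which count for
# nothing. Nothing formally incoherent here -- it yields a consistent,
# transitive preference ordering over outcomes.

def fti_utility(experiences):
    """experiences: list of (day, valence) pairs; valence is signed welfare."""
    return sum(v for day, v in experiences if day != "Tuesday")

# Agony on Tuesday (-100) scores the same as nothing happening at all,
# so this agent prefers it to even mild pain on Wednesday (-1).
assert fti_utility([("Tuesday", -100)]) == 0
assert fti_utility([("Tuesday", -100)]) > fti_utility([("Wednesday", -1)])
```

The point is only that the function is mathematically well-behaved; whether acting on it is *rational* is exactly what's at issue below.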
They would recognize that the day of the week does not matter to the badness of their pains.
No, they literally have no negative (or positive) experience on Tuesdays, unless the experience on Tuesdays affects their experience on different days.
it would begin pursuing what is objectively worth pursuing.
??? "Objectively worth pursuing"? Where did that come from? Certainly not from a Tuesday-impartial utility function, which is the only "objective" thing I'm seeing here. A quick ctrl+F for "objective" didn't turn up any place where you clearly explain this.
I agree one could have that value in theory. My claim is that if one were very rational, one would not. Note that, contrary to your indication, they do have experiences on Tuesday, and their suffering feels just as bad on a Tuesday as on any other day; they merely have a higher-order indifference to future suffering on Tuesdays. I claim that what is objectively worth pursuing is indifferent to the day of the week.
If yefreitor is saying what I planned to say, the simpler version is just “there’s nothing ‘irrational’ about having a utility function that says ‘no experience matters every Tuesday.’”
Parfit’s position (and mine) is that Future Tuesday Indifference is manifestly irrational. But this has little to do with what sort of preferences sufficiently intelligent agents can have.
No, they literally have no negative (or positive) experience on Tuesdays
No, that’s explicitly ruled out in the setup. They have experiences on Tuesday, those experiences have the usual valence—they just fail to act accordingly. Here’s the full context from Reasons and Persons:
Consider next this imaginary case. A certain hedonist cares greatly about the quality of his future experiences. With one exception, he cares equally about all the parts of his future. The exception is that he has Future-Tuesday-Indifference. Throughout every Tuesday he cares in the normal way about what is happening to him. But he never cares about possible pains or pleasures on a future Tuesday. Thus he would choose a painful operation on the following Tuesday rather than a much less painful operation on the following Wednesday. This choice would not be the result of any false beliefs. This man knows that the operation will be much more painful if it is on Tuesday. Nor does he have false beliefs about personal identity. He agrees that it will be just as much him who will be suffering on Tuesday. Nor does he have false beliefs about time. He knows that Tuesday is merely part of a conventional calendar, with an arbitrary name taken from a false religion. Nor has he any other beliefs that might help to justify his indifference to pain on future Tuesdays. This indifference is a bare fact. When he is planning his future, it is simply true that he always prefers the prospect of great suffering on a Tuesday to the mildest pain on any other day.
This man’s pattern of concern is irrational. Why does he prefer agony on Tuesday to mild pain on any other day? Simply because the agony will be on a Tuesday. This is no reason. If someone must choose between suffering agony on Tuesday or mild pain on Wednesday, the fact that the agony will be on a Tuesday is no reason for preferring it. Preferring the worse of two pains, for no reason, is irrational.
I think our disagreement is that I expect superintelligences would be rational and avoid FTI, for the same reason they'd be epistemically rational and good at reasoning in general.
I agree with everything you’ve said after the sentence “This is not what Parfit is arguing.” But how does that conflict with the things I said?