Some thoughts:
- You can think of the common-sense moral intuition (like Josh's) as a heuristic rather than "a pure value" (whatever that means): it subtly ties together a value with empirical beliefs about how to achieve that value.
- Discarding this intuition might mean you are discarding empirical knowledge without realizing it.
- Even if the heuristic is a "pure value," I'm not sure why that value isn't allowed to simply discount things more the farther away they are from you. If so, valuing the people in your cases is consistent with not valuing humans in the very far future.
- And if it is a "pure value," I suppose you might say that some kind of "time egalitarianism" intuition fights against the "future people don't matter as much" intuition. I'm curious where the "time egalitarianism" intuition comes from in this case, and whether it's really an intuition or more of an abstract belief.
- And if it is a "pure value," perhaps the intuition shouldn't be discarded, or at least not discarded completely, since agents with utility functions generally don't want those utility functions changed (though this has questionable relevance).
I think it is a heuristic rather than a pure value. My point in my conversation with Josh was to disentangle these two things (see Footnote 1!). I probably should be clearer that these examples are Move 1 in a two-move case for longtermism: first, show that the normative "don't care about future people" position leads to conclusions you wouldn't endorse; second, argue about the empirical disagreement over our ability to benefit future people, which is what actually lies at the heart of the issue.
I think I understood that's what you were doing at the time of writing, and my comment was mostly about bullets 2-5. E.g., yes, "don't care about future people at all" leads to conclusions you wouldn't endorse, but what about discounting future people with some discount rate? I think that's what the common-sense intuition does, and maybe it should be thought of as a "pure value" rather than a heuristic. I wouldn't really know how to answer that question, though; maybe it's dissolvable and/or confused.
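For a concrete sense of what that kind of discounting does, here is a minimal sketch (the 1% annual rate and the `discount_weight` helper are purely illustrative assumptions, not anything proposed above): a small constant rate leaves nearby generations with most of their weight while driving the weight on very-far-future people to essentially zero.

```python
# Minimal sketch of constant exponential discounting (illustrative numbers only).
def discount_weight(years_from_now: float, annual_rate: float = 0.01) -> float:
    """Moral weight assigned to someone living `years_from_now` years in the future."""
    return (1 - annual_rate) ** years_from_now

for years in (10, 100, 1_000, 10_000):
    print(f"{years:>6} years: weight ~ {discount_weight(years):.2e}")
# Roughly: 10 -> 9.0e-01, 100 -> 3.7e-01, 1000 -> 4.3e-05, 10000 -> 2.3e-44
```

So this sort of discounting doesn't literally assign zero value to future people, but past a few centuries it behaves almost as if it did, which is part of why it can be hard to tell apart from the "don't care about future people at all" position in these cases.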