predicting future (hopefully wiser and better-informed) values for moral antirealists
Any reason to believe moral realists would be less interested in this empirical work? You seem to assume the goal is to update our values based on those of future people. While this can be a motivation (it is among the motivations in Danaher 2021), we might also worry—independently of whether we are moral realists or antirealists—that the expected future evolution of values doesn’t point towards something wiser and better-informed (since that’s not what evolution is “optimizing” for; relevant examples in this comment), and want to change this trajectory.
Anticipating what could happen seems instrumentally useful for anyone who has long-term goals, no matter their take on meta-ethics, right?
Ah OK, yes that seems right. I think the main context in which I have previously considered the values of future people is in trying to front-run moral progress and get closer to the truth (if it exists) sooner than others, so that is where my mind most naturally went. But yes, if, for instance, we were more in a Moloch-style world where value was slowly disappearing in favour of ruthless efficiency, then indeed it is good to know that before it has happened, so we can try to stop it.
Thanks Oscar!