Nice post! I agree moral errors aren’t only a worry for moral realists. But they do seem especially concerning for realists, as the moral truth may be very hard to discover, even for superintelligences. For antirealists, the first 100 years of a long reflection may get you most of the way towards where your views would eventually converge after a billion years of reflecting on your values. But the first 100 years of a long reflection are less guaranteed to get you close to the realist moral truth. So a 100-year reflection might be, say, 90% likely to avoid massive moral errors for antirealists, but perhaps only 40% likely to do so for realists.
--
Often when there are long lists like this, I find it useful for my conceptual understanding to try to create some structure to fit each item into. Here is my attempt.
A moral error is making a moral decision that is quite suboptimal. This can happen if:
The agent has correct moral views, but makes an error of judgement/rationality/empirics/decision theory and so chooses badly by their own lights.
The agent is adequately rational, but has incorrect views about ethics, i.e. about the mapping from {possible universe trajectories} to {impartial value}. This could take the form of (sketched more formally below the list):
A mistake in picking out who the moral patients are, {universe trajectory} --> {moral patients}. (animals, digital beings)
A mistake in assigning lifetime wellbeing scores to each moral patient, {moral patients} --> {list of lifetime wellbeings}. (theories of wellbeing, happiness vs suffering)
A mistake in aggregating correct wellbeing scores over the correct list of moral patients into the overall impartial value of the universe {list of lifetime wellbeings + possibly other relevant facts} --> {impartial value}. (population ethics, diversity, interestingness)
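To make the structure explicit, here is a quick formalisation in my own notation (not the original post's): the ethical mapping in the second case factors into three sub-mappings, and the three kinds of mistake just listed are errors in the three factors respectively (ignoring the "other relevant facts" input for simplicity).

```latex
% My own notation, just to make the decomposition above explicit.
\[
  P : \{\text{universe trajectories}\} \to \{\text{moral patients}\}
\]
\[
  W : \{\text{moral patients}\} \to \{\text{lists of lifetime wellbeings}\}
\]
\[
  A : \{\text{lists of lifetime wellbeings}\} \to \{\text{impartial value}\}
\]
\[
  \text{impartial value of trajectory } t \;=\; (A \circ W \circ P)(t)
\]
```

An error in P is a patienthood mistake, an error in W is a wellbeing mistake, and an error in A is an aggregation mistake; a failure of rationality (the first case) is instead a failure to act well on the value this composition assigns.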
--
Some minor points:
I think the fact that people wouldn’t take bets involving near-certain death and a 1-in-a-billion chance of a long amazing life is better evidence that people are risk averse than evidence that lifetime wellbeing is bounded above.
As currently written, choosing Variety over Homogeneity would only be a small moral error, not a massive one, as epsilon is small.