Sure! So I think most of our conceptual philosophical moral progress until now has been quite poor. Looked at through the lens of the moral consistency reasoning I outlined in point (3), cosmopolitanism, feminism, human rights, animal rights, and even longtermism all seem like slight variations on the same argument (“There are no morally relevant differences between Amy and Bob, so we should treat them equally”).
In contrast, the fact that we are starting to develop cases like population ethics, infinite ethics, and complicated variations of thought experiments (there are infinitely many variations of the trolley problem we could conjure up) that really test the limits of our moral sense and moral intuitions hints that we might need a more systematic, perhaps computerized approach to moral philosophy. I think the likely path is that most conceptual moral progress in the future (in the sense of figuring out new theories and thought experiments) will happen with the assistance of AI systems.
I can’t point to anything very concrete, since I can’t predict the future of moral philosophy in any concrete way, but I think philosophical ethics might become very conceptually advanced and depart heavily from common-sense morality. That gap has been widening since the Enlightenment: challenges to common-sense morality have been slowly accumulating, and we might be at the very beginning of an exponential takeoff.
Of course, we will consider many of the moral systems that AIs develop to be ridiculous. And some might be! But in other cases, we might be too backward, or too tied to our biologically and culturally shaped moral intuitions and taboos, to realize that a system is in fact an advance. For example, the Repugnant Conclusion in population ethics might be true (or the optimal decision in some sense, if you’re a moral anti-realist), even if it goes against many of our moral intuitions.
The real effort will lie in separating the wheat from the chaff. And I’m not sure whether it will be AIs or human moral philosophers doing the work of discriminating good ethical systems and concepts from bad ones.
You need a step beyond this, though: not just that we are coming up with harder moral problems, but that solving those problems is important to future moral progress.
Perhaps a structure as simple as the one that has worked historically will prove just as useful in the future, or, as you point out has happened in the past, wider societal changes (not progress in moral philosophy as an academic discipline) will be the major driver. In either case, all this complex moral philosophy is not the important factor for practical moral progress across society.
Fair! I agree with that, at least up to this point in time.
But I think there could come a time when we have picked most of the “social low-hanging fruit” (cases like the abolition of slavery, universal suffrage, universal education), so there isn’t much easy social progress left to make. At that point, investing in the “moral philosophy low-hanging fruit” will look comparatively more worthwhile.
Some philosophical moral problems that might have great axiological importance, at least under consequentialism/utilitarianism, are population ethics (totalism vs. averagism), our duties towards wild animals, and the moral status of digital beings.
I think figuring them out could matter a great deal. Of course, if we keep treating them as merely interesting philosophical thought experiments and never do anything to promote any outcomes, they might not matter that much. But I’m guessing people in the year 2100 might want to start implementing some of those ideas.