I think it’s really worth exploring the question of whether moral convergence is even necessarily a good thing.
I’d say it’s a good thing when we find a relatively good moral theory, and bad when we find a relatively bad moral theory.
Even beyond moral convergence, I think we need to call into question whether its antecedent, ‘moral purity’ (i.e. defining and sticking to clear-cut moral principles), is even a good thing.
Not sure what you mean here. Acting morally all the time does not necessarily mean having clear-cut moral principles; we might be particularists, pluralists or intuitionists. And having clear-cut moral principles doesn’t imply that we will only have moral reasons for acting; we might have generally free and self-directed lives which only get restrained occasionally by morality.
But as kbog mentions, many of the commonly cited moral schemas don’t apply in every situation – which is why Nick Bostrom, for example, suggests adopting a moral parliament set-up.
I wouldn’t go so far as to say that they ‘don’t apply,’ rather that it’s not clear what they say. E.g., what utilitarianism tells us about computational life is unclear because we don’t know much about qualia and identity. What Ross’s duties tell us about wildlife antinatalism is unclear because we don’t know how benevolent it is to prevent wildlife from existing. Etc, etc.
I don’t see how the inability to apply moral schemas to certain situations motivates acting under moral uncertainty. After all, if you genuinely couldn’t apply a moral theory in a certain situation, you wouldn’t necessarily need a moral parliament; you could just follow the next-most-likely or next-best theory.
Rather, the motivation for moral uncertainty comes from theories with conflicting judgements where we don’t know which one is correct.
I worry that pushing for convergence and moral clarity may oversimplify the nuance of reality and harm our effectiveness in the long run.
I’m not sure about that. This would have to be better clarified and explained.
In my own life, I’ve been particularly worried about the limits of moral purity in day-to-day moral decisions – which I’ve written about here.
You seem to be primarily concerned with empirical uncertainty. But moral theories aren’t supposed to answer questions like “do things generally work out better if transgressors are punished?” They answer questions about what we ought to achieve, and figuring out how is an empirical question.
While it is true that someone will err when trying to follow almost any moral theory, I’m not sure how this motivates the claim that we should obey non-moral reasons for action or the claim that we shouldn’t try to converge on a single moral theory.
There are a lot of different issues at play here: whether we act according to moral uncertainty is different from whether we act as moral saints; whether we act as moral saints is different from whether our moral principles are demanding; and whether we follow morality is different from what morality tells us to do regarding our closest friends and family.
For a specific example that probably applies to many of us, utilitarianism sometimes suggests that you should work excessive overtime at the expense of your personal relationships – but is this really a good idea? Even beyond self-care, is there a learning aspect (in terms of personal mental growth, as well as helping you understand how to work effectively in a messy world filled with people who aren’t in EA) that we could be missing out on?
In that case, utilitarianism would tell us to foster personal relationships, as they would provide mental growth and help us work effectively.