Well, there’s a whole field of moral philosophy trying to do exactly this, and its practitioners haven’t been able to agree in a couple of thousand years of trying. They probably won’t finish at least until the whole field of metaethics sorts out some of its own issues, but those debates have been going on for a few centuries themselves without resolution. So things don’t look too good!
There have certainly been paradigm shifts and trends in philosophy which we can point to for optimism. E.g., philosophers (whether religious or not) no longer consider deities to be a direct source of moral judgements and duties. Moral positions are successively pushed and refined by critiques, so a moral position formed these days—while still contentious—is at least prepared to defend itself against attacks from a wide range of directions and rival positions. And moral theories have generally grown more nuanced and complex over the history of philosophy.
Still, opinions are split and show no signs of resolving. The field has certainly come to agree on certain issues, like slavery and deviant sexuality, but new issues seem to crop up every time one of those gets settled, so that’s not much consolation. Social psychology and experimental philosophy also don’t do anything to resolve the core disputes about morality, even though some people apparently think they do.
My suggestion is that we worry less about solving moral philosophy and worry more about solving the actual core issues at stake—how the species should continue, what sorts of lives are worth living, etc. Those are much more tractable things to attack, and philosophical theories often agree on applied judgements even when the theories themselves differ. Moreover, many of our commonly held moral theories—having been developed in long-past social and historical contexts—don’t actually provide clear guidance on how we should resolve some of these new futuristic debates.
Yes—thank you for posting this! I think it’s really worth exploring the question of whether moral convergence is even necessarily a good thing. Even beyond moral convergence, I think we need to question whether its antecedent, ‘moral purity’ (i.e. defining and sticking to clear-cut moral principles), is even a good thing either.
I don’t have a philosophy background, so please let me know if this take is way off course, but, like kbog mentions, many of the commonly cited moral schemas don’t apply in every situation – which is why Nick Bostrom, for example, suggests adopting a moral parliament set-up. I worry that pushing for convergence and moral clarity may oversimplify the nuance of reality, and may harm our effectiveness in the long run.
In my own life, I’ve been particularly worried about the limits of moral purity in day-to-day moral decisions – which I’ve written about here. While it’s easy to applaud folk who rigorously keep to a strict moral code, I wonder whether that’s really the best way forward. For a specific example that probably applies to many of us, utilitarianism sometimes suggests that you should work excessive overtime at the expense of your personal relationships – but is this really a good idea? Even beyond self-care, is there a learning aspect (in terms of personal mental growth, as well as helping you to understand how to work effectively in a messy world filled with people who aren’t in EA) that we could be missing out on?
Thanks for sharing the moral parliament set-up, Rick. It looks good, but it’s incredibly similar to MacAskill’s Expected Moral Value methodology!
I disagree a little with you, though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital, etc.). If you come up against a new situation, you should still be able to work out which action is most ethical (in this case, which action maximises pleasure and minimises pain). The answer may not be immediately clear, especially in tricky scenarios, and perhaps we can’t be 100% certain about which action is best, but that doesn’t mean there isn’t an answer.
Regarding your last point about the downsides of taking utilitarianism to its conclusion, I think that (in theory at least) utilitarianism should take these into account. If applying utilitarianism harms your personal relationships and mental growth and leads to a worse outcome overall, you’re just not applying utilitarianism correctly.
Sometimes the best way to be a utilitarian is to pretend not to be a utilitarian, and there are heaps of examples of this in everyday life (e.g. not donating 100% of your income because you may burn out, or because you may set a standard that no one else wants to try to reach, etc.).
Thank you, Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point, I worry that the resources and skills required to adapt these frameworks and make them ‘work’ make them poor frameworks to rely on for day-to-day decisions. Expecting human beings to apply these frameworks ‘correctly’ probably gives humans’ forecasting and estimation abilities a little too much credit. For a reductive example, ‘do the most good possible’ is technically a ‘correct’ moral framework, but it doesn’t ‘work’ well for day-to-day decisions unless you apply a lot of diligent thought to it (often forcing you to rely on ‘sub-frameworks’).
Imagine a 10-year-old child who suddenly and religiously adopts a classical hedonistic utilitarian framework – I have to imagine that this would not turn out for the best. Even though their overall framework is probably correct, their understanding of the world hampers their ability to live up to their ideals effectively. They will make decisions that are objectively against their own framework, simply because the information they are acting on is incomplete. 10-year-olds with much simpler moral frameworks will most likely be ‘right’ from a utilitarian standpoint much more often than 10-year-olds with a hedonistic utilitarian framework, simply because the latter requires a much more nuanced understanding of the world and of forecasted effects in order to work.
My worry is that all humans (not just 10-year-olds) are bad at forecasting the impacts of their actions, especially when dynamic effects are involved (as they invariably are). With this in mind, let’s pretend that, at most, the average person can semi-accurately estimate the first-order effects of their actions (which is honestly a stretch already). A first-order effect would be something like “each marginal hour I work creates more utility for the people I donate to than is lost among me and my family”. Under a utilitarian framework, you would go with whatever you estimate to be correct, which in turn (due to your inability to forecast) would be based on only a first-order approximation. Other frameworks that are less dependent on forecasting (e.g. some version of deontology) can see this first-order approximation and still suggest another action (which may, in turn, create more ‘good’ in the long run).
Going back to the overtime example, if you look past first-order effects within a utilitarian framework, you can still build a case against the whole ‘work overtime’ thing. A second-order effect would be something like “but, if I do this too long, I’ll burn out, thus decreasing my long-term ability to donate”, and a third-order effect would be something like “if I portray sacrificing my wellbeing as a virtue by continuing to do this throughout my life, it could change the views of those who see me as a role model in not-necessarily positive ways”, and so on. Luckily, as a movement, people have finally started to normalize an acceptance of some of the problematic second-order effects of the ‘work overtime’ thing, but it took a worryingly long time—and it certainly won’t be the only time that our first-order estimates will be overturned by more diligent thinking!
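To make that concrete, here’s a deliberately crude numerical sketch of the overtime question – every number in it is invented purely for illustration:

```python
# Toy sketch with invented numbers: how higher-order effects can flip a
# first-order utilitarian estimate of working one extra hour of overtime.

# First-order: value created for donation recipients per extra hour worked,
# minus the utility lost by me and my family.
value_to_recipients = 1.0
cost_to_me_and_family = 0.6
first_order = value_to_recipients - cost_to_me_and_family      # +0.4 -> "work more"

# Second-order: a small chance of burnout that would reduce my long-term
# ability to earn and donate.
burnout_risk = 0.1
long_term_donations_lost = 5.0
second_order = -burnout_risk * long_term_donations_lost        # -0.5

# Third-order: portraying self-sacrifice as a virtue may shift the norms of
# people who treat me as a role model, in not-necessarily-positive ways.
role_model_effect = -0.2

estimate_first_order_only = first_order
estimate_with_higher_orders = first_order + second_order + role_model_effect

print(f"first-order only:        {estimate_first_order_only:+.2f}")    # comes out positive
print(f"with higher-order terms: {estimate_with_higher_orders:+.2f}")  # comes out negative
```

The point isn’t the particular numbers; it’s that the sign of the answer depends on terms most of us are in no position to estimate.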
So, yes, if you work really hard to figure out second-, third-, and higher-order effects, then versions of utilitarianism can be great – but relying too heavily on them for day-to-day decisions may not work out as well as we’d hope, since figuring out those effects is terribly complicated. In many decisions, relying on a sub-framework that depends less on forecasting ability (e.g. some version of deontology) may be the best way forward. Many EAs realize some version of this, but I think it’s something we should be more explicit about.
To draw it back to the question of whether the moral parliament is basically the same as Expected Moral Value, I would say that it’s not. They are similar, but a key difference is the forecasting ability each requires: a moral parliament can easily be used as a mental heuristic in cases where forecasting is impossible or misleading, by focusing on which framework applies best to a given situation, whereas EMV requires quite a bit of forecasting ability and calculation and, most importantly, is incredibly biased against moral frameworks that are unable to quantify the expected good to come out of a decision (yes, the discussion of how to deal with ordinal systems does something to mitigate this, but even then a need to forecast effects is implicit in the decision process). Hopefully that, plus the rough sketch below, helps clarify my position – I probably should have been a bit more formal in my reasoning in my original post, but better late than never, I guess!
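Here’s that sketch. Everything in it – the credences, the scores, and the decision rules – is invented for illustration, and it isn’t Bostrom’s or MacAskill’s actual formulation (a real parliament, for one, involves bargaining between delegates rather than a single weighted vote):

```python
# Toy contrast (invented credences and scores) between an Expected Moral
# Value calculation and a crude moral-parliament-style weighted vote.

credences = {"utilitarianism": 0.6, "deontology": 0.4}

# EMV needs every theory to assign a comparable numeric choice-worthiness
# to every option, i.e. it leans heavily on quantified forecasts.
choice_worthiness = {
    "utilitarianism": {"work_overtime": 2.0, "spend_evening_with_family": 1.0},
    "deontology":     {"work_overtime": -1.0, "spend_evening_with_family": 3.0},
}

def expected_moral_value(option):
    # Credence-weighted sum of each theory's score for the option.
    return sum(credences[t] * choice_worthiness[t][option] for t in credences)

def parliament_choice():
    # A delegate only needs to know which option its theory favours; each
    # theory casts its credence as "seats" for its top-ranked option.
    votes = {}
    for theory, seats in credences.items():
        favourite = max(choice_worthiness[theory], key=choice_worthiness[theory].get)
        votes[favourite] = votes.get(favourite, 0.0) + seats
    return max(votes, key=votes.get)

for option in ("work_overtime", "spend_evening_with_family"):
    print(option, round(expected_moral_value(option), 2))
print("parliament picks:", parliament_choice())
```

The EMV calculation can’t get started without those cardinal scores (and the forecasts behind them), whereas the parliament only needs each framework’s ranking of the options – that’s the asymmetry I was trying to point at.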
I think it’s really worth exploring the question of whether moral convergence is even necessarily a good thing.
I’d say it’s a good thing when we converge on a relatively good moral theory, and a bad thing when we converge on a relatively bad one.
Even beyond moral convergence, I think we need to question whether its antecedent, ‘moral purity’ (i.e. defining and sticking to clear-cut moral principles), is even a good thing either.
Not sure what you mean here. Acting morally all the time does not necessarily mean having clear-cut moral principles; we might be particularists, pluralists or intuitionists. And having clear-cut moral principles doesn’t imply that we will only have moral reasons for acting; we might have generally free and self-directed lives which only get restrained occasionally by morality.
but, like kbog mentions, many of the commonly cited moral schemas don’t apply in every situation – which is why Nick Bostrom, for example, suggests adopting a moral parliament set-up.
I wouldn’t go so far as to say that they ‘don’t apply,’ rather that it’s not clear what they say. E.g., what utilitarianism tells us about computational life is unclear because we don’t know much about qualia and identity. What Ross’s duties tell us about wildlife antinatalism is unclear because we don’t know how benevolent it is to prevent wildlife from existing. Etc., etc.
I don’t see how the inability to apply moral schemas to certain situations is a motivation for acting under moral uncertainty. After all, if you actually couldn’t apply a moral theory in a certain situation, you wouldn’t necessarily need a moral parliament—you could just follow the next-most-likely or next-best theory.
Rather, the motivation for moral uncertainty comes from cases where theories deliver conflicting judgements and we don’t know which one is correct.
I worry that pushing for convergence and moral clarity may oversimplify the nuance of reality, and may harm our effectiveness in the long run.
I’m not sure about that. This would have to be better clarified and explained.
In my own life, I’ve been particularly worried about the limits of moral purity in day-to-day moral decisions – which I’ve written about here.
You seem to be primarily concerned with empirical uncertainty. But moral theories aren’t supposed to answer questions like “do things generally work out better if transgressors are punished.” They answer questions about what we ought to achieve, and figuring out how is an empirical question.
While it is true that someone will err when trying to follow almost any moral theory, I’m not sure how this motivates the claim that we should obey non-moral reasons for action or the claim that we shouldn’t try to converge on a single moral theory.
There are a lot of different issues at play here; whether we act according to moral uncertainty is different from whether we act as moral saints; whether we act as moral saints is different from whether our moral principles are demanding; whether we follow morality is different from what morality tells us to do regarding our closest friends and family.
For a specific example that probably applies to many of us, utilitarianism sometimes suggests that you should work excessive overtime at the expense of your personal relationships – but is this really a good idea? Even beyond self-care, is there a learning aspect (in terms of personal mental growth, as well as helping you to understand how to work effectively in a messy world filled with people who aren’t in EA) that we could be missing out on?
In that case, utilitarianism would tell us to foster personal relationships, as they would provide mental growth and help us work effectively.