Thank you Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point: I worry that the resources and skills required to adapt these frameworks in order to make them ‘work’ make them poor frameworks to rely on day to day. Expecting human beings to apply these frameworks ‘correctly’ probably gives the forecasting and estimation abilities of humans a little too much credit. For a reductive example, ‘do the most good possible’ is technically a ‘correct’ moral framework, but it really doesn’t ‘work’ well for day-to-day decisions unless you apply a lot of diligent thought to it (often forcing you to rely on ‘sub-frameworks’).
Imagine a 10-year-old child who suddenly and religiously adopts a classical hedonistic utilitarian framework – I have to imagine this would not turn out for the best. Even though their overall framework is probably correct, their understanding of the world hampers their ability to live up to their ideals effectively. They will make decisions that are objectively against their framework, simply because the information they are acting on is incomplete. 10-year-olds with much simpler moral frameworks will most likely be ‘right’ from a utilitarian standpoint far more often than 10-year-olds with a hedonistic utilitarian framework, simply because the latter requires a much more nuanced understanding of the world and of forecasted effects in order to work.
My worry is that all humans (not just 10-year-olds) are bad at forecasting the impacts of their actions, especially when dynamic effects are involved (as they invariably are). With this in mind, let’s pretend that, at most, the average person can semi-accurately estimate the first-order effects of their own actions (which is honestly a stretch already). A first-order effect would be something like “each marginal hour I work creates more utility for the people I donate to than is lost among me and my family”. Under a utilitarian framework, you would go with whatever you estimate to be correct, which in turn (due to your inability to forecast) would be based on only a first-order approximation. Other frameworks that rely less on forecasting (e.g. some version of deontology) can see this first-order approximation and still suggest another action (which may, in turn, create more ‘good’ in the long run).
Going back to the overtime example, if you look past first-order effects within a utilitarian framework, you can still build a case against the whole ‘work overtime’ thing. A second-order effect would be something like “but, if I do this too long, I’ll burn out, thus decreasing my long-term ability to donate”, and a third-order effect would be something like “if I portray sacrificing my wellbeing as a virtue by continuing to do this throughout my life, it could change the views of those who see me as a role model in not-necessarily-positive ways”, and so on. Luckily, as a movement, people have finally started to normalize an acceptance of some of the problematic second-order effects of the ‘work overtime’ thing, but it took a worryingly long time – and it certainly won’t be the only time that our first-order estimates are overturned by more diligent thinking!
So, yes, if you work really hard to figure out second-, third-, etc.-order effects, then versions of utilitarianism can be great – but relying too heavily on them for day-to-day decisions may not work out as well as we’d hope, since figuring out those effects is terribly complicated. In many decisions, relying on a sub-framework that depends less on forecasting ability (e.g. some version of deontology) may be the best way forward. Many EAs realize some version of this, but I think it’s something we should be more explicit about.
To draw this back to the question of whether the moral parliament is basically the same as Expected Moral Value: I would say it’s not. They are similar, but a key difference is the forecasting ability each requires. The moral parliament can easily be used as a mental heuristic in cases where forecasting is impossible or misleading, by focusing on which framework applies best to a given situation, whereas EMV requires quite a bit of forecasting ability and calculation, and, most importantly, is heavily biased against moral frameworks that cannot quantify the expected good of a decision (yes, the discussion of how to deal with ordinal systems does something to mitigate this, but even then a need to forecast effects is implicit in the decision process). Hopefully that helps clarify my position – I probably should have been a bit more formal in my reasoning in my original post, but better late than never, I guess!
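To make the contrast concrete, here is a minimal toy sketch of the two aggregation styles. All numbers are hypothetical, and the ‘parliament’ here is simplified to credence-weighted voting (the real proposal involves bargaining among delegates) – the point is only that EMV needs a cardinal moral-value forecast for every option under every framework, while the parliament only needs each framework to rank its options.

```python
# Credence assigned to each moral framework (hypothetical numbers).
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Expected Moral Value: requires a *cardinal* value forecast for each
# option under each framework -- the quantification step that penalizes
# frameworks which can't produce such numbers.
moral_value = {
    "utilitarianism": {"work_overtime": 10.0, "rest": 4.0},
    "deontology":     {"work_overtime": -2.0, "rest": 5.0},
}

def emv_choice(credences, moral_value):
    """Pick the option with the highest credence-weighted moral value."""
    options = next(iter(moral_value.values())).keys()
    scores = {
        opt: sum(credences[f] * moral_value[f][opt] for f in credences)
        for opt in options
    }
    return max(scores, key=scores.get)

# Simplified moral parliament: each framework only *ranks* the options;
# no forecasted magnitudes are needed, only ordinal judgments.
rankings = {
    "utilitarianism": ["work_overtime", "rest"],
    "deontology":     ["rest", "work_overtime"],
}

def parliament_choice(credences, rankings):
    """Each framework's delegates back its top-ranked option,
    with voting weight proportional to credence."""
    votes = {}
    for framework, ranking in rankings.items():
        top = ranking[0]
        votes[top] = votes.get(top, 0.0) + credences[framework]
    return max(votes, key=votes.get)

print(emv_choice(credences, moral_value))
print(parliament_choice(credences, rankings))
```

Notice the asymmetry in inputs: change deontology’s ranking and the parliament updates immediately, but EMV cannot even run until deontology is forced to emit numbers like `-2.0` – which is exactly the bias against non-quantifying frameworks described above.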