“Cardinal” and “ordinal” denote an extremely crude way, used in economics, of comparing different utility functions in certain cases. They gesture at a very important issue in EA which everybody who thinks about it encounters: that different people (/different philosophies) have different ideas of the good, which correspond to different utility functions.
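To make the distinction concrete, here is a minimal sketch (not part of the original point; the option names and numbers are made up) of the usual idea: an ordinal utility function only carries ranking information, which survives any order-preserving transformation, while the magnitude information a cardinal utility function claims to carry does not.

```python
# Hypothetical cardinal utilities: the sizes of the gaps are meant to matter.
cardinal_u = {"bednets": 10.0, "cash transfers": 7.0, "deworming": 2.0}

# An order-preserving transformation (squaring positive values)
# changes the gaps but not the ranking.
transformed_u = {k: v ** 2 for k, v in cardinal_u.items()}

def ranking(u):
    """Return options sorted from best to worst under utility function u."""
    return sorted(u, key=u.get, reverse=True)

def gap(u, a, b):
    """Difference in utility between options a and b."""
    return u[a] - u[b]

# Ordinal information survives the transformation...
assert ranking(cardinal_u) == ranking(transformed_u)

# ...but cardinal information does not: the relative sizes of the
# differences between options change.
print(gap(cardinal_u, "bednets", "cash transfers") / gap(cardinal_u, "cash transfers", "deworming"))      # 0.6
print(gap(transformed_u, "bednets", "cash transfers") / gap(transformed_u, "cash transfers", "deworming"))  # ~1.13
```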
But the two terms come from math and are essentially only useful in theoretical arguments. In practical applications they are extremely weak, to the point of being essentially meaningless in any specific case: using them in a real-world analysis is like trying to do engineering by assuming every object is either a cube or a sphere. The real world is much more detailed. A key assumption of people who do EA-style work, which turns out to be correct in most instances, is that most people’s models of utility overlap significantly, so that if you do an altruistic action that you think has high utility, most other people will agree with your assessment to some degree, or at least not assign it negative utility.
Of course this breaks in certain situations, especially around zero-sum games like war. I think it would be interesting to analyze how this can break, but I recommend relying less on jargon like ordinal/cardinal (which I don’t think is very useful here) and more on concrete examples (which can be speculative, but shouldn’t be too far removed from the real world or from the context of altruistic interventions).
The terms come from economics (they were created by Pareto, who pioneered the field of microeconomics...)