Apples, Oranges, and AGI: Why Incommensurability May be an Obstacle in AI Safety
By Allan McCay
Back in 2018, I began writing about a problem that may be fundamental to AI, and one that still receives relatively little attention: the problem of incommensurability.
Incommensurability arises when choices involve “apples and oranges” — values or considerations that are so different in kind that they cannot be meaningfully compared by a single metric. It’s a problem that philosophers and legal theorists have wrestled with for a long time. And yet it’s now taking center stage as we build increasingly agentic AI systems and edge ever closer to Artificial General Intelligence (AGI).
If we want to align powerful AI systems with human values — especially in contexts involving radically different types of considerations — incommensurability may be a critical stumbling block.
In my new open-access article, Apples and oranges: AI’s incommensurability problem (AI & Society, 2025), I argue that incommensurability is not just a philosophical curiosity. It has urgent implications for AI alignment, superintelligence, and the future of work.
Three Key Claims
Less Automation?
Incommensurability may limit the kinds of cognitive tasks AI can do well. In domains like legal work, where decisions often involve apples-and-oranges trade-offs (justice vs efficiency, rights vs risk), this could slow full automation, though not stop it entirely.
Superishintelligence, Not Superintelligence
A system that excels at certain cognitive tasks but fails to rationally resolve incommensurable value conflicts may only ever be “super-ish”, not truly superintelligent. This cognitive Achilles’ heel could reshape how we think about AI’s limits.
Alignment Risk
If we can’t solve incommensurability, then alignment itself becomes even trickier. How do we align an agent that must choose between outcomes it can’t compare on a shared scale?
So What Can Be Done?
In the article, I offer several concrete suggestions for developers and AI safety researchers.
We might need models that recognize when they are in apples-and-oranges territory, and mechanisms to address such decisions — especially in high-stakes environments.
We also need ways of assessing how models perform on incommensurability, perhaps by way of a sort of Turing Test for incommensurability.
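To make the first of these suggestions slightly more concrete, here is a minimal sketch in Python (my own illustration, not a mechanism from the article) of a flag-and-escalate check. The value dimensions, the COMMENSURABLE_PAIRS table, and the Option and decide names are all hypothetical; the point is simply that a system needs an explicit record of which dimensions it is entitled to aggregate, and a routing rule for high-stakes choices where no such record exists.

```python
from dataclasses import dataclass

# Hypothetical declaration of which value dimensions share a scale. This is an
# assumption supplied by the modeller, not something the code can discover.
COMMENSURABLE_PAIRS = {
    frozenset({"cost", "time"}),  # e.g. both treated as reducible to money
}


@dataclass
class Option:
    name: str
    scores: dict[str, float]  # value dimension -> score on that dimension's own scale


def is_apples_and_oranges(a: Option, b: Option) -> bool:
    """True if the choice involves dimensions with no declared common scale."""
    dims = sorted(set(a.scores) | set(b.scores))
    return any(
        frozenset({x, y}) not in COMMENSURABLE_PAIRS
        for i, x in enumerate(dims)
        for y in dims[i + 1:]
    )


def decide(a: Option, b: Option, high_stakes: bool) -> str:
    if high_stakes and is_apples_and_oranges(a, b):
        return "escalate to human review"
    # Naive fallback: aggregate scores only when a common scale is assumed.
    return a.name if sum(a.scores.values()) >= sum(b.scores.values()) else b.name


settle = Option("settle", {"justice": 0.4, "efficiency": 0.9})
litigate = Option("litigate", {"justice": 0.8, "efficiency": 0.3})
print(decide(settle, litigate, high_stakes=True))  # -> escalate to human review
```

A benchmark for the “Turing Test of incommensurability” idea could be seeded the same way: labelled scenarios where the expected behaviour is to escalate rather than to aggregate.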
Who Should Read This?
This post — and the article it introduces — is especially relevant for:
Researchers working on frontier AI models
Teams developing AI safety strategies and alignment frameworks
Philosophers, lawyers, and interdisciplinary thinkers concerned with value pluralism
Effective altruists thinking about long-term AI risk and how best to allocate research effort
Final Thought
In the race toward AGI, it’s tempting to focus only on scale, compute, and benchmarks. But maybe we also need to pause and ask: what happens when our systems face choices that resist comparison — when we give them apples and oranges?
If we don’t reckon with incommensurability now, we may end up with superishintelligence that’s misaligned in ways we never anticipated.
Feel free to read and share the full article here (Open Access).
I’d welcome feedback, collaboration, and critical engagement — especially from those working on AI safety, alignment, and policy.
From a philosophy standpoint, I find incommensurability pretty implausible (at least as something to act on) for a couple of reasons:
If two values are incommensurable, then for every action you take there is some probability that you are making a trade-off between those values. Assuming some version of expected value theory is correct (on which a probability of obtaining a value is equivalent to some lesser amount of that value), this would mean that every action you take involves a choice between two incommensurable goods. That seems to lock you into a constant state of decision paralysis, which, I believe, should make incommensurable goods a non-viable option. (See this paper for more discussion.)
Imagine you have some credence that two things are incommensurable (so that, on that hypothesis, you have no reason to act either way). Even so, you should still have some non-zero credence that these values/actions are commensurable, since that is a contingent proposition. If the credence in incommensurability gives you no reason to act and the credence in commensurability does give you reason to act, then incommensurability is practically irrelevant: your actions should be informed entirely by the case conditional on commensurability.
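To illustrate that second point with made-up numbers (a toy Python sketch of my own, not anything from the paper): if the incommensurable branch contributes no reason either way, then any non-zero credence in commensurability settles the ranking.

```python
# Toy numbers, purely for illustration: choice-worthiness of two actions
# conditional on the values being commensurable on some shared scale.
value_if_commensurable = {"action_A": 10.0, "action_B": 7.0}

p_commensurable = 0.2  # any non-zero credence will do


def expected_reason(action: str) -> float:
    # Conditional on incommensurability there is, by hypothesis, no reason
    # favouring either action, so that branch contributes nothing to the comparison.
    return p_commensurable * value_if_commensurable[action] + (1 - p_commensurable) * 0.0


best = max(value_if_commensurable, key=expected_reason)
print(best)  # -> action_A, for every p_commensurable > 0
```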
Happy to chat more about this if you think you’d find that helpful.
I meant to add that the first of the papers (2018), in the Journal of Ethics and Emerging Technologies, was a response to Yuval Noah Harari’s book Homo Deus. See here:
The Value of Consciousness and Free Will in a Technological Dystopia