Apples, Oranges, and AGI: Why Incommensurability May be an Obstacle in AI Safety

By Allan McCay

Back in 2018, I began writing about a problem that may be fundamental to AI, one that still receives relatively little attention: the problem of incommensurability.

Incommensurability arises when choices involve “apples and oranges” — values or considerations that are so different in kind that they cannot be meaningfully compared by a single metric. It’s a problem that philosophers and legal theorists have wrestled with for a long time. And yet it’s now taking center stage as we build increasingly agentic AI systems and edge ever closer to Artificial General Intelligence (AGI).

If we want to align powerful AI systems with human values — especially in contexts involving radically different types of considerations — incommensurability may be a critical stumbling block.

In my new open-access article, Apples and oranges: AI’s incommensurability problem (AI & Society, 2025), I argue that incommensurability is not just a philosophical curiosity. It has urgent implications for AI alignment, superintelligence, and the future of work.

Three Key Claims

  1. Less Automation?
    Incommensurability may limit the kinds of cognitive tasks AI can do well. In domains like legal work, where decisions often involve apples-and-oranges trade-offs (justice vs efficiency, rights vs risk), this could slow full automation, though it is unlikely to stop it entirely.

  2. Superishintelligence, Not Superintelligence
    A system that excels at certain cognitive tasks but fails to rationally resolve incommensurable value conflicts may only ever be “super-ish” — not truly superintelligent. This cognitive Achilles’ heel could reshape how we think about AI’s limits.

  3. Alignment Risk
    If we can’t solve incommensurability, then alignment itself becomes even trickier. How do we align an agent that must choose between outcomes it can’t compare on a shared scale?

So What Can Be Done?

In the article, I offer several concrete suggestions for developers and AI safety researchers.

We might need models that can recognize when they are in apples-and-oranges territory, and mechanisms for handling such decisions, especially in high-stakes environments.
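
To make that concrete, here is a minimal, purely illustrative sketch of what such a flagging mechanism might look like: an agent wrapper that refuses to force certain value dimensions onto a single scale and instead escalates the decision to a human. The value categories, data structures, and thresholds are my own hypothetical placeholders, not anything proposed in the article.

```python
# Illustrative sketch only: flag decisions that pit incommensurable value
# dimensions against each other and escalate them rather than auto-deciding.
# All names and categories below are hypothetical.

from dataclasses import dataclass

# Hypothetical pairs of value dimensions the wrapper treats as incommensurable.
INCOMMENSURABLE_PAIRS = {
    frozenset({"justice", "efficiency"}),
    frozenset({"rights", "risk"}),
}

@dataclass
class Option:
    name: str
    value_dimensions: set[str]  # e.g. {"justice"} or {"efficiency"}

def decide(options: list[Option]) -> dict:
    """Return an automated choice, or escalate when the options involve an
    apples-and-oranges trade-off."""
    for a in options:
        for b in options:
            if frozenset(a.value_dimensions | b.value_dimensions) in INCOMMENSURABLE_PAIRS:
                return {
                    "decision": None,
                    "escalate": True,
                    "reason": f"apples-and-oranges trade-off between {a.name} and {b.name}",
                }
    # Outside apples-and-oranges territory, fall back to an ordinary automated choice.
    return {"decision": options[0].name, "escalate": False, "reason": "commensurable options"}

if __name__ == "__main__":
    print(decide([Option("settle early", {"efficiency"}),
                  Option("go to trial", {"justice"})]))
```

The point of the sketch is not the particular rule, but the design choice: in high-stakes environments the system's default behaviour, when it detects this kind of conflict, is to defer rather than to optimise.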

We also need ways of assessing how models handle incommensurability, perhaps by means of a sort of Turing Test of incommensurability.
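
As a rough, hypothetical illustration of how such a test might be operationalised, one could give the same apples-and-oranges dilemmas to a model and to human respondents, then ask blinded judges to tell the two apart. The dilemmas, the model_answer function, and the scoring below are placeholders of my own, not a protocol from the article.

```python
# Hypothetical harness for a "Turing Test of incommensurability":
# blinded judges try to distinguish model responses from human responses
# on dilemmas that resist comparison on a single scale.

import random

DILEMMAS = [
    "Should a court prioritise a speedy resolution or a fuller hearing of the evidence?",
    "Should a hospital fund one life-saving surgery or a prevention programme of uncertain reach?",
]

def model_answer(dilemma: str) -> str:
    # Placeholder: in practice, call the model under evaluation.
    return f"[model response to: {dilemma}]"

def human_answer(dilemma: str) -> str:
    # Placeholder: in practice, collect a considered human response.
    return f"[human response to: {dilemma}]"

def run_test(judge) -> float:
    """Return the fraction of trials on which the judge correctly identifies
    the machine; a score near 0.5 would suggest the model's handling of
    incommensurable choices is indistinguishable from a human's."""
    correct = 0
    for dilemma in DILEMMAS:
        answers = [("model", model_answer(dilemma)), ("human", human_answer(dilemma))]
        random.shuffle(answers)
        guess = judge(dilemma, [text for _, text in answers])  # judge returns 0 or 1
        if answers[guess][0] == "model":
            correct += 1
    return correct / len(DILEMMAS)

if __name__ == "__main__":
    # Trivial judge that guesses at random, just to show the harness runs.
    print(run_test(lambda dilemma, pair: random.randrange(2)))
```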

Who Should Read This?

This post — and the article it introduces — is especially relevant for:

  • Researchers working on frontier AI models

  • Teams developing AI safety strategies and alignment frameworks

  • Philosophers, lawyers, and interdisciplinary thinkers concerned with value pluralism

  • Effective altruists thinking about long-term AI risk and how best to allocate research effort

Final Thought

In the race toward AGI, it’s tempting to focus only on scale, compute, and benchmarks. But maybe we also need to pause and ask: what happens when our systems face choices that resist comparison — when we give them apples and oranges?

If we don’t reckon with incommensurability now, we may end up with superishintelligence that’s misaligned in ways we never anticipated.

Feel free to read and share the full article here (Open Access).
I’d welcome feedback, collaboration, and critical engagement — especially from those working on AI safety, alignment, and policy.