[Link] “Moral understanding and moral illusions”

Abstract here, full text paywalled: https://onlinelibrary.wiley.com/doi/abs/10.1002/tht3.438

Please don’t use Sci-Hub to access the full text for free, and don’t use this trick (a) to easily redirect papers to Sci-Hub.

The abstract, perhaps relevant to how EAs think about allocating their time:

The central claim of this paper is that people who ignore recherché cases might actually understand ethics better than those who focus on them.
In order to establish this claim, I employ a relatively new account of understanding, to the effect that one understands to the extent that one has a representation/process pair that allows one to efficiently compress and decode useful information. I argue that people who ignore odd cases have compressed better, understand better, and so can be just as ethical (if not more so) as those who focus on such cases.
The general idea is that our intuitive moral judgments only imprecisely track the moral truth – the function that maps possible decisions onto moral valuations – and when we try to specify the function precisely we end up overfitting what is basically a straightforward function to accommodate irrelevant data points.
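The overfitting analogy in that last sentence maps onto the familiar curve-fitting picture. Below is a minimal, purely illustrative Python sketch (mine, not the paper's): noisy samples of a simple stand-in "moral truth" function, plus one outlier judgment about an odd case, are fit with a low-degree and a higher-degree polynomial, and the higher-degree fit typically generalizes worse because it bends to accommodate the outlier. Every name and parameter here (`true_valuation`, the degrees, the noise level) is an arbitrary choice for illustration, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the "moral truth": a simple underlying function
# mapping decisions (x) onto moral valuations (y).
def true_valuation(x):
    return 2.0 * x + 1.0

# Intuitive judgments: noisy samples of that function, with one
# extreme judgment about an odd, recherché case thrown in.
x_train = np.linspace(0.0, 1.0, 10)
y_train = true_valuation(x_train) + rng.normal(0.0, 0.3, size=x_train.shape)
y_train[3] += 2.0  # the outlier judgment about the odd case

# Held-out cases used to check how well each fitted "theory" generalizes.
x_test = np.linspace(0.0, 1.0, 200)
y_test = true_valuation(x_test)

for degree in (1, 6):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a "moral theory"
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree-{degree} theory: test error (MSE) = {mse:.3f}")
```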