The '80% utilitarian' approach you're talking about makes more sense if you think of it as threshold deontology: you're utilitarian most of the time, but hold strict ethical boundaries for extreme cases (e.g. don't murder someone, even if that would somehow produce massive moral benefit). I think most EAs implicitly operate like this.
But I agree, most fields would benefit from less jargon.