Hey, I agree something like that might be worth adding.
The way I was trying to handle it is to define ‘common good’ in such a way that different contributions are comparable (e.g. if common good = welfare). However, it’s possible I should add something like “there don’t exist other values that typically outweigh differences in the common good thus defined”.
For instance, you might think that justice is incredibly intrinsically important, such that what you should do is mainly determined by which action is most just, even if there are also large differences in terms of the common good.
I was actually assuming a welfarist approach too.
But even under a welfarist approach, it’s not obvious how to compare campaigning for criminal justice reform in the US to bednet distribution in developing countries.
Perhaps this is not an issue if one accepts longtermism. But then the hidden premise would actually be longtermism.
Hmm, in that case, I’d probably see it as a denial of identifiability.
I do think something along these lines is one of the best counterarguments to EA. I see it as the first step in the cluelessness debate.