I don’t think self-interest is relevant here if you believe that it is possible for an agent to have an altruistic goal.
Also, as with all words, “agentic” has different meanings in different contexts. My comment was based on its use when referring to people’s behaviour/psychology, which is not an exact science, so the words involved aren’t being used in very precise scientific ways :)
Is this trying to make a directional claim? E.g. that people (in the EA community? in idealistic communities?) should on average be less afraid of / more accepting of being morally compromised? (On first read, I assume not; it seems like just a descriptive post about the phenomenon.)
FWIW, I think it’s worth thinking about the two forms of “compromise” separately: being associated with something you end up finding morally bad, versus directly doing something you end up finding morally bad. I think it’s easier and more worthwhile to focus on avoiding the latter, but overall I’m not sure I’ve seen a strong tendency for people to overdo either of these.