So it could make a lot of sense to update your actions towards that funder, more than would be the case if you had all the power.
That makes a lot of sense. However, updating actions toward a funder because of their power is one thing; updating beliefs is another.
So two questions are lurking for me here. You mentioned one: whether deference to OP is “explained more by the fact that OP is powerful than that it is respected” (the true cause of deference). The other is what people tell themselves (and others) about why they defer to OP’s views, and that could even be the more important question from an epistemic standpoint.
If Org A chooses to do X, Y, and Z in significant part because OP is powerful (and it would not have done so otherwise), it’s important for Org A to be eagle-eyed about its reasoning (at least internally). Cognitive dissonance reduction is a fairly powerful force, and it’s tempting to come around to the view that X, Y, and Z are really important when you’re doing them for reasons other than an unbiased evaluation of their merits.
One could argue that we should give ~0 deference to OP’s opinions when updating our viewpoints, even if we alter our actions. These opinions already get great weight in terms of what gets done for obvious practical reasons, so updating our own opinions in that direction may (over?)weight them even more.
Moreover, OP’s views probably influence other people’s views even when they are not consciously given any weight. As noted above, there’s the cognitive dissonance reduction effect. There’s also the likelihood that X, Y, and Z get extra buzz because of OP’s support (e.g., they are discussed more, and people are influenced by seeing organizations that pursue X, Y, and Z achieve results, results enabled in part by their favorable funding position). Filtering these kinds of effects out of one’s nominally independent thinking is difficult. If people defer to what OP thinks on top of experiencing these indirect effects, then it’s reasonable to think they are functionally double-counting OP’s opinion.
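(To make the double-counting worry concrete, here is a toy sketch with made-up numbers, assuming simple linear opinion pooling; all the symbols below are hypothetical and it’s only meant to illustrate the effect, not to model anyone’s actual reasoning. If your genuinely independent credence in some claim is $q$ and OP’s is $p$, deferring with weight $w$ would give
$$q' = (1 - w)\,q + w\,p.$$
But if buzz and funding effects have already pulled your “independent” view to $\tilde{q} = (1 - \alpha)\,q + \alpha\,p$, then deferring on top of that gives
$$q'' = (1 - w)\,\tilde{q} + w\,p = (1 - w)(1 - \alpha)\,q + \bigl(w + \alpha(1 - w)\bigr)\,p,$$
so OP’s effective weight is $w + \alpha(1 - w) > w$. For instance, with $q = 0.3$, $p = 0.8$, $\alpha = 0.4$, and an intended deference weight of $w = 0.5$, OP’s effective weight ends up at $0.7$ rather than $0.5$, and your final credence is $0.65$ instead of the $0.55$ a clean aggregation would give.)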
That roughly sounds right to me.
I think that power/incentives often come first, and then organizations and ecosystems shape their epistemics to some degree in order to be convenient. This makes it quite difficult to tell what causally led to what.
At the same time, I’m similarly suspicious of a lot of epistemics that have nothing to do with OP. It’s obviously not just beliefs that OP likes that will be biased to favor convenience; arguably, a lot of these beliefs just replace other bad beliefs that were biased to favor other potential stakeholders or other bad incentives.
Generally, I’m happy for people and institutions to be quite suspicious of their own worldviews and beliefs, especially ones that are incentivized by their surroundings.
(I previously wrote about some of this in my conveniences post here, though that post didn’t get much attention.)