This is a great set of guidelines for integrity. Hopefully more grantmakers and other key individuals will take this point of view.
I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics. I think your motivated-reasoning critique of EA is the strongest argument that current EA priorities do not accurately represent the most impactful causes available. I still think EA is the best bet available for maximizing my expected impact, but I hold some baseline uncertainty that many EA beliefs are incorrect, because they’re the product of imperfect processes with plenty of biases and failure modes. It’s a very hard topic to discuss, but I think it’s worth exploring (a) how to limit our epistemic risks and (b) how to discount our reasoning in light of those risks.
I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics.
I’m confused by this. My inside-view guess is that the distortion from CoIs is pretty small relative to other factors that can distort epistemics. And for this particular problem, I don’t have a strong, coherent outside view, because it’s hard to construct a reasonable reference class for what communities like ours, with similar levels of CoIs, might look like.
Here’s my general stance on integrity, which I think is a superset of issues with CoI.
As noted by ofer, I also think investments are structurally different from grants.