Could you say a little more about how you think this paper might be relevant to EA?
There’s a point about scientific bias to be made, perhaps, but the methods of genetics research don’t seem especially similar to the kinds of research I associate with EA (development RCTs, theorizing about future technology, trying to understand the nature of animal consciousness, etc.).
It seems relevant in the same way that the 80,000 Hours replicability quiz is relevant.
Big parts of our institutional knowledge-generating machinery are broken. Knowing which parts are broken, and to what extent, seems important for cause prioritization and epistemic hygiene.
Also see Open Phil’s science policy & infrastructure cause portfolio.
Also (shameless self-promotion): see Let’s Fund’s Better Science crowdfunding campaign, which tries to tackle the replication crisis and improve how science is done.