I sympathise with your NB at the beginning, but to be honest, in the absence of specific examples or wider data it’s hard for me to ground this criticism or test its validity. Ironically, it’s almost as if this post is too fundamental rather than action-guiding for me.
That doesn’t mean you’re wrong per se, but this post reads more as a hypothesis than an argument.
I agree that in the absence of specific examples the criticism is hard to understand. But I would go further and argue that the NB at the beginning is fundamentally misguided and that well-meaning and constructive criticism of EA orgs or people should very rarely be obscured to make it seem less antagonistic.
Came here to comment this. It’s the kind of paradigmatic criticism that Scott Alexander talks about, which everyone can nod and agree with when it’s an abstraction.
Right now it’s impossible to argue with this post: who doesn’t want research to be better? Even positive examples, with specific pointers to what they did well, would help.