I think I’m probably sympathetic to your claims in “EA is open to some kinds of critique, but not to others”, but I think it would be helpful for there to be some discussion around Scott Alexander’s post on EA criticism. In it, he argued that “EA is open to some kinds of critique, but not to others” was an inevitable “narrative beat”, and that “shallow” criticisms which actually focus on the more actionable implications hit closer to home and are more valuable.
I was primed to dismiss your claims on the basis of Scott Alexander’s arguments, but on closer consideration I suspect that might be too quick.
I feel it would be easier for me to judge this if someone (not necessarily the authors of this post) provided some examples of the sorts of deep critiques (e.g. by pointing to examples of deep critiques made of things other than EA). The examples of deep critiques given in the post did help with this, but it’s easier to triangulate what’s really meant when there are more examples.
I also remember Scott’s post, and already when reading it I thought the “next narrative beat” argument was bad.
The reason it is the next narrative beat is that it is almost always true.
If I say that the sun will rise tomorrow, and you respond, “but you expect the sun to rise every day, so you have to give a specific argument for this day in particular”, that doesn’t make sense.
My current model is that powerful EAs are mostly not open to critique at all: they pretend to welcome it for PR reasons but mainly ignore it. As long as your critique is polite enough, everyone involved will pretend to appreciate it, but if you cross the line into hurting anyone’s feelings (a line which is individual and hard to predict), there will be social and professional consequences.
My model might be completely wrong. It’s hard to know given the opaqueness around EA power. I have offered critique, and there has never been any dialogue or noticeable effect.
My own observation has been that people are open to intellectual discussion (“your discounting formula is off for X reasons”) but not to more concrete practical criticism, or to criticism that talks about specific individuals.
That was also Scott Alexander’s point if I understood it correctly.
Here are some differences I noticed between the experience of reading the more specific criminal justice criticism vs. the more paradigmatic structures-and-individualism criticism:
Before reading the specific criticism, I wouldn’t have been able to predict its conclusion. Was this program more effective than other programs? Less effective? But before reading the paradigmatic criticism, I could predict its conclusion pretty well. “We are all more interconnected than we think” is a typical piece of Profound Wisdom, and nobody ever says the opposite.
I can name several people who gain/lose status from the specific criticism, and I expect those people to be upset, push back, or otherwise have strong opinions. I can’t think of anyone like that for the paradigmatic criticism.
The specific criticism carries an obvious conclusion: cancel this one program! (in this case it had already been cancelled, so maybe the conclusion is more like: reform various processes so that this happens sooner in future cases). The paradigmatic criticism is less actionable.
This isn’t to say that paradigmatic criticisms are always bad and useless, and specific criticism is always good.
But the specific claim at the end of Part I above—that the people in power prefer specific to paradigmatic criticism, because it’s less challenging—seems to me the exact opposite of the truth.
I don’t think that is correct, because the orthodoxy has changed as a result of powerful EAs changing their minds: switching to the high-fidelity model, away from earning to give, towards longtermism, and towards more policy.
I think he’s arguing that you should have a little “fire alarm” in your head for when you’re regurgitating a narrative. Even if it’s 95% correct, that act of regurgitation is a time when you’re thinking less critically and it’s a perfect opportunity for error to slip through. Catching those errors has sufficiently high value that it’s worth taking the time to stop and assess, even if 19 out of 20 times you decide your first thought was correct.
As I and another said elsewhere, I think Holden’s is an example. And I think Will questioning the hinge of history would qualify as a deep critique of the prevailing view in X risk.
I think it’s more or less true that “EA is open to some kinds of critique, but not to others”, but I don’t think the two categories exactly line up with deep vs. shallow critique.