I’ll add that EAs seem particularly bad at steelmanning criticisms (e.g. if a criticism doesn’t explicitly frame ideas on a spectrum and discuss trade-offs, the comments tend to treat the ideas as black and white and reject the criticism because they don’t like the other extreme of the spectrum).
In the interests of taking your words to heart, I agree that EAs (and literally everyone) are bad at steelmanning criticisms.
However, I think that saying the ‘and literally everyone’ part out loud is important. Usually when people say ‘X is bad at Y’ they mean that X is worse than typical at Y. If I said, ‘Detroit-style pizza is unhealthy,’ then there is a Gricean implicature that Detroit-style pizza is less healthy than other pizzas. Otherwise, I should just say ‘pizza is unhealthy’.
Likewise, when you say ‘EAs seem particularly bad at steelmanning criticisms,’ the Gricean implication is that EAs are worse at this than average. In another thread above, you seemed to imply that you aren’t familiar with communities that are better at incorporating and steelmanning criticism (please correct me if I’m mistaken here).
There is an important difference between ‘everyone is bad at taking criticism’, ‘EAs and everyone else are bad at taking criticism’, and ‘EAs are bad at taking criticism’. The first two statements imply that this is a widespread problem that we’ll have to work hard to address, as the default is getting it wrong. The last implies that we are making a surprising mistake, one that should be comparatively easy to fix (as others are doing better than us).
I don’t generally like steelmanning, for reasons that this blog post does a decent job of summarizing. When folks read what I write, I’d rather that they assume that I thought about using a weaker or stronger version of a statement, and instead went with the strength I did because I believe it to be true.
If an issue is framed as black or white, and I believe it to be grey, then I assume we have a disagreement. I try to assume that if an author decided to frame an issue in a particular way, it’s because that’s what they believe to be true.
Apologies, I don’t mean to imply that EA is unique in getting things wrong / being bad at steelmanning. Agree that the “and everyone else” part is important for clarity.
I think whether steelmanning makes sense depends on your immediate goal when reading things.
If the immediate goal is to improve the accuracy of your beliefs and work out how you can have more impact, then I think steelmanning makes sense.
If the immediate goal is to offer useful feedback to the author and better understand the author’s view, steelmanning isn’t a good idea.
There is a place for both of these goals, and importantly the second goal can be a means to achieving the first goal, but generally I think it makes sense for EAs to prioritise the first goal over the second.
Thanks, I think this is an excellent response and I agree both are important goals.
I’m curious to learn more about why you think that steelmanning is good for improving one’s beliefs/impact. It seems to me that this would be true only if you believe yourself to be much more likely to be correct than the author of a post. Otherwise, trying to understand their original argument seems better than trying to steelman it.
I could see that perhaps you should try to do both (i.e. engage with both the author’s literal intent and whether they are directionally correct)?
[EDIT: I’m particularly curious because my current understanding seems to imply that steelmanning like this would be hubristic, and I think that’s probably not what you’re going for. So almost certainly I’m missing some piece of what you’re saying!]