Steelmanning is typically described as responding to the “strongest” version of an argument you can think of.
Recently, I heard someone describe it in a slightly different way: as responding to the argument that you “agree with the most.”
I like this framing because it signals an extra layer of epistemic humility: I am not a perfect judge of what the best possible argument is for a claim. In fact, reasonable people often disagree on what constitutes a strong argument for a given claim.
This framing also helps avoid a tone of condescension that sometimes comes with steelmanning. I’ve been in a few conversations in which someone says they are “steelmanning” some claim X, but says it in a tone of voice that communicates two things:
The speaker thinks that X is crazy.
The speaker thinks that those who believe X need help coming up with a sane justification for X, because X-believers are either stupid or crazy.
It’s probably fine to have this tone of voice if you’re talking about flat earthers or young earth creationists, and are only “steelmanning” X as a silly intellectual exercise. But if you’re in a serious discussion, framing “steelmanning” as being about the argument you “agree with the most” rather than the “strongest” argument might help signal that you take the other side seriously.
Anyone have thoughts on this? Has this been discussed before?
Yeah. I think Bensinger has some posts about how steelmanning may be corrosive, and how ideological Turing tests (ITTs) are the better-targeted social technology (but that, unfortunately, people think steelmanning is better than it is).
I think this condescension idea you discuss is actually a fatal criticism of steelmanning, and the ITT is how to explore cooperation with alien (to you) minds. The difference is basically that steelmanning draws you toward “make up a guy”: it’s kinda fun to think “what if there was a guy who thought xyz” and then work backwards to a plausible chain of facts/logic. It’s not necessarily related to actually existing minds, so this activity is technically asocial! The ITT has some similar good properties, yet shifts the emphasis to building bridges with a mind that actually, not just plausibly, exists.
I admit that there are numerous values of “outgroup” for which I think my simulation of them in my brain is way more sophisticated and justified than any actual proponent I’ve ever met. This is not good. I might be more charitable and enthusiastic about cooperation under pluralism if I had more of an ITT emphasis.