Thank you, this is an excellent post. This style of transparent writing can often come across as very 'EA' and gets made fun of for its idiosyncrasies, but I think it's a tremendous strength of our community.
I think it’s sometimes a strength and sometimes a weakness. It’s useful for communicating certain kinds of ideas, and not others. Contrary to Lizka, I personally wouldn’t want to see it as part of the core values of EA, but just as one available tool.
Can you say more about when it’s a weakness and what kinds of ideas it’s not useful for communicating?
Disclaimer: I wrote this while tired, so I'm not entirely sure it's coherent or relevant.
It's less productive for communicating ideas that are less analytic and reductionist, or more subjective. One type of example is ideas that function more like an ideology, a [theory of something], or a comprehensive worldview. In such cases, trying to apply this kind of reductionist approach is bound to miss important facets, important connections between them, or a meaningful big picture.
Specific questions I think this would be ill-suited for:
Should you be altruistic?
What gives life meaning?
Should the EA movement have a democratic governance structure? (Should it be centralised at all?)
Is capitalism the right framework for EA and for society in general?
I should note that I'm a mathematician, so for me it's usually a comfortable way of communicating. But for less STEM-y or analytical people, who can communicate ideas that I can't, I think it might be limiting.
One benefit of reasoning transparency I’ve personally appreciated is that it helps the other party get a better sense of how much to update based on claims made.
I also think that clearly indicating the key cruxes of an argument and the level of supporting evidence can help hold us accountable for the claims we make and reduce miscommunication: how strong a statement can be justified by the evidence I have? Am I aiming to explain, or to persuade? How might my statements be misinterpreted as stronger or weaker than they are? (One example that comes to mind is the Bay of Pigs invasion, which involved a miscommunication between JFK and the Joint Chiefs of Staff over what "fair chance" meant.)
It's not clear to me on a quick read that the questions you've listed are worse off under reasoning transparency, or that actions like "clearly indicating the key cruxes and the level of support you have for the claims that hold your argument together" would lead to more missed facets, missed connections, or a less meaningful big picture.
For example, if I made a claim about whether "capitalism is the right framework for EA/society in general", would you find it less productive to know whether I had done Nobel Prize-winning research on this topic, had run a single-country survey of 100 people, or was speaking just from non-expert personal experience?
If I made a claim about “What gives life meaning”, would you find it less productive if I laid out the various assumptions that I am making, or the most important considerations behind my key takeaways?
(Commenting in personal capacity etc)
I agree with bruce that the specific questions you mentioned could benefit from some reasoning transparency. In general I think this is one of the best EA innovations/practices, although I agree that it’s only one tool among many and not always best suited to the task at hand.
Here are some potential downsides I see:
Not well suited for communicating worldviews, as you said
Can effectively reduce exploration
Increases costs of writing