Or, more realistically, they're optimising for publishing esteemable papers, and since they can't reference non-legible sources of evidence, they'll be less interested in attending to them.
I think this is broadly right.
The main reason academics suffer from "myopic empiricism" is that they're optimising for legibility (an information source is "legible" if it can be easily trusted by others), both in their information intake and output.
I don't think this is quite right.
It seems pretty unclear to me whether the approach academics are taking is actually more legible than the approach Halstead recommends.
And this is whether we use "legible" to mean:
"how easily can others understand why they should trust this (even if they lack context on the speaker, lack a shared worldview, etc.)", or
"how easily can others simply understand how the speaker arrived at the conclusions they've arrived at"
The second sense is similar to Luke Muehlhauser's concept of "reasoning transparency"; I think that post is great, and I'd like it if more people followed its advice.
For example, academics often base their conclusions mostly on statistical methods that almost no laypeople, policymakers, etc. would understand; often use datasets they haven't made public; and sometimes don't report key parts of their methods/analysis (e.g., what questions were used in a survey, how they coded the results, whether they tried other statistical techniques first). Sometimes the main way people will understand how they arrived at their conclusions and why to trust them is "they're academics, so they must know what they're doing". But then we have the replication crisis etc., so that by itself doesn't seem sufficient.
(To be clear, I'm not exactly anti-academia. I published a paper myself, and think academia does produce a lot of value.)
Meanwhile, the sort of reasoning Halstead gives in this post is mostly very easy to understand and assess the reasonableness of. This even applies to potentially assessing Halstead's reasoning as not very good: some commenters disagreed with parts of the reasoning, and it was relatively easy for them to figure out and explain where they disagreed, as the points were made in quite "legible" ways.
(Of course, Halstead probably deliberately chose relatively clear-cut cases, so this might not be a fair comparison.)
This comes back to me being a big fan of reasoning transparency.
That said, I'm not necessarily saying that those academics are just not virtuous and that if I were in their shoes I'd be more virtuous. I understand that the incentives they face push against full reasoning transparency, and that's just an unfortunate situation that's not their fault. Though I do suspect that it'd be good for more academics to (1) increase their reasoning transparency a bit, in ways that don't conflict too much with the incentive structures they face, and to (2) try to advocate for more reasoning transparency by others and for tweaking the incentives. (But this is a quick hot take; I haven't spent a long time thinking about this.)