Great post! The main reason academics suffer from “myopic empiricism” is that they’re optimising for legibility (an information source is “legible” if it can be easily trusted by others), both in their information intake and output. Or, more realistically, they’re optimising for publishing esteemable papers, and since they can’t reference non-legible sources of evidence, they’ll be less interested in attending to them. One way to think about it is that “myopic academics” are trapped in an information bubble that repels non-legible information.
And I think this is really important. We need a source of highly legible data, and academic journals provide exactly that (uh, in theory). It only starts being a big problem once those papers start offering conclusions about the real world while refusing to leave their legibility bubble. And that sums up all the failures you’ve listed in the article.
The moral of the story is this: scientists really should optimise for legibility in their data production, and this is a good thing, but if they’re going to offer real-world advice, they better be able to step out of their legibility bubble.
Or, more realistically, they’re optimising for publishing esteemable papers, and since they can’t reference non-legible sources of evidence, they’ll be less interested in attending to them.
I think this is broadly right.
The main reason academics suffer from “myopic empiricism” is that they’re optimising for legibility (an information source is “legible” if it can be easily trusted by others), both in their information intake and output.
I don’t think this is quite right.
It seems pretty unclear to me whether the approach academics are taking is actually more legible than the approach Halstead recommends.
And this is whether we use “legible” to mean:
“how easily can others understand why they should trust this (even if they lack context on the speaker, lack a shared worldview, etc.)”, or
“how easily can others simply understand how the speaker arrived at the conclusions they’ve arrived at”
The second sense is similar to Luke Muehlhauser’s concept of “reasoning transparency”; I think that post is great, and I’d like it if more people followed its advice.
For example, academics often base their conclusions mostly on statistical methods that almost no laypeople, policymakers, etc. would understand; often use datasets they haven’t made public; and sometimes don’t report key parts of their methods/analysis (e.g., what questions were used in a survey, how they coded the results, whether they tried other statistical techniques first). Sometimes the main way people can understand how academics arrived at their conclusions, and why to trust them, is “they’re academics, so they must know what they’re doing”—but given the replication crisis etc., that by itself doesn’t seem sufficient.
(To be clear, I’m not exactly anti-academia. I published a paper myself, and think academia does produce a lot of value.)
Meanwhile, the sort of reasoning Halstead gives in this post is mostly very easy to understand and assess the reasonableness of. This even applies to potentially assessing Halstead’s reasoning as not very good—some commenters disagreed with parts of the reasoning, and it was relatively easy for them to figure out and explain where they disagreed, as the points were made in quite “legible” ways.
(Of course, Halstead probably deliberately chose relatively clear-cut cases, so this might not be a fair comparison.)
This comes back to me being a big fan of reasoning transparency.
That said, I’m not necessarily saying that those academics are just not virtuous and that if I were in their shoes I’d be more virtuous—I understand that the incentives they face push against full reasoning transparency, and that’s just an unfortunate situation that’s not their fault. Though I do suspect that it’d be good for more academics to (1) increase their reasoning transparency a bit, in ways that don’t conflict too much with the incentive structures they face, and (2) try to advocate for more reasoning transparency by others and for tweaking the incentives. (But this is a quick hot take; I haven’t spent a long time thinking about this.)