Roughly agreed, although I’d want to distinguish between feedback loops and legibility-requirement loops. One is optimised for making research progress; the other is optimised for being paid and respected.
When you’re talking to your weekly colleagues, you have enough shared context and trust that you can ramble about your incomplete intuitions and say “oops, hang on” multiple times in an exposition. And medium-context peers are essential for sanity-checking. This is more about getting actually useful feedback than about paying a tax on speed to keep yourself legible to low-context funders.
Thank you for chatting with me! ^^
(I’m only trying to talk about feedback here as it relates to research progress, not funding etc.)
Ah, but part of my point is that they’re inextricably linked, at least for pre-paradigmatic research that requires creativity and doesn’t have cheap, empirically legible measures of progress. Shorter legibility loops put a heavy tax on the speed of progress, at least for the top of the competence distribution. I can’t make very general claims here given how different research fields and groups are, but I don’t want us to be blind to important considerations.
There are deeper models behind this claim, but one point is that the “legibility loops” you have to obey to receive funding require you to compromise between optimisation criteria, and there are steeper invisible costs there than people realise.