Maybe we’re talking about different timescales here? I definitely think researchers need to be able to make progress without checking in with the community at every step, and most people won’t do well trying to publish their progress to a broad group, say, weekly. For a typical researcher in an area with poor natural feedback loops I’d guess the right frequency is something like:
1. Weekly: high-context peers (internal colleagues / advisor / manager)
2. Quarterly: medium-context peers (distant internal colleagues / close external colleagues)
3. Yearly: low-context peers and the general world
(I think there are a lot of advantages to writing for these, including being able to go back later, though there are also big advantages to verbal interaction and discussion.)
I think Leverage was primarily short on (3); from the outside I don’t know how much of (2) they were doing and I have the impression they were investing heavily in (1).
Roughly agreed, although I’d want to distinguish between feedback loops and legibility-requirement loops. One is optimised for making research progress; the other is optimised for being paid and respected.
When you’re talking to your weekly colleagues, you have enough shared context and trust that you can ramble about your incomplete intuitions and say “oops, hang on” multiple times in an exposition. And medium-context peers are essential for sanity-checking. This is more about getting actually useful feedback than about paying a tax on speed to keep yourself legible to low-context funders.
Thank you for chatting with me! ^^
(I’m only trying to talk about feedback here as it relates to research progress, not funding etc.)
Ah, but part of my point is that they’re inextricably linked—at least for pre-paradigmatic research that requires creativity and doesn’t have cheap, empirically legible measures of progress. Shorter legibility loops put a heavy tax on the speed of progress, at least for the top of the competence distribution. I can’t make very general claims here given how different research fields and groups are, but I don’t want us to be blind to important considerations.
There are deeper models behind this claim, but one point is that the “legibility loops” you have to obey to receive funding require you to compromise between optimisation criteria, and the invisible costs there are steeper than people realise.