I edited my original comment to point out my specific disagreements. I’m now going to say a selection of plausibly false-but-interesting things, and there’s much more nuance here that I won’t explicitly cover because that’d take too long. It’s definitely going to seem very wrong at first glance without the nuance that communicates the intended domain.
I feel like I’m in a somewhat similar situation to Leverage, but only in the sense that having to publish frequently would hinder my effectiveness. It would make it easier for others to see the value of my work, but in my own estimation that trades off against maximising actual value.
This isn’t generally the case for most research, and I might be delusional to think it’s the case for my own (I’d estimate maybe a 10% chance), but I should be following the gradient of what I expect will be most useful. It would be selfish of me to do the legible thing motivated just by my wish for people to respect me.
The thing I’m arguing for is not that people like me shouldn’t publish at all; it’s that we should be very reluctant to punish gambling sailors for a shortage of signals. They’ll get our attention once they can demonstrate their product.
The thing about having to frequently communicate your results is that it incentivises you to adopt research strategies that let you publish frequently. This usually means forward-chaining to incremental progress without much strategic guidance. Plus, if you get into the habit of spending your intrinsic motivation on distilling your progress for the community, your brain shifts to searching for ideas that fit into the community, instead of aiming your search at the highest-priority confusion points in your own head.
To be an effective explorer, you have to get to the point where you can start to iterate on top of your own ideas. If you timidly “check in” with the community every time you think you have a novel thought, before you let yourself stand on it in order to explore further down the branch, then 1) you’re wasting their time, and 2) no one’s ever gonna stray far from home.
When you go from—
A) “huh, I wonder how this thing works, and how it fits into other things I have models of.”
to
B) “hmm, the community seems to behave as if X is true, but I have a suspicion that ¬X, so I should research it and provide them with information they find valuable.”
—then your patterns for generating thoughts will mostly be rewarded based on your prediction of whether the community is likely to be persuaded by those thoughts. This makes it hard to have intrinsic motivation to explore anything that doesn’t immediately seem relevant to the community.
And while B is still reasonably aligned with producing value as long as the community is roughly as good at evaluating the claims as you are, it breaks down for researchers who are much better than their expected audience at what they specialise in. If the most competent researchers have brains that optimise for communal persuasiveness, they’re wasting their potential when they could be searching for ideas that optimise for persuading themselves—a much harder criterion to meet, given that they’re more competent.
I think it’s unhealthy to constantly try, within your own brain, to “advance the communal frontier”. Sure, that could ultimately be the goal, but if you’re only able to greedily and myopically optimise for specifically that at every step, then you’re like a chess player who compulsively looks only for checkmate patterns, unable to see forks that merely win material or positional advantage.
How frequently do you have to make your progress legible to measurable or consensus criteria? How lenient is your legibility loop?
I’m not saying it’s easy to even start feeling intrinsic motivation for building models in your own mind, based on your own criteria for success, but being stuck in a short legibility loop certainly doesn’t help.
If you’ve learned to play an instrument, or studied painting under a mentor, you may have heard the advice “you need to learn to trust your own sense of aesthetics.” Think of the kid who, while learning the piano, expectantly looks to their parent after every key they press. They’re not learning to listen. It’s sort of like a GAN whose discriminator is trusted so infrequently that it never learns anything. Training to both generate and discriminate within yourself, using your own observations, will be pretty embarrassing at first, but you’re running a much shorter feedback loop.
Maybe we’re talking about different timescales here? I definitely think researchers need to be able to make progress without checking in with the community at every step, and most people won’t do well trying to publish their progress to a broad group, say, weekly. For a typical researcher in an area with poor natural feedback loops, I’d guess the right frequency is something like:
(1) Weekly: high-context peers (internal colleagues / advisor / manager)
(2) Quarterly: medium-context peers (distant internal colleagues / close external colleagues)
(3) Yearly: low-context peers and the general world
(I think there are a lot of advantages to writing for these, including being able to go back later, though there are also big advantages to verbal interaction and discussion.)
I think Leverage was primarily short on (3); from the outside I don’t know how much of (2) they were doing and I have the impression they were investing heavily in (1).
Roughly agreed, although I’d want to distinguish between feedback loops and legibility-requirement loops: one is optimised for making research progress, the other for being paid and respected.
When you’re talking to your weekly colleagues, you have enough shared context and trust that you can ramble about your incomplete intuitions and say “oops, hang on” multiple times in an exposition. And medium-context peers are essential for sanity-checking. This is more about actually useful feedback than about paying a tax on speed to keep yourself legible to low-context funders.
Thank you for chatting with me! ^^
(I’m only trying to talk about feedback here as it relates to research progress, not funding etc.)
Ah, but part of my point is that they’re inextricably linked—at least for pre-paradigmatic research that requires creativity and doesn’t have cheap, empirically legible measures of progress. Shorter legibility loops put a heavy tax on the speed of progress, at least for the top of the competence distribution. I can’t make very general claims here given how different research fields and groups are, but I don’t want us to be blind to important considerations.
There are deeper models behind this claim, but one point is that the “legibility loops” you have to obey to receive funding require you to compromise between optimisation criteria, and there are steeper invisible costs there than people realise.