I think it depends on what role you’re trying to play in your epistemic community.
If you’re trying to be a maverick,[1] you’re betting on a small chance of producing large advances, so you want to be able to build and iterate on your own independent models without waiting on outside verification or social approval at every step. Psychologically, the most effective way I know to achieve this is to act as if you’re overconfident.[2] If you’re lucky, you could revolutionise the field, but most likely people will just treat you as a crackpot unless you already have very high social status.
On the other hand, if you’re trying to specialise in giving advice, you’ll have a different set of optima on several methodological trade-offs. On my model at least, the impact of a maverick depends mostly on the speed at which they’re able to produce and sift through novel ideas, whereas advice-givers depend much more on their ability to assign accurate probability estimates to ideas that already exist. They have less freedom to tweak their psychology to feel more motivated, since doing so is likely to distort their estimates.
“We consider three different search strategies scientists can adopt for exploring the landscape. In the first, scientists work alone and do not let the discoveries of the community as a whole influence their actions. This is compared with two social research strategies, which we call the follower and maverick strategies. Followers are biased towards what others have already discovered, and we find that pure populations of these scientists do less well than scientists acting independently. However, pure populations of mavericks, who try to avoid research approaches that have already been taken, vastly outperform both of the other strategies.”[3]
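To make the quoted setup concrete, here is a toy sketch of that kind of epistemic-landscape simulation. Everything below is my own invention for illustration, not the model from the cited paper: the landscape, the agent counts, and the exact follower/maverick update rules are all hypothetical choices.

```python
import random

def simulate(strategy, n_agents=20, n_patches=200, steps=10, seed=0):
    """Toy epistemic-landscape simulation (illustrative only, not the
    cited paper's model). Each patch has a hidden significance; agents
    pick a patch each step, and the community 'discovers' its value the
    first time anyone visits it."""
    rng = random.Random(seed)
    significance = [rng.random() for _ in range(n_patches)]
    visited = set()
    discovered = 0.0
    for _ in range(steps):
        for _ in range(n_agents):
            if strategy == "independent" or not visited:
                # ignore the community: pick anywhere at random
                p = rng.randrange(n_patches)
            elif strategy == "follower":
                # biased towards approaches others have already taken:
                # pick near some already-visited patch
                p = (rng.choice(sorted(visited)) + rng.choice([-1, 0, 1])) % n_patches
            else:  # "maverick"
                # avoid approaches already taken, if any remain
                unvisited = [i for i in range(n_patches) if i not in visited]
                p = rng.choice(unvisited) if unvisited else rng.randrange(n_patches)
            if p not in visited:
                visited.add(p)
                discovered += significance[p]
    return discovered

results = {s: simulate(s) for s in ("independent", "follower", "maverick")}
```

In this toy version the maverick population wins for a fairly trivial reason: avoiding visited patches maximises coverage of the landscape, while followers cluster around early discoveries and re-tread the same ground. That triviality is part of the point of the caveat below about how little such simulations show.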
I’m skipping important caveats here, but one aspect is that, as a maverick, I mainly try to increase how much I “alieve” in my own abilities while preserving what I can about the fidelity of my “beliefs”.
I’ll note that simplistic computer simulations of epistemic communities that have been specifically designed to demonstrate an idea are very weak evidence for that idea, and you’re probably better off thinking about it theoretically.