Rational Overconfidence: Why “Good Judgment” Fails Strategy
TL;DR: Calibration is essential for Observers but dangerous for Actors. In decision-dependent environments, “accurate” forecasting creates self-fulfilling prophecies of failure. To maximize expected value, you often need to strategically break your own calibration.
Effective Altruism prioritizes calibration, using metrics like Brier scores to ensure our internal maps match the territory. This “Scout Mindset” is the gold standard for Observers—those predicting events they cannot influence, like elections or weather patterns. However, this framework fails for Actors, such as founders or wartime leaders, whose actions directly alter the probability of the outcome. For an Observer, a prediction is a statistic; for an Actor, it is an intervention.
This distinction creates a trap in high-stakes scenarios. Consider a leader undertaking a project with a base rate of success of 20%. A passive forecaster would accurately predict this low probability and act conservatively. Consequently, allies and investors would sense this low confidence, withhold resources, and the actual probability of success would collapse to zero. By striving for informational accuracy, the leader guarantees failure. To avoid this self-fulfilling prophecy, a Rational Actor must often signal absolute confidence, diverging from the “true” probability not out of delusion, but as a structural necessity to coordinate others.
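To make the feedback loop concrete, here is a minimal sketch; the commitment threshold and response curve are illustrative assumptions, not outputs of the formal model.

```python
# Toy model of a decision-dependent outcome: the leader's signaled
# confidence drives allies' resource commitment, which in turn moves
# the true probability of success. All numbers are illustrative.

def p_success(signal, base_rate=0.20):
    resources = signal  # allies commit in proportion to signaled confidence
    if resources < 0.5:
        return 0.0      # below a commitment threshold, the project collapses
    # Above the threshold, commitment lifts the odds off the base rate.
    return base_rate + (1 - base_rate) * 0.6 * (resources - 0.5) / 0.5

print(p_success(0.20))  # 0.0   -- reporting the accurate forecast guarantees failure
print(p_success(0.95))  # ~0.63 -- signaling near-certainty resets the odds
```

The discontinuity is the whole point: the calibrated signal sits below the coordination threshold, so reporting it destroys the very probability it reports.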
Escaping this low-probability equilibrium requires more than just increased effort, which typically yields diminishing returns. It requires non-linear moves that shift the parameters of the game itself. Our model identifies Strategic Surprise as a necessary “shock”—a discontinuous action that appears irrational on a standard cost-benefit curve but is required to reset the odds. Similarly, actors must employ Theatrical Indignation, authenticating their signal by amplifying a genuine principle until it becomes reputationally costly to back down. This “burning of the boats” forces the probability of success upward by removing the option of failure.
The critical distinction lies in the elasticity of the outcome. If a result is insensitive to effort—a “Quagmire”—then overconfidence is merely waste, and one should remain a Scout. But if the outcome depends on coordination, morale, or funding, then “accurate” forecasting is fatal. In these “Pivotal Moments,” we must stop conflating informational accuracy with instrumental impact. If you want to predict the future, calibrate; if you want to change it, you must rationally overreact.
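As a rough gloss on that elasticity test (the threshold comparison is my own illustration, reusing the toy numbers above):

```python
def stance(p_with_signal, p_without_signal, signal_cost=0.0):
    """Choose a mode from the outcome's elasticity to confident signaling."""
    elasticity = p_with_signal - p_without_signal
    if elasticity <= signal_cost:
        return "Quagmire: stay a Scout (overconfidence is merely waste)"
    return "Pivotal Moment: signal confidence (outcome is coordination-elastic)"

print(stance(0.63, 0.20))                    # Pivotal Moment
print(stance(0.21, 0.20, signal_cost=0.05))  # Quagmire
```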
Here is a link to the mathematical paper that provides game-theoretic proofs: https://www.bloomsburytech.com/whitepapers
We are building Bloomsbury Tech, a causal AI lab for alternative investments! Email me at eugene.shcherbinin@bloomsburytech.com if you’re interested.
I think you have a point. However, I strongly disagree with the framing of your post, for several reasons.
One is that you are advertising your hedge fund here, which made me doubt the entire post.
Second, the link does not go to a mathematical paper but to the whitepapers section of your startup. Nevertheless, the first PDF there appears to be the math behind your post.
Third, calling that PDF a mathematical proof is a stretch (at least from my point of view as a math researcher). Expressions like “it is plausible that” never belong in a mathematical proof.
And most importantly, the substance of the argument:
In your model, you assume that effort by allies depends on the actor’s confidence signal (sigma), and that allies’ contribution is monotonic (larger if the actor is more confident). I find this assumption questionable, since, from an ally/investor perspective, unwarranted high confidence can undermine trust.
Then, you take the fact that the optimal signal (when optimizing for outcomes) is higher than the optimal forecast (when optimizing for accuracy) as an indication against calibration. I would take it as an indication for calibration, with possible actions (such as signaling) included as variables to optimize for success.
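To illustrate (the functional forms below are my own, not taken from your whitepaper): keep the private belief calibrated, treat the public signal as an action, and add a credibility penalty for the gap between signal and belief. The optimal signal then sits above the belief, while the belief itself never moves:

```python
import numpy as np

p_belief = 0.20  # calibrated private forecast; never updated by the optimization

def p_success(signal):
    effort = signal ** 2  # your monotonicity premise: effort rises with signal
    # My addition: credibility erodes as the signal outruns the belief.
    credibility = np.clip(1.0 - 2.0 * (signal - p_belief), 0.0, 1.0)
    return p_belief + 0.6 * credibility * effort

signals = np.linspace(0.0, 1.0, 101)
best = signals[np.argmax(p_success(signals))]
print(best, p_belief)  # ~0.47 0.2 -- optimal signal exceeds the calibrated belief
```

On that reading, the divergence between signal and forecast is calibration doing its job inside a decision problem, not evidence against it.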
In my view, your model is a nice toy model to explain why, in certain situations, signaling more confidence than what would be accurate can be instrumental.
Ironically, your post and your whitepaper do what they recommend: they use expressions like “demonstrate” and “proof” without properly acknowledging that most of the argument’s weight rests on the modelling assumptions.
I have challenged myself to write 10 whitepapers by the end of the week to apply to YC. It is indeed not a rigorous proof; that is why it’s called a whitepaper :) Still, I thought it was interesting enough to be worth sharing. The “overconfident lingo” is because the post is fully written by AI, and I believe people would prefer interesting AI slop to a confusing mathematical paper. Thank you for your feedback though!
Also, we are not a hedge fund; we are a causal AI research lab!!! For now. We are associated with the hedge fund maynardmetrics.com :)
I like the core point and think it’s very important — though I don’t really vibe with statements about calibration being actively dangerous.
I think EA culture can make it seem like being calibrated is the most important thing ever. But on the topic of “will my ambitious projects succeed?”, being calibrated seems very difficult and fairly cursed overall, and it may be unhelpful to try super hard at this vs. just executing.
For example, I’m guessing that Norman Borlaug didn’t feed a billion people primarily by being extremely well-calibrated. I think he did it via being a good scientist, dedicating himself fully to something impactful-in-principle even when the way forward was unclear, and being willing to do things outside his normal wheelhouse — like bureaucracy or engaging with Indian government officials. I’d guess he was well-calibrated about micro-aspects of his wheat germination work, such as which experiments were likely to work out, or perhaps which politicians would listen to him (but on the other hand, he could simply have been uncalibrated and very persistent). I wouldn’t expect he’d be well-calibrated about the overall shape of his career early on, and it doesn’t seem very important for him to have been calibrated about that.
One often hears about successful political candidates that they always had unwarranted-seeming confidence in themselves and always thought they’d win office. I’ve noticed that the most successful researchers tend to seem a bit ‘crazy’ and have unwarranted confidence in their own work. Successful startup founders too are not exactly known for realistic ex-ante estimates of their own success. (Of course this all applies to many unsuccessful political candidates, researchers and founders as well.)
I think something psychologically important is going on here; my guess is that “part of you” really needs to believe in outsized success in order to have a chance of achieving it. This old Nate Soares post is relevant.
These are great thoughts, thank you so much for the comment Eli!
Yea, I agree with this a lot.
One warning, though: one can be inclined to say something like “I’m an actor, so I don’t have to do any of the accuracy-based forecasting, etc,” which I think is definitely wrong.
One should choose which direction to act in based on some mix of accuracy/EV reasoning + vibes. Otherwise, there are infinite ways to act, and most of them, without any foresight, fail even with the type of agency you’re describing.
In addition, once you are inside the project you are acting on, you (of course) shouldn’t constantly be doing the accuracy/EV thing. Sometimes, though, it’s probably worth taking a step back to consider opportunity cost (on some marked date every x amount of time, to avoid decision/acting paralysis).
Nice to hear from you, Noah!

“one can be inclined to say something like ‘I’m an actor, so I don’t have to do any of the accuracy-based forecasting, etc,’ which I think is definitely wrong.”

Totally agree.

“One should choose which direction to act in based on some mix of accuracy/EV reasoning + vibes. Otherwise, there are infinite ways to act, and most of them, without any foresight, fail even with the type of agency you’re describing.”

Let’s write something on bounded rationality together.

“In addition, once you are inside the project you are acting on, you (of course) shouldn’t constantly be doing the accuracy/EV thing. Sometimes, though, it’s probably worth taking a step back to consider opportunity cost (on some marked date every x amount of time, to avoid decision/acting paralysis).”

Agree.
I am solving the art market today! Let’s chat: https://www.twitch.tv/meugenn1924