I think you have a point. However, I strongly disagree with the framing of your post, for several reasons.
One is that you are advertising your hedge fund here, which made me doubt the entire post.
Second, the link does not go to a mathematical paper but to the whitepapers section of your startup. That said, the first PDF there appears to be the math behind your post.
Third, calling that PDF a mathematical proof is a stretch (at least from my point of view as a math researcher). Expressions like "it is plausible that" never belong in a mathematical proof.
And most importantly, the substance of the argument:
In your model, you assume that effort by allies depends on the actor's confidence signal (sigma), and that the allies' contribution is monotonic in it (larger when the actor signals more confidence). I find this assumption questionable, since, from an ally's or investor's perspective, unwarranted high confidence can undermine trust.
Then, you take the fact that the optimal signal (when optimizing for outcomes) is higher than the optimal forecast (when optimizing for accuracy) as evidence against calibration. I would take it as evidence for calibration, with possible actions (such as signaling) included as additional variables to optimize for success.
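To make the distinction concrete, here is a minimal sketch in my own toy numbers (not the whitepaper's model): if allies' effort is assumed to rise with the signaled confidence, the outcome-optimal signal exceeds the calibrated forecast, yet the internal forecast that minimizes accuracy loss is still the true probability. The `boost` parameter and the linear effort response are illustrative assumptions.

```python
def outcome(p, s, boost=0.2):
    """Success probability when allies respond to signal s (assumed linear, capped)."""
    return min(1.0, p + boost * s)

def brier_loss(p, s):
    """Accuracy penalty for reporting s when the true probability is p."""
    return (s - p) ** 2

p = 0.6  # the actor's true (calibrated) success probability
signals = [i / 100 for i in range(101)]

# Optimizing for accuracy alone recovers the calibrated forecast s = p ...
best_forecast = min(signals, key=lambda s: brier_loss(p, s))

# ... while optimizing for the outcome pushes the public signal above p.
best_signal = max(signals, key=lambda s: outcome(p, s))
```

The gap between `best_signal` and `best_forecast` is produced by treating the signal as an action variable; it does not require the internal forecast to be miscalibrated.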
In my view, yours is a nice toy model for explaining why, in certain situations, signaling more confidence than would be accurate can be instrumental.
Ironically, your post and your whitepaper do what they recommend, using expressions like "demonstrate" and "proof" without properly acknowledging that most of the load of the argument rests on the modelling assumptions.
I have a challenge to write 10 whitepapers by the end of the week to apply to YC. It is indeed not a rigorous proof, which is why it's called a whitepaper :) However, I thought it was interesting enough to be worth sharing. And the "overconfident lingo" is because the post was fully written by AI; still, I believe people would prefer interesting AI slop to a confusing mathematical paper. Thank you for your feedback though!
Also, we are not a hedge fund; we are a causal AI research lab!!! For now. Associated with the hedge fund maynardmetrics.com :)