# Lukas_Finnveden comments on Announcing the Future Fund’s AI Worldview Prize

• I think this particular example requires an assumption of logarithmically diminishing returns, but it is correct given that assumption.

(I think the point about roughly quadratic value of information applies more broadly than just for logarithmically diminishing returns. And I hadn’t realised it before. Seems important + underappreciated!)
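One way to see the "roughly quadratic" claim in the log-returns case (a sketch; the function \(V\) and the notation are mine, not from the thread): let \(q\) be the true probability that cause S is better and \(p\) the fraction of funding allocated to S, with logarithmic returns in each cause. Then

$$V(p) = q\ln p + (1-q)\ln(1-p), \qquad V'(q) = 0, \qquad V''(p) = -\frac{q}{p^2} - \frac{1-q}{(1-p)^2},$$

so the optimum is at \(p = q\), and a second-order Taylor expansion around it gives

$$V(q) - V(p) \approx \frac{(p-q)^2}{2\,q(1-q)}.$$

The loss from a misinformed allocation is quadratic in the size of the error, so the value of information that corrects the error is roughly quadratic too. Any smooth returns function with an interior optimum gives the same quadratic leading term, which is why the point plausibly generalises beyond log returns.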

One quirk to note: if a funder (whom I want to be well-informed) is 50/50 on S vs. L, but my all-things-considered belief is 60/40, then I would value the first 1% they shift towards my position much more than they do (maybe 10x more?), and would put comparatively little value on shifting them all the way (i.e. the last percent, from 59% to 60%, is much less important). You can get this from a pretty similar argument as in the above example.

(In fact, the funder’s own much greater valuation of shifting 10% than 1% can be seen as a two-step process where (i) they shift to 60/40 beliefs, and then (ii) they get a lot of value from shifting their allocation from 50 to 51, slightly less from shifting from 51 to 52, etc.)
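The quirk can be checked numerically under the same toy log-returns model (the function `value` and the specific numbers are my illustration, not from the thread): with my credence at 60/40, the gain from the funder's first 1% shift (50 → 51) dwarfs the gain from the last 1% (59 → 60).

```python
import math

def value(p, q):
    """Expected value, by my lights, of allocating fraction p to cause S
    when my credence that S is the better cause is q, assuming
    logarithmically diminishing returns to funding in each cause."""
    return q * math.log(p) + (1 - q) * math.log(1 - p)

q = 0.60  # my all-things-considered belief in S

# Funder starts at a 50/50 allocation. Compare the value (to me) of
# the first 1% shift towards my position vs. the last 1% shift.
first_shift = value(0.51, q) - value(0.50, q)
last_shift = value(0.60, q) - value(0.59, q)

print(f"first 1% shift (50->51): {first_shift:.5f}")
print(f"last 1% shift (59->60):  {last_shift:.5f}")
print(f"ratio: {first_shift / last_shift:.1f}x")
```

In this toy setup the first percentage point is worth over an order of magnitude more than the last, consistent with the "maybe 10x more?" guess above; the exact ratio depends on the returns function assumed.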

• I agree with all of this. I meant to state that I was assuming logarithmic returns for the example, although I do think some smoothness argument should be enough to make it work for small shifts.