On the face of it, an update 10% of the way towards a threshold should only be about 1% as valuable to decision-makers as an update all the way to the threshold.
(Two intuition pumps for why this is quadratic: a tiny shift in probabilities only affects a tiny fraction of prioritization decisions, and only improves those by a tiny amount; and getting 100 updates, each 1% of the way to a threshold, is super unlikely to actually get you to the threshold, since many of them are likely to cancel out.)
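A quick Monte Carlo check of the second intuition pump (my own sketch, not from the thread; the step size and trial count are illustrative):

```python
import random, statistics

# Simulate 100 independent updates, each moving credence 1% of the
# way to a threshold in a random direction, and see how often the
# full distance (a cumulative shift of 100%) is actually covered.
TRIALS = 20_000
finals = []
for _ in range(TRIALS):
    pos = 0.0  # fraction of the distance to the threshold
    for _ in range(100):
        pos += random.choice([-0.01, 0.01])
    finals.append(pos)

reached = sum(f >= 1.0 for f in finals)
print(f"reached threshold: {reached}/{TRIALS}")
print(f"typical |shift|:   {statistics.mean(map(abs, finals)):.3f}")
# The walk's standard deviation is sqrt(100) * 1% = 10% of the way,
# so the hundred small updates mostly cancel rather than compound.
```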
However, you might well want to pay for information that leaves you better informed even if it doesn’t change decisions (in expectation it could change future decisions).
Re: arguments split across multiple posts, perhaps it would be ideal to first decide the total prize pool depending on the value/magnitude of the total updates, and then decide on the share of credit allocation for the updates. I think that would avoid the weirdness about post order, and about incentivizing either bundling or unbundling considerations, while still paying out appropriately more for very large updates.
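A minimal sketch of that two-stage scheme (entirely my own illustration: the quadratic pool rule, the numbers, and the function names are hypothetical, not proposals from the thread):

```python
def prize_allocations(update_shares, pool_for):
    """Stage 1: size the pool by the magnitude of the total update.
    Stage 2: split it according to each post's share of the credit."""
    total = sum(update_shares.values())  # assumed nonzero
    pool = pool_for(total)
    return {post: pool * share / total
            for post, share in update_shares.items()}

# Illustrative only: a quadratic pool rule (mirroring the roughly
# quadratic value of updates), with two posts jointly producing a
# 10% update, split 60/40 between them.
payouts = prize_allocations(
    {"post_a": 0.06, "post_b": 0.04},
    pool_for=lambda total: 1_000_000 * (total / 0.10) ** 2,
)
print(payouts)  # {'post_a': 600000.0, 'post_b': 400000.0}
```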
So I don’t disagree that big shifts might be (much) more valuable than small shifts. But I do have the intuition that there is a split between:
- what the FTX Foundation would find most valuable, and
- what they should be incentivizing,
because incentivizing providing information is more robust to various artifacts than incentivizing changing minds.
I don’t understand this. Have you written about this or have a link that explains it?
Sorry I don’t have a link. Here’s an example that’s a bit more spelled out (but still written too quickly to be careful):
Suppose there are two possible worlds, S and L (e.g. “short timelines” and “long timelines”). You currently assign 50% probability to each. You invest in actions which help in either world until your expected marginal returns from investment in each are equal. If actions have the same returns curves in both worlds, then you’ll want a portfolio which is split 50/50 across the two (if you’re the only investor; otherwise you’ll want to push the global portfolio towards that).
Now suppose you update towards S being 1% more likely (51%, with L at 49%).
This changes your estimate of the value of marginal returns on S and on L. You rebalance the portfolio until the marginal returns are equal again, which puts 51% of spending on S and 49% on L.
So you eliminated the marginal 1% spending on L and shifted it to a marginal 1% spending on S. How much better spent, on average, was the reallocated capital compared to before? Around 1%. So you got a 1% improvement on 1% of your spending.
If you’d made a 10% update you’d get roughly a 10% improvement on 10% of your spending. If you updated all the way to certainty on S you’d get to shift all of your money into S, and it would be a big improvement for each dollar shifted.
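Here’s a quick numeric check of that quadratic scaling under the logarithmic-returns assumption noted just below (my own sketch; the function names are mine):

```python
import math

def value(x, p):
    """Expected log-returns from allocating fraction x to world S
    when the true probability of S is p."""
    return p * math.log(x) + (1 - p) * math.log(1 - x)

def gain_from_update(p):
    """Improvement from rebalancing a 50/50 portfolio to the
    posterior-optimal split x = p after updating from 50% to p."""
    return value(p, p) - value(0.5, p)

small = gain_from_update(0.51)  # a 1% update
large = gain_from_update(0.60)  # a 10% update
print(f"1% update gain:  {small:.6f}")  # ~0.0002
print(f"10% update gain: {large:.6f}")  # ~0.0201
print(f"ratio: {large / small:.1f}")    # ~100x: roughly quadratic
```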
I think this particular example requires an assumption of logarithmically diminishing returns, but is right with that.
(I think the point about roughly quadratic value of information applies more broadly than just for logarithmically diminishing returns. And I hadn’t realised it before. Seems important + underappreciated!)
One quirk to note: If a funder (who I want to be well-informed) is 50/50 on S vs L, but my all-things-considered belief is 60/40, then I would value the first 1% they shift towards my position much more than they do (maybe 10x more?) and will put comparatively little value on shifting them all the way (i.e. the last percent from 59% to 60% is much less important). You can get this from a pretty similar argument as in the above example.
(In fact, the funder’s own much greater valuation of shifting 10% than 1% can be seen as a two-step process where (i) they shift to 60/40 beliefs, and then (ii) they first get a lot of value from shifting their allocation from 50 to 51, then slightly less from shifting from 51 to 52, etc.)
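The quirk above can be checked numerically too (again my own sketch, reusing the logarithmic-returns setup; the numbers are illustrative):

```python
import math

def value(q, p):
    """Expected log-returns when the funder allocates fraction q to S,
    evaluated under my belief p that S is the true world."""
    return p * math.log(q) + (1 - p) * math.log(1 - q)

p = 0.60  # my all-things-considered belief
# Value (to me) of the funder's first vs last percentage point of
# movement from a 50/50 allocation towards my 60/40 position:
first = value(0.51, p) - value(0.50, p)
last = value(0.60, p) - value(0.59, p)
print(f"first 1% (50 -> 51): {first:.6f}")
print(f"last 1%  (59 -> 60): {last:.6f}")
print(f"ratio: {first / last:.0f}")  # ~18x here: the same ballpark
                                     # as the rough "10x" guess above
```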
I agree with all this. I meant to state that I was assuming logarithmic returns for the example, although I do think some smoothness argument should be enough to get it to work for small shifts.