Great post, David! Probably my favourite of the series so far.
With these assumptions in place, we can derive the agent's posterior expected value of the intervention at each time period (see Proposition 1 in Gabaix and Laibson (2022) and proof of Proposition 1 in Appendix A).
E(u(t) | s(t)) = D(t) s(t) = s(t)/(1 + f(t)/σ_u²)
D(t) ≡ 1/(1 + f(t)/σ_u²)
D(t) captures how much weight the agent puts on the signal. Alternatively, it captures how much the agent "discounts" the signal solely due to the uncertainty surrounding the signal. In the specific case where the prior is mean 0 and the signal is mean 1, D(t) represents the posterior expected value.
One can think about this along the lines of inverse-variance weighting. The formula above is equivalent to "expected posterior" = "signal precision"/("prior precision" + "signal precision")*"signal", where "precision" = 1/"variance". This is a particular case (null prior, i.e. a prior with mean 0) of Dario Amodei's conclusion that "expected posterior" = ("prior precision"*"prior" + "signal precision"*"signal")/("prior precision" + "signal precision"); a short numerical sketch follows the list below. As the precision of the signal:
Increases, the expected posterior tends to the signal.
Decreases, the expected posterior tends to the prior.
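Here is that sketch in Python, with illustrative numbers of my own choosing and reading f(t) as the variance of the signal noise at horizon t: with a prior of mean 0, the precision-weighted posterior mean reduces to D(t) times the signal, and the two limits above fall out directly.

```python
def posterior_mean(prior_mean, prior_var, signal, signal_var):
    """Precision-weighted posterior mean for a normal prior and a normal signal."""
    prior_precision = 1 / prior_var
    signal_precision = 1 / signal_var
    return (prior_precision * prior_mean + signal_precision * signal) / (
        prior_precision + signal_precision
    )

# Null prior (mean 0): the posterior mean reduces to D(t) * signal,
# with D(t) = 1/(1 + f(t)/sigma_u^2), reading f(t) as the signal-noise variance.
sigma_u2 = 1.0  # prior variance (illustrative)
f_t = 3.0       # signal-noise variance at some horizon t (illustrative)
signal = 1.0

D_t = 1 / (1 + f_t / sigma_u2)
print(posterior_mean(0.0, sigma_u2, signal, f_t), D_t * signal)  # 0.25 0.25

# Limiting behaviour as the precision of the signal changes:
print(posterior_mean(0.0, sigma_u2, signal, 1e-9))  # ~1: tends to the signal
print(posterior_mean(0.0, sigma_u2, signal, 1e9))   # ~0: tends to the prior mean
```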
This gives us a mean error of −0.054 as shown in Table 2. In other words, on average the surrogate index approach underestimates the benefits of treatment by 0.047 standard deviations.
−0.054 is supposed to be −0.047.
Constant variance prior: For simplicity, we assume that the variance of the prior is the same for each time horizon, whereas the variance of the signal increases with time horizon.
[...]
If the variance of the prior grows at the same speed as the variance of the signal, then the expected value of the posterior will not change with time horizon.
I think the rate of increase of the variance of the prior is a crucial consideration. Intuitively, I would say the variance of the prior grows at the same speed as the variance of the signal, in which case the signal would not be increasingly discounted at longer horizons.
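For what it is worth, here is a quick sketch of that point (the variances and the linear growth rate are illustrative assumptions of mine, not numbers from the post): holding the prior variance fixed makes the weight on the signal shrink with horizon, whereas letting it grow at the same rate keeps the weight constant.

```python
import numpy as np

horizons = np.arange(1, 6)
signal_var = 1.0 * horizons               # signal-noise variance growing linearly with horizon (assumed)
prior_var_fixed = np.ones(len(horizons))  # the post's assumption: constant prior variance
prior_var_growing = 1.0 * horizons        # alternative: prior variance growing at the same rate

def weight_on_signal(prior_var, signal_var):
    # D = prior_var / (prior_var + signal_var) = 1 / (1 + signal_var / prior_var)
    return prior_var / (prior_var + signal_var)

print(weight_on_signal(prior_var_fixed, signal_var))    # [0.5 0.333 0.25 0.2 0.167]: more discounting at longer horizons
print(weight_on_signal(prior_var_growing, signal_var))  # [0.5 0.5 0.5 0.5 0.5]: discount does not change with horizon
```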
Normal distributions: We use normal distributions for mathematical convenience, but non-normal distributions with fat tails are everywhere in the real world.
If the signal and prior follow a lognormal distribution, their logarithms will follow normal distributions, so I think one can interpret the results of your analysis as applying to the logarithm of the posterior.
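A small simulation of that reading, assuming a multiplicative-noise model (my assumption, so that both the prior and the signal are lognormal), shows the usual normal shrinkage factor applying to the logarithms.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
sigma_u = 0.5    # sd of log(u), i.e. u is lognormal (illustrative)
sigma_eps = 1.0  # sd of the log-space noise (illustrative)

log_u = rng.normal(0.0, sigma_u, n)            # prior draws of log(u)
log_s = log_u + rng.normal(0.0, sigma_eps, n)  # signal s = u * exp(noise), so log(s) = log(u) + noise

# Normal-normal shrinkage applied to the logs (prior mean 0): E[log u | log s] = D * log s.
D = 1 / (1 + sigma_eps**2 / sigma_u**2)

# Empirical check: the regression slope of log(u) on log(s) matches the shrinkage factor.
slope = np.cov(log_u, log_s)[0, 1] / np.var(log_s)
print(D, slope)  # both roughly 0.2
```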
Thanks Vasco, I'm glad you enjoyed it! I corrected the typo, and your points about inverse-variance weighting and lognormal distributions are well taken.
I agree that doing more work to specify what our priors should be in this sort of situation is valuable, although I'm unsure whether it rises to the level of a crucial consideration. Our ability to predict long-run effects has been an important crux for me, hence the work I've been doing on it, but in general it seems to be more of an important consideration for people who lean neartermist than for those who lean longtermist.