"treating the pre-post data as evidence in a Bayesian update on conservative priors"
Is there any way we can get more details on this? I recently made a blog post using Bayesian updates to correct for post-decision surprise in GiveWell's estimates, which led to a change in the ranking of New Incentives from 2nd to last in cost-effectiveness among Top Charities. I'd imagine (though I haven't read the studies) that the uncertainty in the StrongMinds CEA is, or should be, much larger.
For that reason, I would have guessed that Strong Minds would not fare well post-Bayesian adjustment, but it’s possible you just used a different (reasonable) prior than I did, or there is some other consideration I’m missing?
Also, even risk-neutral evaluators really should be using Bayesian updates (formally or informally) in order to correct for post-decision surprise. (I don't think you necessarily disagree with me on this, but it's worth emphasizing that valuing GW-tier levels of confidence doesn't imply risk aversion.)
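For concreteness, here's a minimal sketch of the kind of update I have in mind: a conjugate normal-normal model where a noisy cost-effectiveness estimate is shrunk toward a conservative prior. All numbers (and the function itself) are illustrative assumptions of mine, not figures from GiveWell's or anyone's actual CEA.

```python
# Minimal sketch of a Bayesian update correcting for post-decision surprise.
# All numbers are illustrative, not taken from any actual CEA.

def posterior_normal(prior_mean, prior_sd, estimate, estimate_sd):
    """Conjugate normal-normal update: shrink a noisy cost-effectiveness
    estimate toward a conservative prior, weighted by precision."""
    prior_prec = 1 / prior_sd**2
    data_prec = 1 / estimate_sd**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_mean * prior_prec + estimate * data_prec) / post_prec
    return post_mean, post_prec**-0.5

# A charity with a high point estimate but large uncertainty...
print(posterior_normal(prior_mean=1.0, prior_sd=0.5, estimate=8.0, estimate_sd=4.0))
# -> posterior mean ~1.11: heavy shrinkage toward the prior

# ...can end up ranked below one with a modest but tightly estimated effect.
print(posterior_normal(prior_mean=1.0, prior_sd=0.5, estimate=4.0, estimate_sd=1.0))
# -> posterior mean 1.6: less shrinkage, now the better bet
```

The point of the toy example is that the shrinkage weight is the relative precision of the estimate, so the noisiest estimates shrink the most; that's exactly the mechanism by which an intervention with a high headline number but a wide-uncertainty CEA can drop in the ranking after adjustment.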