I’m struggling to understand how your proposed new group avoids the optimizer’s curse.
Because I’m not optimizing!
Of course the highest-scoring estimates in my new group will still probably be overestimates. The difference is that I no longer care about getting the right scores on just the highest-scoring estimates; I care about getting accurate scores across all of my estimates.
Or to phrase it another way, suppose that the intervention will be randomly selected rather than picked from the top.
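To make this concrete, here is a minimal simulation sketch (all numbers are hypothetical: 20 candidate interventions, standard-normal true values, standard-normal estimation noise). Selecting the top-scoring estimate produces a systematic overestimate, while a randomly selected intervention’s estimate is unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_interventions = 10_000, 20

# Hypothetical setup: each intervention has a true value drawn from N(0, 1),
# and we only observe a noisy estimate of it (true value + N(0, 1) noise).
true_values = rng.standard_normal((n_trials, n_interventions))
estimates = true_values + rng.standard_normal((n_trials, n_interventions))

rows = np.arange(n_trials)

# Optimizer's curse: select the intervention with the highest estimate.
# Conditioning on "highest estimate" selects for favorable noise.
top = estimates.argmax(axis=1)
top_error = estimates[rows, top] - true_values[rows, top]

# No selection pressure: pick an intervention uniformly at random.
rand = rng.integers(n_interventions, size=n_trials)
rand_error = estimates[rows, rand] - true_values[rows, rand]

print(f"mean (estimate - true) for the top-scoring pick: {top_error.mean():+.3f}")  # consistently positive
print(f"mean (estimate - true) for a random pick:        {rand_error.mean():+.3f}")  # approximately zero
```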
The position I’m taking is that these methods are useful for only a limited range of real-world problems, because in many real-world scenarios our ability to quantify things precisely is severely constrained. In my post, I try to build the case that attempting Bayesian approaches in scenarios where things are genuinely hard to quantify might be misguided.
Well yes, but I think the methods work better than anything else for all these scenarios.