And I think that, even when one is extremely uncertain, the optimizer’s curse doesn’t mean you should change your preference ordering (just that you should be far less certain about it, as you’re probably greatly overestimating the value of the best-seeming option).
Ok, I’ll flag this too. I’m sure there are statistical situations where an extreme outcome implies that an adjustment for correlation Goodharting would make it seem worse than other options; i.e., it would change the ordering.
That said, I’d guess this isn’t likely to happen that often for realistic cases, especially when there aren’t highly extreme outliers (which, to be fair, we do have with EA).
I think one mistake someone could make here would be to say that, because the ordering may be preserved, the problem wouldn’t be “fixed” at all. But the uncertainties and relationships themselves are often useful information beyond the ordering. So a natural conclusion in the case of intense noise (which leads to the optimizer’s curse) would be to accept a large amount of uncertainty, and maybe use that knowledge to be more conservative; for instance, by trying to get more data before going all-in on anything in particular.
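To make the noise-plus-selection point concrete, here’s a toy Monte Carlo sketch (the numbers and setup are made up purely for illustration): several options have identical true value, we get a noisy estimate of each, and we pick whichever estimate looks highest. On average, the chosen option then looks better than it really is.

```python
import random

def average_overestimate(n_options=5, true_value=10.0, noise_sd=3.0, n_trials=100_000):
    """On average, how much does the best-seeming option overstate its true value?"""
    total_bias = 0.0
    for _ in range(n_trials):
        # Noisy estimates of options that are, in truth, identical. (Toy numbers only.)
        estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n_options)]
        # We pick the option with the highest estimate; its true value is still true_value.
        total_bias += max(estimates) - true_value
    return total_bias / n_trials

print(f"Average overestimate of the chosen option: {average_overestimate():.2f}")
# Typically around 3-4 here, even though every option is genuinely identical.
```

Ranking by noisy estimates inflates the apparent value of whatever ends up on top; getting more data (i.e. shrinking the noise) shrinks that inflation.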
Yeah, I think all of that’s right. I ended up coincidentally finding my way to a bunch of stuff about Goodhart on LW that I think is what you were referring to in another comment, and I’ve realised my explanation of the curse moved too fast and left out details. I think I was implicitly imagining that we’d already adjusted for what we know about the uncertainties of the estimates of the different options—but that wasn’t made clear.
I’ve now removed the sentence you quote (as I think it was unnecessary there anyway), and changed my earlier claims to:
* The optimizer’s curse is likely to be a pervasive problem and is worth taking seriously.
* In many situations, the curse will just indicate that we’re probably overestimating how much better (compared to the alternatives) the best-seeming option is; it won’t indicate that we should actually change which option we pick.
* But the curse can indicate that we should pick an option other than the one we estimate is best, if we have reason to believe that our estimate of the best option’s value is especially uncertain and we don’t model that information (there’s a toy sketch of this below).
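To illustrate that third point, here’s a minimal sketch (assuming, purely for illustration, a shared Gaussian prior, Gaussian estimation noise, and made-up numbers): when one option’s estimate is much noisier than another’s, shrinking both estimates toward the prior can flip which option looks best.

```python
# Toy numbers for illustration only.
PRIOR_MEAN, PRIOR_VAR = 0.0, 4.0

def shrink(estimate, noise_var):
    """Posterior mean of an option's value under a Gaussian prior and Gaussian estimation noise."""
    weight = PRIOR_VAR / (PRIOR_VAR + noise_var)
    return PRIOR_MEAN + weight * (estimate - PRIOR_MEAN)

# (raw estimate, noise variance): A is fairly precise, B looks better but is very uncertain.
options = {"A": (5.0, 1.0), "B": (7.0, 25.0)}

for name, (estimate, noise_var) in options.items():
    print(f"{name}: raw {estimate}, adjusted {shrink(estimate, noise_var):.2f}")
# A: raw 5.0, adjusted 4.00
# B: raw 7.0, adjusted 0.97
# The raw estimates favour B; the adjusted ones favour A, so the ordering flips.
```

If the noise variances had been similar, the shrinkage would have scaled both estimates by roughly the same factor and preserved the ordering, which is the second point above.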
I’ve deliberately kept the above points brief (again, see the sources linked to for further explanations and justifications). This is because those claims are only relevant to the question of when to use EPs if the optimizer’s curse is a larger problem when using EPs than when using alternative approaches, and I don’t think it necessarily is.
Now, that’s not very clear, but I think it’s more accurate, at least :D
I think that makes sense. Some of it is a matter of interpretation.
From one perspective, the optimizer’s curse is a dramatic and challenging dilemma facing modern analysis. From another perspective, it’s a rather obvious and simple artifact from poorly-done estimates.
I.e., they sometimes say that if mathematicians realize something is possible, they consider the problem trivial. Here the optimizer’s curse is considered a reasonably well-understood phenomenon, unlike some other estimation-theory questions currently being faced.