The proposed solution of using priors just pushes the problem to selecting good priors.

The problem of the optimizer’s curse is that the EV estimates of high-EV options are predictably over-optimistic in proportion to how unreliable the estimates are. Once you update your estimates with a proper prior, that problem no longer exists.

The fact that you don’t have guaranteed accurate information doesn’t mean the optimizer’s curse still exists.
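
To spell out the problem in the first place: with estimates that are unbiased for every option individually, the option with the best estimate is still predictably overestimated, and more so the noisier the estimates are. A quick simulation sketch (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_options, n_trials = 20, 5_000

for noise_sd in (0.5, 1.0, 2.0):
    overshoot = []
    for _ in range(n_trials):
        true_ev = rng.normal(0.0, 1.0, n_options)                  # true EVs of the options
        estimate = true_ev + rng.normal(0.0, noise_sd, n_options)  # unbiased but noisy estimates
        best = np.argmax(estimate)                                 # pick the top-estimate option
        overshoot.append(estimate[best] - true_ev[best])
    print(f"noise sd {noise_sd}: chosen option overestimated by {np.mean(overshoot):.2f} on average")
```

The overestimation is a property of the naive estimates; a posterior computed from a proper prior doesn’t have it in expectation.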

I don’t think there’s any complete solution to the optimizer’s curse

Well, there is: just spend too much time worrying about model uncertainty and other people’s priors and too little time worrying about expected value estimation. Then you’re correcting for the optimizer’s curse too much, so your charity selections will be less accurate and predictably biased in favor of low-EV, high-reliability options. It’s a bad idea, but you’ve solved the optimizer’s curse.

If you’re presented with multiple priors, and they all seem similarly reasonable to you, but depending on which ones you choose, different actions will be favoured, how would you choose how to act?

Maximize the expected outcome over the distribution of possibilities.
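
A sketch of what that amounts to, with made-up numbers: treat the candidate priors as a distribution, compute each action’s EV under each prior, and take the action with the best weighted average. (This is the same as adopting the mixture of the candidate priors as your single prior.)

```python
import numpy as np

# Hypothetical EVs of two actions under three equally reasonable priors.
ev = np.array([
    [10.0, 2.0, 1.0],  # action A: looks great under the first prior only
    [ 4.0, 4.0, 4.0],  # action B: solid under all three
])
weights = np.array([1/3, 1/3, 1/3])  # "similarly reasonable" -> equal weight

expected = ev @ weights  # expected outcome over the distribution of possibilities
print(expected)          # ~[4.33, 4.0] -> choose action A
```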

If one action is preferred with almost all of the priors (perhaps rare in practice), isn’t that a reason (perhaps insufficient) to prefer it?

What do you mean by “the priors”? Other people’s priors? Well, if they’re other people’s priors and I don’t have reason to update my beliefs based on their priors, then it’s trivially true that this doesn’t give me a reason to prefer the action. But you seem to think that other people’s priors will be “reasonable”, so obviously I should update based on their priors, in which case of course this is true—but only in a banal, trivial sense that has nothing to do with the optimizer’s curse.

To me, using this could be an improvement over just using priors

Hm? You’re just suggesting updating one’s prior by looking at other people’s priors. Assuming that other people’s priors might be rational, this is banal—of course we should be reasonable, epistemically modest, etc. But this has nothing to do with the optimizer’s curse in particular, it’s equally true either way.

I ask the same question I asked of OP: give me some guidance that applies to estimating the impact of maximizing actions but doesn’t apply to estimating the impact of randomly selected actions. So far it still seems like there is none—aside from the basic idea given by Muelhauser.

just using priors never fully solved the problem in practice in the first place

Is the problem the lack of guaranteed knowledge about charity impacts, or is the problem the optimizer’s curse? You seem to (incorrectly) think that chipping away at the former necessarily means chipping away at the latter.

It’s always worth entertaining multiple models if you can do that at no cost. However, doing that often comes at some cost (money, time, etc). In situations with lots of uncertainty (where the optimizer’s curse is liable to cause significant problems), it’s worth paying much higher costs to entertain multiple models (or do other things I suggested) than it is in cases where the optimizer’s curse is unlikely to cause serious problems.

In situations with lots of uncertainty (where the optimizer’s curse is liable to cause significant problems), it’s worth paying much higher costs to entertain multiple models (or do other things I suggested) than it is in cases where the optimizer’s curse is unlikely to cause serious problems.

I don’t agree. Why is the uncertainty that comes from model uncertainty—as opposed to any other kind of uncertainty—uniquely important for the optimizer’s curse? The optimizer’s curse does not discriminate between estimates that are too high for modeling reasons, versus estimates that are too high for any other reason.

The mere fact that there’s more uncertainty is not relevant, because we are talking about how much time we should spend worrying about one kind of uncertainty versus another. “Do more to reduce uncertainty” is just a platitude; we always want to reduce uncertainty.

I made a long top-level comment that I hope will clarify some problems with the solution proposed in the original paper.

I ask the same question I asked of OP: give me some guidance that applies to estimating the impact of maximizing actions but doesn’t apply to estimating the impact of randomly selected actions.

This is a good point. Somehow, I think you’d want to adjust your posterior downward based on the set or the number of options under consideration, and on how unlikely it is to see data that makes the intervention look this good. On its own that isn’t very useful, since I don’t know how much you should adjust for these. Maybe there’s a way to model this explicitly, but it seems like you’d be trying to model your selection process itself before you’ve defined it, and then looking for a selection process that satisfies some properties.
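
One way to get a handle on “how much”, under strong simplifying assumptions: simulate the selection process itself with an assumed prior and noise level, and see how far the winner’s naive estimate overshoots its true value as the number of options grows. A rough sketch (hypothetical numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, noise_sd = 5_000, 1.0

for n_options in (2, 10, 100):
    overshoot = []
    for _ in range(n_trials):
        true_ev = rng.normal(0.0, 1.0, n_options)
        estimate = true_ev + rng.normal(0.0, noise_sd, n_options)
        best = np.argmax(estimate)
        overshoot.append(estimate[best] - true_ev[best])
    # The winner's overshoot grows with the number of options screened.
    print(f"{n_options} options: average overshoot {np.mean(overshoot):.2f}")
```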

You might also want to spend more effort looking for arguments and evidence against each option the more options you’re considering.

When considering a larger number of options, you could use some randomness in your selection process or spread funding further (although the latter will be vulnerable to the satisficer’s curse if you’re using cutoffs).
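
On the parenthetical: a sketch of why cutoffs run into the satisficer’s curse, with made-up numbers. Funding everything whose estimate clears a threshold still conditions on estimation error, so the funded set is overestimated on average:

```python
import numpy as np

rng = np.random.default_rng(2)
n_options, n_trials, noise_sd, cutoff = 50, 2_000, 1.0, 1.5

inflation = []
for _ in range(n_trials):
    true_ev = rng.normal(0.0, 1.0, n_options)
    estimate = true_ev + rng.normal(0.0, noise_sd, n_options)
    funded = estimate >= cutoff  # fund everything above the cutoff
    if funded.any():
        inflation.append(np.mean(estimate[funded] - true_ev[funded]))

print(f"funded options overestimated by {np.mean(inflation):.2f} on average")
```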

What do you mean by “the priors”?

I mean the case where I haven’t decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me.

Somehow, I think you’d want to adjust your posterior downward based on the set or the number of options under consideration, and on how unlikely it is to see data that makes the intervention look this good.

That’s the basic idea given by Muelhauser. Corrected posterior EV estimates.
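
For concreteness, the simplest form of that correction (a sketch assuming a normal prior and normal estimation noise, with illustrative numbers): shrink each naive estimate toward the prior mean in proportion to how unreliable it is.

```python
def corrected_posterior_ev(estimate, estimate_sd, prior_mean=0.0, prior_sd=1.0):
    """Posterior mean EV under a normal prior and normal estimation noise.

    Noisier estimates (larger estimate_sd) get shrunk harder toward
    the prior mean.
    """
    w = (1 / estimate_sd**2) / (1 / prior_sd**2 + 1 / estimate_sd**2)
    return w * estimate + (1 - w) * prior_mean

print(corrected_posterior_ev(10.0, estimate_sd=5.0))  # ~0.38: flashy but unreliable
print(corrected_posterior_ev(3.0, estimate_sd=0.5))   # 2.40: modest but well-measured
```

Ranking options by corrected posteriors like these, instead of by the naive estimates, removes the predictable component of the over-optimism.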

You might also want to spend more effort looking for arguments and evidence against each option the more options you’re considering.

As opposed to equal effort for and against? OK, I’m satisfied. However, if I’ve done the corrected posterior EV estimation, and then my specific search for arguments-against turns up short, then I should increase my EV estimates back towards the original naive estimate.

you could use some randomness in your selection process

As I recall, that post found that randomized funding doesn’t make sense. That 100% matches my presumptions; I do not see how it could improve funding outcomes.

or spread funding further

I don’t see how that would improve funding outcomes.

I mean the case where I haven’t decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me.

In Bayesian rationality, you always have a prior. You seem to be considering or defining things differently.

Here we would probably say that your actual prior exists and is simply some kind of aggregate of these possible priors; therefore, it’s not the case that we should leap outside our own priors in some sort of violation of standard Bayesian rationality.
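
A sketch of that aggregation, with made-up numbers and assuming normal forms throughout: a 50/50 mixture of two candidate priors is itself a single prior, and it updates like any other one, with the mixture weights shifting toward whichever component explains the data better.

```python
import numpy as np
from scipy.stats import norm

# Two candidate priors over an intervention's true EV, equally plausible up front.
components = [norm(0.0, 1.0), norm(3.0, 1.0)]
weights = np.array([0.5, 0.5])  # the "actual" prior is this mixture

x, noise_sd = 2.5, 1.0  # one noisy EV estimate with known noise

# Marginal likelihood of x under each component (normal prior plus normal noise).
evidence = np.array([norm(c.mean(), np.hypot(c.std(), noise_sd)).pdf(x)
                     for c in components])
posterior_weights = weights * evidence / np.sum(weights * evidence)
print(posterior_weights)  # ~[0.18, 0.82]: ordinary Bayesian updating of one prior
```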
