Have you even read Bostrom’s paper? This isn’t the unilateralist’s curse. You are not extending a principle of the paper; you are rejecting its premise from the start. I don’t understand how this is not obvious.
You are merely restating the optimizer’s curse, and the easy solution there is for people to read GiveWell’s blog post about it. If someone has, then the only way their decisions can be statistically biased is if they have the wrong prior distributions, which nobody can be sure about anyway, and which is therefore wholly inappropriate as grounds for overruling anyone’s donations. But even if it were appropriate, a veto would simply be the wrong tool: as noted above, the unilateralist’s curse is no longer present, so you would have to find a strategy that corrects for improper priors in a way that fits the actual situation.
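To make that concrete, here’s a minimal simulation sketch of the point in GiveWell’s post (all parameters invented; assumes a normal prior with normal estimation noise): a naive maximizer is systematically disappointed by its top pick, while an agent who shrinks estimates with the correct prior is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_options, n_trials = 100, 5_000
prior_sd, noise_sd = 1.0, 2.0  # invented parameters

raw_gap, bayes_gap = [], []
for _ in range(n_trials):
    true_values = rng.normal(0.0, prior_sd, n_options)
    estimates = true_values + rng.normal(0.0, noise_sd, n_options)
    # Naive: take the option with the highest raw estimate at face value.
    i = np.argmax(estimates)
    raw_gap.append(estimates[i] - true_values[i])
    # Bayes: shrink every estimate toward the prior mean of 0 first.
    k = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    shrunk = k * estimates
    j = np.argmax(shrunk)
    bayes_gap.append(shrunk[j] - true_values[j])

print(f"naive disappointment: {np.mean(raw_gap):+.2f}")   # clearly positive
print(f"bayes disappointment: {np.mean(bayes_gap):+.2f}") # approximately zero
```

With the right prior, the shrunken estimate of whichever option you pick is unbiased, which is exactly why the only remaining worry is wrong priors.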
But it seems fairly clear to me that it can apply.
It also seems fairly clear to me that the opposite can apply - e.g., if giving opportunities are normally distributed and people falsely believe them to be lognormal, then they will give too much to the easy-to-evaluate opportunity.
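A rough way to probe that kind of prior mismatch, sketched below with invented parameters: compute the posterior mean of the same raw estimate under the true normal prior and under the mistaken lognormal belief, for a low-noise (easy-to-evaluate) and a high-noise (hard-to-evaluate) option, and compare which option each prior favors.

```python
import numpy as np
from scipy import stats

estimate = 2.0                           # same raw estimate for both options
noise_sds = {"easy": 0.5, "hard": 3.0}   # invented evaluation-noise levels
grid = np.linspace(-10, 20, 30_000)      # candidate true values

true_prior = stats.norm(loc=1.0, scale=1.0).pdf       # the actual distribution
believed_prior = stats.lognorm(s=1.0, scale=1.0).pdf  # the mistaken belief

def posterior_mean(prior_pdf, noise_sd):
    # Grid-based Bayes: posterior weight ~ prior(v) * likelihood(estimate | v).
    w = prior_pdf(grid) * stats.norm.pdf(estimate, loc=grid, scale=noise_sd)
    return np.sum(grid * w) / np.sum(w)

for name, sd in noise_sds.items():
    print(name,
          f"true prior: {posterior_mean(true_prior, sd):.2f}",
          f"lognormal belief: {posterior_mean(believed_prior, sd):.2f}")
```

Which prior you assume changes how much the noisy estimate gets discounted, so the direction of the resulting bias depends on the details of the mismatch.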
I find your comments painfully uncharitable, which really reduces my inclination to engage. If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse, I don’t feel like helping you right now.
Agree that vetoes aren’t the right solution, though (indeed they are themselves subject to a unilateralist’s curse, perhaps of a worse type).
I find your comments painfully uncharitable, which really reduces my inclination to engage.
Really? I haven’t misinterpreted you in any way. I think the issue is that you don’t like my comments because I’m not being very nice. But you should be able to deal with comments which aren’t very nice.
If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse, I don’t feel like helping you right now.
Yes, it’s specifically the effect of the optimizer’s curse in situations where the better options have more uncertain EV estimates. But that’s the only time the optimizer’s curse is decision-relevant anyway, since every other instantiation of it shifts expected utilities without changing the ordinal ranking. And the fact that this happens to be a case with 100 uncertain options rather than one, or a large group of donors rather than a single donor, doesn’t change the basic issue that people’s choices will be suboptimal; specifying a very particular scenario doesn’t make it about anything other than the basic optimizer’s curse.
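The ordinal-ranking point is easy to check in the normal-normal case (a tiny sketch with invented numbers): with equal noise on every option, Bayesian shrinkage multiplies all estimates by the same factor and can’t reorder them; only unequal noise can flip a ranking.

```python
import numpy as np

prior_sd = 1.0
estimates = np.array([2.0, 1.5])  # option 0 has the higher raw estimate

def shrink(est, noise_sd):
    # Normal-normal posterior mean, shrinking toward a prior mean of 0.
    k = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    return k * est

# Equal uncertainty: every estimate is scaled by the same factor,
# so the ranking is untouched and the curse changes no decisions.
print(shrink(estimates, 1.0))                   # [1.0  0.75]

# Unequal uncertainty (option 0 is the hard-to-evaluate one):
# the shrinkage factors differ and the ranking flips.
print(shrink(estimates, np.array([3.0, 0.5])))  # [0.2  1.2]
```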