The basic dynamic applies. Think it’s pretty reasonable to use the name to point loosely in such cases, even if the original paper didn’t discuss this extension.
The basic dynamic doesn’t apply. This isn’t about the name, it’s about the concept. You can’t make an extension from the literature without mathematically showing that the concept is still relevant!
If there’s potential utility to be had in multiple people taking the same action, then people are just as likely to err in the form of donating too little money as they are to donate too much. The only reason the unilateralist’s curse is a problem is that there is no benefit to be had from lots of agents taking the same action, which prevents the expected value of a marginal naive EV-maximizing agent’s action from being positive.
The kind of set-up where it would apply:
- An easy-to-evaluate opportunity which produces 1 util/$, which everyone correctly evaluates
- 100 hard-to-evaluate opportunities, each of which actually produces 0.1 util/$, but where everyone has an independent cost-effectiveness estimate which is log-normally distributed and centered on the truth
Then any individual is likely to think one of the 100 is best and donate there. If they all pooled their info, they would instead all donate to the first opportunity (see the sketch below).
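To put rough numbers on this, here is a minimal Monte Carlo sketch of the set-up in Python. The noise scale sigma is an assumption (the comment only says the estimates are log-normal and centered on the truth, which the sketch reads as the median equalling the truth); with sigma around 1, most simulated donors end up preferring one of the 0.1 util/$ opportunities.

```python
# Minimal Monte Carlo sketch of the set-up above.
# Assumptions not in the original comment: noise scale sigma = 1.0, and
# "centered on the truth" read as the median of the log-normal equalling the truth.
import numpy as np

rng = np.random.default_rng(0)

n_donors = 10_000   # independent donors
n_hard = 100        # hard-to-evaluate opportunities
true_easy = 1.0     # util/$ of the easy opportunity (evaluated correctly by everyone)
true_hard = 0.1     # actual util/$ of every hard opportunity
sigma = 1.0         # assumed spread of the log-normal estimation noise

# Each donor's independent estimate of each hard opportunity.
estimates = true_hard * rng.lognormal(mean=0.0, sigma=sigma, size=(n_donors, n_hard))

# A donor is misled whenever their single best estimate among the 100 beats the easy option.
misled = estimates.max(axis=1) > true_easy
print(f"Donors who pick a 0.1 util/$ opportunity over the 1 util/$ one: {misled.mean():.0%}")
```

With these assumed numbers roughly two thirds of simulated donors are misled; averaging the donors' log-estimates before choosing (i.e., pooling their info) shrinks the noise enough that the easy opportunity wins.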
Obviously the numbers and functional form here are implausible—I chose them for legibility of the example. It’s a legitimate question how strongly the dynamic applies in practice. But it seems fairly clear to me that it can apply. You suggested there’s a symmetry with donating too little—I think this is broken because people are selecting the top option, so they are individually running into the optimizer’s curse.
Have you even read Bostrom’s paper? This isn’t the unilateralist’s curse. You are not extending a principle of the paper; you are rejecting its premise from the start. I don’t understand how this is not obvious.
You are merely restating the optimizer’s curse, and the easy solution there is for people to read GiveWell’s blog post about it. If someone has, then the only way their decisions can be statistically biased is if they have the wrong prior distributions, which is something that nobody can be sure about anyway, and which is therefore wholly inappropriate as grounds for any sort of overruling of donations. But even if it were appropriate, having a veto would simply be the wrong thing to do, since (as noted above) the unilateralist’s curse is no longer present; you would have to find a better strategy that corrects for improper priors in accordance with the actual situation.
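For concreteness, here is a minimal sketch of the kind of prior-based adjustment being alluded to, continuing the toy numbers from the scenario above: shrink each noisy log-estimate toward a prior before picking the best option. The prior parameters (prior_mean, prior_sd) and the noise scale sigma are assumptions chosen for the toy example, not figures from the comment or from GiveWell.

```python
# Sketch: applying a (correct) prior to the noisy estimates from the earlier simulation.
# Assumed prior in log space, roughly matching the toy example's true values.
import numpy as np

rng = np.random.default_rng(0)

n_donors, n_hard = 10_000, 100
true_easy, true_hard = 1.0, 0.1
sigma = 1.0                # assumed estimation-noise scale (as before)
prior_mean = np.log(0.1)   # assumed prior: hard opportunities are around 0.1 util/$ ...
prior_sd = 1.0             # ... with substantial uncertainty

# Noisy log-estimates of the hard opportunities, centered on the truth.
log_est = np.log(true_hard) + rng.normal(0.0, sigma, size=(n_donors, n_hard))

# Normal-normal shrinkage in log space: posterior mean is a precision-weighted average.
w = (1 / sigma**2) / (1 / sigma**2 + 1 / prior_sd**2)
posterior_log = w * log_est + (1 - w) * prior_mean

naive_misled = (np.exp(log_est).max(axis=1) > true_easy).mean()
adjusted_misled = (np.exp(posterior_log).max(axis=1) > true_easy).mean()
print(f"Misled without the prior adjustment: {naive_misled:.0%}")
print(f"Misled after shrinking toward the prior: {adjusted_misled:.2%}")
```

Under these assumptions the selection bias essentially disappears once the correct prior is applied, which is the claim here: any remaining statistical bias has to come from the prior itself being wrong.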
But it seems fairly clear to me that it can apply.
It also seems fairly clear to me that the opposite can apply: e.g., if giving opportunities are normally distributed and people falsely believe them to be log-normal, then they will give too much to the easy-to-evaluate opportunity.
I find your comments painfully uncharitable, which really reduces my inclination to engage. If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse I don’t feel like helping you right now.
Agree that vetoes aren’t the right solution, though (indeed they are themselves subject to a unilateralist’s curse, perhaps of a worse type).
I find your comments painfully uncharitable, which really reduces my inclination to engage.
Really? I haven’t misinterpreted you in any way. I think the issue is that you don’t like my comments because I’m not being very nice. But you should be able to deal with comments which aren’t very nice.
If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse I don’t feel like helping you right now.
Yes, it’s specifically the effect of the optimizer’s curse in situations where the better options have more uncertainty in their EV estimates. But that’s the only time the optimizer’s curse is decision-relevant anyway, since every other instantiation of it modifies expected utilities without changing the ordinal ranking. And the fact that this happens to be a case with 100 uncertain options rather than 1, or a large group of donors rather than just one, doesn’t change the basic issue that people’s choices will be suboptimal; specifying a very particular scenario doesn’t make it about anything other than the basic optimizer’s curse.