The upside of centralization is that it helps prevent the unilateralist's curse from leading to the funding of bad projects.
This is an interesting point.
It seems to me like mere veto power is sufficient to defeat the unilateralist’s curse. The curse doesn’t apply in situations where 99% of the thinkers believe an intervention is useless and 1% believe it’s useful, only in situations where the 99% think the intervention is harmful and would want to veto it. So technically speaking we don’t need to centralize power of action, just power of veto.
That said, my impression is that the EA community has such a strong allergic reaction to authority that anything that looks like an official decision-making group with an official veto would be resisted. So it seems like the result is that we go past centralization of veto into centralization of action, because (ironically) it seems less authority-ish.
On second thought, perhaps it’s just an issue of framing.
Would you be interested in an “EA donors league” that tried to overcome the unilateralist’s curse by giving people in the league some kind of power to collectively veto the donations made by other people in the league? You’d get the power to veto other people’s donations in exchange for giving them the power to veto yours (details to be worked out).
(I guess the biggest detail to work out is how to prevent people from simply quitting the league when they want to make a non-kosher donation. Perhaps a cash deposit of some sort would work.)
Every funding decision can err in two directions: false positives (funding something that should not have been funded) and false negatives (not funding something that should have been funded). Veto power only guards against the first.
Kerry’s argument was that centralization helps prevent false positives. I was trying to show that there are other ways to prevent false positives.
With regard to false negatives, I would guess that centralization exacerbates that problem: a decentralized group of funders is more likely to make decisions using a diverse set of paradigms.
The unilateralist’s curse does not apply to donations, since funding a project can be done at a range of levels and is not a single, replaceable decision.
The basic dynamic applies. I think it’s pretty reasonable to use the name to point loosely in such cases, even if the original paper didn’t discuss this extension.
The basic dynamic doesn’t apply. This isn’t about the name, it’s about the concept. You can’t make an extension from the literature without mathematically showing that the concept is still relevant!
If there’s potential utility to be had in multiple people taking the same action, then people are just as likely to err in the form of donating too little money as they are to donate too much. The only reason the unilateralist’s curse is a problem is that there is no benefit to be had from lots of agents taking the same action, which prevents the expected value of a marginal naive EV-maximizing agent’s action from being positive.
The kind of set-up where it would apply:
An easy-to-evaluate opportunity which produces 1 util/$, which everyone correctly evaluates
100 hard-to-evaluate opportunities, each of which actually produces 0.1 util/$, but where everyone has an independent estimate of cost-effectiveness which is log-normal and centered on the truth
Then any individual is likely to think one of the 100 is best and donate there. If they all pooled their info, they would instead all donate to the first opportunity.
Obviously the numbers and functional form here are implausible—I chose them for legibility of the example. It’s a legitimate question how strongly the dynamic applies in practice. But it seems fairly clear to me that it can apply. You suggested there’s a symmetry with donating too little—I think this is broken because people are selecting the top option, so they are individually running into the optimizer’s curse.
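To make the dynamic concrete, here is a minimal simulation sketch of the toy scenario above. The number of donors and the noise spread `sigma` are illustrative assumptions of mine, not anything specified in the discussion.

```python
# Minimal sketch of the toy scenario: 1 easy option at 1 util/$, 100 hard
# options at 0.1 util/$, with log-normal estimation noise centered on the truth.
import numpy as np

rng = np.random.default_rng(0)

n_donors = 1000   # assumed number of independent donors
n_hard = 100      # hard-to-evaluate opportunities
true_easy = 1.0   # util/$ of the easy-to-evaluate opportunity
true_hard = 0.1   # util/$ of each hard-to-evaluate opportunity
sigma = 1.5       # assumed spread of the log-normal estimation error

# Each donor's estimates of the hard opportunities: multiplicative log-normal
# noise whose median is 1, so estimates are centered on the truth in log-space.
estimates = true_hard * rng.lognormal(mean=0.0, sigma=sigma, size=(n_donors, n_hard))

# Naive strategy: each donor funds whatever looks best to them individually.
picks_hard = estimates.max(axis=1) > true_easy
realized = np.where(picks_hard, true_hard, true_easy)
print(f"Donors lured to a hard option: {picks_hard.mean():.0%}")
print(f"Average realized util/$ (naive): {realized.mean():.2f}")

# Pooled strategy: average all donors' estimates (in log-space) before choosing.
pooled = np.exp(np.log(estimates).mean(axis=0))
print(f"Best pooled estimate among hard options: {pooled.max():.2f} util/$ "
      f"(vs. {true_easy:.2f} for the easy option)")
```

With these particular numbers, nearly every donor individually ends up backing a hard option and realizes about 0.1 util/$, while the pooled estimates correctly rank the easy option first.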
Have you even read Bostrom’s paper? This isn’t the unilateralist’s curse. You are not extending a principle of the paper, you are rejecting its premise from the start. I don’t understand how this is not obvious.
You are merely restating the optimizer’s curse, and the easy solution there is for people to read GiveWell’s blog post about it. If someone has, then the only way their decisions can be statistically biased is if they have the wrong prior distributions, which nobody can be sure about anyway, and which is therefore wholly inappropriate as grounds for any sort of overruling of donations. But even if it were appropriate, a veto would simply be the wrong tool, since (as noted above) the unilateralist’s curse is no longer present; you would have to find a better strategy, one that corrects for improper priors in a way that fits the actual situation.
But it seems fairly clear to me that it can apply.
It also seems fairly clear to me that the opposite can apply: e.g., if giving opportunities are normally distributed and people falsely believe them to be log-normal, then they will give too much to the easy-to-evaluate opportunity.
I find your comments painfully uncharitable, which really reduces my inclination to engage. If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse I don’t feel like helping you right now.
Agree that vetoes aren’t the right solution, though (indeed they are themselves subject to a unilateralist’s curse, perhaps of a worse type).
I find your comments painfully uncharitable, which really reduces my inclination to engage.
Really? I haven’t misinterpreted you in any way. I think the issue is that you don’t like my comments because I’m not being very nice. But you should be able to deal with comments which aren’t very nice.
If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse I don’t feel like helping you right now.
Yes, it’s specifically the effect of the optimizer’s curse in situations where the better options have more uncertainty regarding their EV estimates. But that’s the only time the optimizer’s curse is decision-relevant anyway, since all other instantiations of it modify expected utilities without changing the ordinal ranking. And the fact that this happens to be a case with 100 uncertain options rather than 1, or a large group of donors rather than just one, doesn’t change the basic issue that people’s choices will be suboptimal, so specifying a very particular scenario doesn’t make it about anything other than the basic optimizer’s curse.
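As an aside, here is a small sketch of the ranking claim above, my own illustration with made-up numbers, assuming a normal prior and normal measurement noise: uniform shrinkage lowers every expected value but preserves the ordering, while unequal uncertainty can flip it.

```python
# Posterior mean under a normal prior and normal measurement noise (illustrative).
def posterior_mean(estimate, noise_sd, prior_mean=0.0, prior_sd=1.0):
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)  # weight placed on the estimate
    return w * estimate + (1 - w) * prior_mean

# Equal uncertainty: both EV estimates shrink, but the ranking is preserved.
print(posterior_mean(2.0, noise_sd=1.0), posterior_mean(1.5, noise_sd=1.0))  # 1.00 > 0.75

# Unequal uncertainty: the option that looked best but was much noisier
# drops below the better-measured one, so the ranking flips.
print(posterior_mean(2.0, noise_sd=3.0), posterior_mean(1.5, noise_sd=1.0))  # 0.20 < 0.75
```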