But if I can’t convince them to fund me for some reason and I think they’re making a mistake, there are no other donors to appeal to anymore. It’s all or nothing.
The upside of centralization is that it helps prevent the unilateralist’s curse around funding bad projects. As the number of funders increases, it becomes increasingly easy for bad projects to find someone who will fund them.
That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn’t fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I’m not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I’m open to ideas on how to help with this concern.
Quick (thus likely wrong) thought on solving the unilateralist’s curse: put multiple people in charge of each fund, each representing a different worldview, and give each of them 3 grant vetoes per year (so they can block grants that are awful under their worldview). You can also give them control of a percentage of the fund in proportion to CEA’s / the donors’ confidence in that worldview.
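To make the shape of that proposal concrete, here is a minimal sketch of how one grant round could work under it. Everything specific here (the Manager type, enforcing the three-veto cap in code, each manager directing their own weighted share of the pot) is my own reading of the idea above, not an actual EA Funds design.

```python
from dataclasses import dataclass

@dataclass
class Manager:
    name: str
    weight: float          # CEA's / donors' confidence in this manager's worldview
    vetoes_left: int = 3   # grant vetoes available this year

def run_grant_round(pot, proposals, managers, vetoes, allocations):
    """One grant round under the veto-plus-weighted-shares idea above.

    vetoes:      {manager name: projects they want to block this round}
    allocations: {manager name: {project: fraction of that manager's own share}}
    Returns {project: dollars granted}.
    """
    # Each manager may block grants, up to their remaining veto budget.
    blocked = set()
    for m in managers:
        for project in list(vetoes.get(m.name, []))[: m.vetoes_left]:
            blocked.add(project)
            m.vetoes_left -= 1

    # Each manager then directs their worldview-weighted share of the pot
    # among the surviving proposals.
    grants = {p: 0.0 for p in proposals if p not in blocked}
    total_weight = sum(m.weight for m in managers)
    for m in managers:
        share = pot * m.weight / total_weight
        for project, fraction in allocations.get(m.name, {}).items():
            if project in grants:
                grants[project] += share * fraction
    return grants
```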
Or maybe allocate grants according to a ranked preference vote of the three fund managers, plus have them all individually and publicly write up their reasoning and disagreements? I’d like that a lot.
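For a sense of what a ranked preference vote over grants could look like mechanically, here is a small sketch using a Borda count. The comment above doesn't specify a voting rule, so the Borda method, the ballot contents, and the equal split of money among the winning projects are all illustrative assumptions.

```python
from collections import defaultdict

def borda_allocate(rankings, pot, n_grants):
    """rankings: {fund manager: [projects ordered best-first]}, one ballot each.
    Picks the n_grants projects with the highest Borda score and splits the pot equally."""
    scores = defaultdict(int)
    for ballot in rankings.values():
        for position, project in enumerate(ballot):
            scores[project] += len(ballot) - 1 - position  # top choice earns the most points
    winners = sorted(scores, key=scores.get, reverse=True)[:n_grants]
    return {project: pot / n_grants for project in winners}

# Hypothetical ballots from three fund managers over three candidate projects:
ballots = {
    "manager_a": ["project_x", "project_y", "project_z"],
    "manager_b": ["project_y", "project_x", "project_z"],
    "manager_c": ["project_y", "project_z", "project_x"],
}
print(borda_allocate(ballots, pot=100_000, n_grants=2))  # project_y and project_x get 50k each
```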
Serious question: What do you think of N fund managers in your scenario?
I don’t understand the question.
Allocating grants according to a ranked preference vote of an arbitrary number of people (and having them write up their arguments): what is the optimal number here? Where is the inflection point where adding more people decreases the quality of the grants?
On a third reading I realize I somewhat misconstrued “three fund managers” as “three fund managers per fund” rather than “the three fund managers we have right now (Nick, Elie, Lewis)”, but the possibility is still interesting with any variation.
That’s a good question. I did intend “three fund managers” to mean “the three fund managers we have right now”, but I could also see the optimal number of people being 2-3.
I’m not sure that’s true. There are a lot of venture funds in the Valley but that doesn’t mean it’s easy to get any venture fund to give you money.
There’s no shortage of bad ventures in the Valley: https://thenextweb.com/gadgets/2017/04/21/this-400-juicer-that-does-nothing-but-squeeze-juice-packs-is-peak-silicon-valley/#.tnw_Aw4G0WDt
http://valleywag.gawker.com/is-the-grilled-cheese-startup-silicon-valleys-most-elab-1612937740
Of course, there are plenty of other bad ventures that don’t get funding...
Every time in the past week or so that I’ve seen someone talk about a bad venture, they’ve given the same example. That suggests that there is indeed a shortage of bad ventures—or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are “bad” in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)
Or that there’s one recent venture that’s so laughably bad that everyone is talking about it right now...
It’s not clear that Juicero is actually a bad venture in the sense of not returning money for its investors.
Even if that were the case, VCs make most of their money from a handful of companies. A VC can have a good fund even if 90% of their investments don’t return their money.
I would guess that the same is true for high-risk philanthropic investments. It’s okay if some high-risk investments don’t provide value as long as you are betting on some investments that deliver.
I don’t have the precise statistics handy, but my understanding is that VC returns are very good for a small number of firms and break-even or negative for most VC firms. If that’s the case, it suggests that as more VCs enter the market, more bad companies are getting funded.
This is a huge digression, but:
I’m not sure it’s obvious that current VCs fund all the potential top companies. If you look into the history of many of the biggest wins, many of them nearly failed multiple times and could easily have shut down if a key funder hadn’t existed (e.g. Airbnb and YC).
I think a better approximation is an efficient market, in which the risk-adjusted returns of VC at the margin are equal to the market. This means that the probability of funding a winner for a marginal VC is whatever it would take for their returns to equal the market.
Then, to a first order, becoming a VC has no effect on the cost of capital (which is fixed by the market), and so no effect on the number of startups formed. So you’re right that additional VCs aren’t helpful, but it’s for a different reason.
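As a back-of-the-envelope reading of that claim (the multiples below are made-up illustrative numbers, not data about actual VC or market returns):

```python
# If non-winners return roughly nothing and the broad market roughly doubles
# over a ~10-year fund life (both assumptions for illustration only):
MARKET_MULTIPLE = 2.0    # assumed market return over the fund's life
WINNER_MULTIPLE = 30.0   # assumed payoff multiple on a rare "winner"

# Hit rate at which a marginal VC's expected return just matches the market:
implied_hit_rate = MARKET_MULTIPLE / WINNER_MULTIPLE
print(f"Implied chance a marginal VC's investment is a winner: {implied_hit_rate:.1%}")  # ~6.7%
```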
To a second order, there probably are benefits, depending on how skilled you are. The market for startups doesn’t seem very efficient and requires specialised knowledge to access. If you develop the VC skill-set, you can reduce transaction costs and make the market for startups more efficient, which enables more to be created.
Moreover, the more money that gets invested rather than consumed, the lower the cost of capital in the economy, which lets more companies get created.
The second order benefits probably diminish as more skilled VCs enter, so that’s another sense in which extra VCs are less useful than those we already have.
I don’t think the argument that there are a lot of VC firms that don’t get good returns suggests that centralization into one VC firm would be good. Different successful VC firms have different preferences in how to invest.
Having one central hub of decision making is essentially the model used in the Soviet Union. I don’t think that’s a good model.
Decentralized decision making usually beats central planning by a single decision-making authority in domains with a lot of spread-out information.
I hadn’t considered the unilateralist’s curse and I’ll keep this in mind.
To what extent do you think it’s sustainable to
a) advocate for a centralised system run by trusted professionals, vs.
b) build up the capacity of individual funders to recognise activities that are generally seen as problematic/negative EV by cause prioritisation researchers?
Put simply, I wonder if going for a) centralisation would make the ‘system’ fragile because EA donors would be less inclined to build up their awareness of big risks. For those individual donors who’d approach cause-selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they’re much better off handing over their money and employing their skills elsewhere.
I admit I don’t have a firm grasp of unilateralist’s curse scenarios.
This is an interesting point.
It seems to me like mere veto power is sufficient to defeat the unilateralist’s curse. The curse doesn’t apply in situations where 99% of the thinkers believe an intervention is useless and 1% believe it’s useful, only in situations where the 99% think the intervention is harmful and would want to veto it. So technically speaking we don’t need to centralize power of action, just power of veto.
That said, my impression is that the EA community has such a strong allergic reaction to authority that anything that looks like an official decision-making group with an official veto would be resisted. So it seems like the result is that we go past centralization of veto into centralization of action, because (ironically) it seems less authority-ish.
On second thought, perhaps it’s just an issue of framing.
Would you be interested in an “EA donors league” that tried to overcome the unilateralist’s curse by giving people in the league some kind of power to collectively veto the donations made by other people in the league? You’d get the power to veto the donations of other people in exchange for giving others the power to veto your donations (details to be worked out).
(I guess the biggest detail to work out is how to prevent people from simply quitting the league when they want to make a non-kosher donation. Perhaps a cash deposit of some sort would work.)
Every choice to fund has false positives (funding something that should not have been funded) and false negatives (not funding something that should have been funded). Veto power only guards against the first one.
Kerry’s argument was that centralization helps prevent false positives. I was trying to show that there are other ways to prevent false positives.
With regard to false negatives, I would guess that centralization exacerbates that problem—a decentralized group of funders are more likely to make decisions using a diverse set of paradigms.
The unilateralist’s curse does not apply to donations, since funding a project can be done at a range of levels and is not a single, replaceable decision.
The basic dynamic applies. I think it’s pretty reasonable to use the name to point loosely in such cases, even if the original paper didn’t discuss this extension.
The basic dynamic doesn’t apply. This isn’t about the name, it’s about the concept. You can’t make an extension from the literature without mathematically showing that the concept is still relevant!
If there’s potential utility to be had in multiple people taking the same action, then people are just as likely to err in the form of donating too little money as they are to donate too much. The only reason the unilateralist’s curse is a problem is that there is no benefit to be had from lots of agents taking the same action, which prevents the expected value of a marginal naive EV-maximizing agent’s action from being positive.
The kind of set-up where it would apply:
An easy-to-evaluate opportunity which produces 1 util/$, which everyone correctly evaluates
100 hard-to-evaluate opportunities each of which actually produces 0.1 util / $, but where everyone has an independent estimate of cost-effectiveness which is a log-normal centered on the truth
Then any given individual is likely to think one of the 100 is best and donate there. If they all pooled their info, they would instead all donate to the first opportunity.
Obviously the numbers and functional form here are implausible—I chose them for legibility of the example. It’s a legitimate question how strongly the dynamic applies in practice. But it seems fairly clear to me that it can apply. You suggested there’s a symmetry with donating too little—I think this is broken because people are selecting the top option, so they are individually running into the optimizer’s curse.
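A quick simulation of this toy set-up (the number of donors, the spread of the estimation error, and the random seed are my own assumptions; only the 1 util/$ vs. 0.1 util/$ structure comes from the example above):

```python
import numpy as np

rng = np.random.default_rng(0)

N_DONORS = 1000       # assumed number of independent donors
N_NOISY = 100         # hard-to-evaluate opportunities, each truly worth 0.1 util/$
TRUE_NOISY = 0.1
TRUE_CERTAIN = 1.0    # the easy-to-evaluate opportunity everyone values correctly
SIGMA = 1.5           # assumed spread of each donor's log-normal estimation error

# Each donor's private estimate of each noisy opportunity, centred on the truth.
estimates = TRUE_NOISY * rng.lognormal(mean=0.0, sigma=SIGMA, size=(N_DONORS, N_NOISY))

# Acting alone, a donor gives wherever their own estimate is highest,
# so almost everyone's top pick is one of the over-estimated 0.1 util/$ projects.
fooled = np.mean(estimates.max(axis=1) > TRUE_CERTAIN)
print(f"Donors whose top pick is really a 0.1 util/$ project: {fooled:.0%}")

# Pooling everyone's estimates (geometric mean per opportunity) lands close to
# the true 0.1 util/$, below the certain 1 util/$ option, so pooled donors
# would all give to the easy-to-evaluate opportunity instead.
pooled = np.exp(np.log(estimates).mean(axis=0))
print(f"Pooled estimates span {pooled.min():.3f} to {pooled.max():.3f} util/$")
```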
Have you even read Bostrom’s paper? This isn’t the unilateralist’s curse. You are not extending a principle of the paper, you are rejecting its premise from the start. I don’t understand how this is not obvious.
You are merely restating the optimizer’s curse, and the easy solution there is for people to read GiveWell’s blog post about it. If someone has, then the only way their decisions can be statistically biased is if they have the wrong prior distributions, which is something that nobody can be sure about anyway, and therefore is wholly inappropriate as the grounds for any sort of overruling of donations. But even if it were appropriate, having a veto would simply be the wrong thing to do, since (as noted above) the unilateralist’s curse is no longer present, and you’re going to have to find a better strategy that corrects for improper priors in accordance with the actual situation.
It also seems fairly clear to me that the opposite can apply - e.g., if giving opportunities are normally distributed and people falsely believe them to be lognormal, then they will give too much to the easy-to-evaluate opportunity.
I find your comments painfully uncharitable, which really reduces my inclination to engage. If you can’t find an interpretation of my comment which isn’t just about the optimizer’s curse I don’t feel like helping you right now.
Agree that vetoes aren’t the right solution, though (indeed they are themselves subject to a unilateralist’s curse, perhaps of a worse type).
Really? I haven’t misinterpreted you in any way. I think the issue is that you don’t like my comments because I’m not being very nice. But you should be able to deal with comments which aren’t very nice.
Yes, it’s specifically the effect of the optimizer’s curse in situations where the better options have more uncertainty regarding their EV estimates, but that’s the only time that the optimizer’s curse is decision relevant anyway, since all other instantiations of the optimizer’s curse modify expected utilities without doing anything to change the ordinal ranking. And the fact that this happens to be a case with 100 uncertain options rather than 1, or a large group of donors rather than just one, doesn’t modify the basic issue that people’s choices will be suboptimal, so the fact that you specified a very particular scenario doesn’t make it about anything other than the basic optimizer’s curse.