Note: EA is totally a trust network. I don’t think the funds are trying to be anything like GiveWell, which you’re supposed to trust based on the publicly verifiable rigour of its research. EA Funds sits much further toward the “have you personally seen CEA make good decisions in this area?” or “do you specifically trust one of the re-granters?” end of the spectrum. Which is fine; trust is how tightly-knit teams and communities often get built. But if you gave to it thinking “this will be like giving to Oxfam, with the same accountability structure”, then you’ll be surprised to find that it works significantly via personal connections.
Just as you’d only fund a startup if you knew the founders and how they worked, you should probably only give to EA Funds for similar reasons; and if a startup tried to write its business plan so that anyone would have a reason to fund it, the plan probably wouldn’t be very good. I think EA should continue to be a trust-based network, so on the margin I guess people should give less to EA Funds, rather than EA Funds making grants that are more defensible.
On this very website, clicking the link “New to Effective Altruism?” and a little browsing quickly leads to recommendations to give to EA Funds. If EA Funds really is intended to be a high-trust option, CEA should change that recommendation.
Yup. I suppose I wrote down my assessment of the information available about the funds and the sort of things that would cause me to donate to them, not the marketing used to advertise them, which does indeed feel disconnected. It seems there’s a confused attempt to make the funds look reasonable to everyone, whilst in fact not offering the sort of evidence that would make that so.
The evidence offered is not the ‘evidence-backed charities’ that made GiveWell famous and trustworthy, but rather “here is a high-status person in a related field who has a strong connection to EA”. That is not so different from how other communities ask their members for funding: it rests on trust in the community’s leaders, not on metrics that outsiders can objectively verify. So you should ask yourself what causes you to trust CEA and rely on that, rather than on the objective metrics associated with EA Funds (of which there are far fewer than with GiveWell). For example, if CEA has generally made good philosophical progress in this area and has also made good hiring decisions, that should increase your trust in the grant managers.
I don’t think the funds are trying to be anything like GiveWell
I disagree – the funds are definitely positioned as riskier than, e.g., GiveWell’s recommendations, but nothing like as risky as you’re making out. Take the Global Health and Development Fund:
Why might you choose not to donate to this fund?
Donors with a very low risk tolerance may choose to avoid this fund because the fund is empowered to take risks at the organizational level by funding unproven, but promising new charities.
Reading that, I would come away thinking that grants from that fund ought to be rigorous in a (mostly) publicly verifiable way (more than just a conversation someone had with someone).
This strikes me as a false dichotomy between ‘trust the grant-making because lots of information about its decisions is made public’ and ‘trust the grant-making because you personally know the re-granter (or know someone who knows someone, etc.)’. I would expect it instead to work the way a lot of for-profit funds presumably work: you trust your money to a particular fund manager because they have a strong history of their funds making money. You don’t need to know Elie personally (or know how he works and makes decisions) to know his track record of setting up GiveWell and thereby finding excellent giving opportunities.