Issues with centralised grantmaking
Recently someone made a post expressing their unease with EA's recent wealth. I feel uncomfortable too. The primary reason I feel uncomfortable is that a dozen people are responsible for granting out hundreds of millions of dollars, and that as smart and hardworking as these people are, they will have many blindspots. I believe there are other forms of grantmaking structures that should supplement our current model of centralised grantmaking, as they would reduce the blindspots and get us closer to optimal allocation of resources.
In this post I will argue:
That we should expect centralised grantmaking to lead to suboptimal allocation of capital.
That there exist other grantmaking structures that would get us closer to the best possible allocation.
Issues with centralised funding
Just as the USSR's economic planners struggled to determine the correct price of every good, I believe EA grantmaking departments will struggle for analogous reasons. Grantmakers have imperfect information! No matter how smart the grantmaker, they can't possibly know everything.
To overcome their lack of omniscience grantmakers must rely on heuristics such as:
Is there someone in my network who can vouch for this person/team?
Do they have impressive backgrounds?
Does their theory of change align with my own?
These heuristics can be perfectly valid for grantmakers to use, and result in the best allocation they can achieve given their limited information. But the heuristics are biased and result in suboptimal allocation relative to what could theoretically be achieved with perfect information.
For example, people who have spent significant time in EA hubs are more likely to be vouched for by someone in the grantmaker's network. Having attended an Ivy League university is a great signal that someone is talented, but there is a lot of talent that did not attend one.
My issue is not that grantmakers use these proxies. My issue is that if all of our grantmaking uses the same proxies, then there will be a great deal of talented people with great projects that should have been funded but were overlooked. I'm not sure about this, but I imagine that some complaints about EA's perceived elitism stem from this. EA grantmakers are largely cut from the same cloth, live in the same places, and have similar networks. Two anti-virus systems that detect the same 90% of viruses are no more useful than a single anti-virus system; two uncorrelated systems that each detect 90% will together detect 99% of all viruses. Similarly, we should strive for our grantmakers' biases to be uncorrelated if we want the best allocation of our capital.
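The arithmetic behind the anti-virus analogy can be checked in a few lines (a minimal sketch; the function name is mine, and it assumes each system misses viruses independently of the others):

```python
def combined_detection_rate(rates):
    """Fraction of viruses caught by at least one system,
    assuming each system misses independently of the others."""
    miss = 1.0
    for r in rates:
        miss *= (1.0 - r)  # probability that every system misses
    return 1.0 - miss

# Two perfectly correlated 90% systems catch no more than one alone (90%),
# but two independent 90% systems together catch ~99%:
combined_detection_rate([0.9, 0.9])  # ~0.99
```

The same logic is why uncorrelated grantmaker biases are so valuable: the probability that every funder overlooks a good project is the product of their individual miss rates, which shrinks fast as long as the misses are independent.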
In the long run, overreliance on these proxies can also lead to bad incentives and increased participation in zero-sum games such as pursuing expensive degrees to signal talent.
We shouldn't expect our current centralised grantmaking to be optimal in theory, and I don't think it is in practice either. But fortunately I think there's plenty we can do to improve it.
What we can do to improve grantmaking
The issue with centralised grantmaking is that it operates off imperfect information. To improve grantmaking we need to take steps to introduce more information into the system. I don't want to propose anything particularly radical. The system we have in place is working well, even if it has its flaws. But I do think we should be looking into ways to supplement our current centralised funding with other forms of grantmaking that have different strengths and weaknesses.
Each new type of grantmaking and grantmaker will spot talent that other grantmaking programs would have overlooked. Combined they create a more accurate and robust ecosystem of funding.
FTX Future Fund's regranting programme is a great example of the kind of supplementary grantmaking structure I think we should be experimenting with. I feel slightly queasy that their system for deciding on new grantmakers may perpetuate the biases of the current grantmakers. But I don't want to let the perfect be the enemy of the good, and their regranting programme is yet another reason I'm so excited about the FTX Future Fund.
Below are a few off-the-cuff ideas that could supplement our current centralised structure:
Quadratic funding
Grantmaker rotation system
Regranting programmes
Incubator programs to discover projects and talent worth funding
More grantmakers
Hundreds of people spent considerable time writing applications to FTX Future Fund's first round of funding. It seems inefficient to me that there aren't more sources of funding looking over these applications and funding the projects they think look the most promising.
Given that many are receiving answers on their FTX grants right now, I think the timing of this post is unfortunate. I worry that our judgement will be clouded by emotions over whether we received a grant, and if we didn't, whether we approved of the reasoning and so forth. My goal is not to criticise our current grantmakers. I think they are doing an excellent job considering their constraints. My goal is instead to point out that it's absurd to expect them to be superhuman and somehow correctly identify every project worth funding!
No grantmaker is superhuman, but we should strive for a grantmaking ecosystem that is.
One issue is that decentralised grant-making could increase the risk that projects that are net negative get funding, as per the logic of the unilateralist's curse. The risk of that probably varies with cause area and type of project.
My hunch is that many people have a bit of an intuitive bias against centralised funding; e.g. because it conjures up images of centralised bureaucracies (cf. the reference to the USSR) or appears elitist. I think that in the end it's a tricky empirical question, and that the hypothesis that relatively centralised funding is indeed best shouldn't be discarded prematurely.
I should also say that how centralised or coordinated grant-makers are isn't just a function of how many grant-makers there are, but also of how much they communicate with each other. There might be ways of getting many of the benefits of decentralisation while reducing the risks, e.g. through the right kinds of coordination.
Right, but the unilateralist's curse is just a pro tanto reason not to have dispersed funding. It's a source of false positives (funding stuff that shouldn't get funded), but that needs to be weighed against the false negatives of centralised funding (not funding stuff that should get funded). It's not obvious a priori which is larger.
Yes, but it was a consideration not mentioned in the OP, so it seemed worth mentioning.
To be honest, the overall (including non-EA) grantmaking ecosystem is not so centralized that people can't get funding for possibly net-negative ideas elsewhere. Especially if they have already put work in, have a handful of connections, or will be working in a "sexy" cause area like AI that even some rando UHNWI would take an interest in.
Given that, I don't think that keeping grantmaking very centralized yields enough of a reduction in risk to be worth protecting centralized grantmaking on that metric. And frankly, sweeping such risky applications under the rug, hoping they disappear because they aren't funded (by you, that one time), seems a terrible strategy. I'm not sure that is what is effectively happening, but if it is:
I propose a 2 part protocol within the grantmaking ecosystem to reduce downside risk:
1. Overt feedback from grantmakers in the case that they think a project is potentially net-negative.
2. To take it a step further, EA could employ someone whose role it is to try to actively sway a person from an idea, or help mitigate the risks of their project if the applicants affirm they are going to keep trying.
Imagine, as an applicant, receiving an email saying:
"Hello [Your Name],
Thank you for your grant application. We are sorry to bear the bad news that we will not be funding your project. We commend you on the effort you have already put in, but we have concerns that there may be great risks to following through, and we want to strongly encourage you to consider other options.
We have CC'ed [name of unilateralist's curse expert with domain expertise], a specialist in cases like these who contracts with various foundations. They would be willing to have a call with you about why your idea may be too risky to move forward with. If this email has not already convinced you, we hope you consider scheduling a call on their [calendly] for more details and ideas, including potential risk mitigation.
We also recommend you apply for 80k coaching [here]. They may be able to point you toward roles that are just as good or a better fit for you, but with no big downside risk and with community support. You can list us as a recommendation on your coaching application.
We hope that you do not take this too personally, as this is not an uncommon reason to withhold funding (hopefully evidenced by the resources in place for such cases), and we hope to see you continuing to put your skills toward altruistic efforts.
Best,
[Name of Grantmaker]"
Should I write a quick EA Forum post on this 2-part idea? (Basically I'll copy-paste this comment and add a couple of paragraphs.) Is there a better idea?
I realize that email will look dramatic as a response to some, but it wouldn't have to be sent in every "cursed case". I'm sure many applications are rather random ideas. I imagine that a grantmaker could tell from the applicants' resumes and their social positioning how likely the founding team is to keep trying to start or perpetuate a project.
I think giving this type of feedback when warranted also reflects well on EA. It makes EA seem less of an ivory tower/billionaire hobby and more of a conversational and collaborative movement.
*************************************
The above is a departure from the point of the post. FWIW, I do think the EA grantmaking ecosystem is so centralized that people who have potentially good ideas stemming from a somewhat different framework than those of typical EA grantmakers will struggle to get funding elsewhere. I agree decentralizing grantmaking to some extent is important, and I have my reasoning here.
tl;dr please write that post
I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower (arguably our most limited resource), but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.
I completely agree with this actually. I think concern over the unilateralist's curse is a great argument in favour of keeping funding central, at least in many areas. I also don't feel particularly confident that attempts to spread out or democratize funding would actually lead to net-better projects.
But I do think there is a strong argument in favour of experimenting with other types of grantmaking, seeing as we have identified weaknesses in the current form which could potentially be alleviated.
I think the unilateralist's curse can be avoided if we make sure our experiments with other types of grantmaking steer clear of hazardous funding domains.
Actually, a simple (but perhaps not easy) way to reduce the risks of funding bad projects in a decentralized system would be to have a centralized team screen out obviously bad projects. For example, in the case of quadratic funding, prospective projects would first be vetted to filter out clearly bad projects. Then, anyone using the platform would be able to direct matching funds to whichever of the approved projects they like. As an analogy, Impact CoLabs is a decentralized system for matching volunteers to projects, but it has a centralized screening process with somewhat rigorous vetting criteria.
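For readers who haven't seen the mechanism, quadratic funding allocates a matching pool in proportion to the square of the sum of the square roots of each project's individual contributions, which rewards broad support over a single large donor. A rough sketch of the ideal, uncapped match (names are mine; real deployments cap the match by the size of the matching pool):

```python
import math

def quadratic_match(contributions):
    """Ideal quadratic-funding match for one project:
    (sum of sqrt(c_i))^2 minus what was directly contributed."""
    direct = sum(contributions)
    return sum(math.sqrt(c) for c in contributions) ** 2 - direct

# Ten donors giving $1 each attract far more matching than one donor
# giving $10, because breadth of support is what the formula rewards:
quadratic_match([1.0] * 10)  # -> 90.0
quadratic_match([10.0])      # ~0.0 (a single donor earns no match)
```

In the screened variant described above, the central team would simply filter the list of eligible projects before any contributions (and hence any matching) are accepted.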
(Just saying, I did lots of the vetting for CoLabs, and I think it would be better if our screening were totally transparent instead of hidden, though I don't speak for the entire team.)
Yes! Exactly!
If you want a system to counter the unilateralist's curse, then design a system with the goal of countering the unilateralist's curse. Don't rely on an unintended side effect of an incidental system design.
I don't think there is a negative bias against centralised funding in the EA network.
I've discussed funding with quite a few people, and my experience is that EAs like experts and efficiency, which matches well with centralised funding, at least in theory. I have never heard anyone compare it to the USSR and the like before.
Even this post is not against centralised funding. The author is just arguing that any system has blindspots, and that we should have other systems too.
While it's definitely a potential issue, I don't think it's a guaranteed issue. For example, with a more distributed grantmaking system, grantmakers could agree not to fund projects with a consensus around potential harms, while funding projects that align with their specific worldviews, which other funders may not be interested in funding but do not believe have significant downside risks. That structure was part of the initial design intent of the first EA Angel Group (not to be confused with the EA Angel Group that is currently operating).
Yes, cf. my ending:
I see, just pointing out a specific example for readers! You mention the "hypothesis that relatively centralised funding is indeed best shouldn't be discarded prematurely." Do you think it's concerning that EA hasn't (to my understanding) tried decentralized funding at any scale?
I haven't studied EA grant-making in detail so can't say with any confidence, but if you ask me I'd say I'm not concerned, no.
Isn't there a very considerable potential opportunity cost in not trying out funding systems that could vastly outperform the current one?
Obviously there is a big opportunity cost to not trying something that could vastly outperform something we currently do; that's more or less true by definition. But the question is whether we could (or rather, whether there is a decent chance that we would) see such a vast outperformance.
There's evidence to suggest that decentralized decision-making can outperform centralized decision-making, for example with prediction markets and crowdsourcing. I think it's problematic in general to assume that centralized thinking and institutions are better than decentralized thinking and institutions, especially if that reasoning is based on the status quo. I was asking this series of questions because, by calling centralized funding a "hypothesis," I thought you would support testing other hypotheses by default.
I don't think there's evidence that centralised or decentralised decision-making is in general better than the other. It has to be decided on a case-by-case basis.
I think this discussion is too abstract, and that to determine whether EA grant-making should be more decentralised one needs to get into far more empirical detail. I just wanted to raise a consideration the OP didn't mention in my top-level comment.
I agree! I was trying to highlight that because we're not sure whether centralized funding is better, it would be a high priority to test other mechanisms, especially if there's reason to believe other mechanisms could result in significantly different outcomes.
One idea I have:
Instead of increasing the number of grantmakers, which would increase the number of altruistic agents and hence the risks from the unilateralists' curse, we could work on ways for our grantmakers to have different blind spots. The simplest approach would be to recruit grantmakers from different countries, academic backgrounds, etc.
That being said, I am still in favour of a greater number of grantmakers, but in areas unrelated to AI safety and biosecurity, where the risks from the unilateralist's curse are much smaller: global health, development, farmed animal welfare, promoting evidence-based policy, promoting liberal democracy, etc.
I'm not sure this comment is helping, but I don't agree with this post.
Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.
Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution; well-aligned, competent grantees are critical. Also, success of the project is only one consideration (deploying money has effects on the EA space and also on the outside world, maybe with lasting effects that can be hard to see).
Once you solve the above problems, which benefit from a small number of grant makers, there are classes of projects into which you can deploy a lot of money (AMF, big science grants, or CSET).
The above response doesn't cover all kinds of EA projects, like the development of people, or nascent smaller projects that are important. To address this, outreach is a focus and grant makers are often generous with small grants.
Grant makers aren't just passively gatekeeping money, just saying yes or no to proposals. There's an extremely important and demanding role that grant makers perform (that might be unique to EA) where they develop whole new fields and programmes. So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.
The post doesn't mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many times. (Of course this can be poorly executed, capture etc. is possible, but someone I know is perceptive and hasn't seen evidence of this.)
I'm not sure I'm wording this well, but inferential distance can be vast. I find it difficult to even "see" how better people are better than me. It's hard to understand this; you sort of have to experience it. To give an analogy, an Elo 1800 chess player can beat me, and an Elo 2400 chess player can beat that person. In turn, an Elo 2800 player can effortlessly beat those people. When being outplayed in this way, communication is literally impossible; I wouldn't understand what is going on in a game between me and the Elo 1800, even if they explained everything, move by move. In the same way, the very best experts in a field have deep and broad understanding, so they can make large, correct inferential leaps very quickly. I think this should be appreciated. I don't think it's unreasonable that EA can get the very best experts in the world and that they have insights like this. This puts constraints on the nature and number of grant makers who need to communicate and coordinate with these experts, and grantmakers themselves may have these qualities.
I think someone might see a large amount of money and see a small amount of people deciding where it goes. They might feel that seems wrong.
But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and have specifically set up these systems. We've seen two instances of this now. The reason why there is money at all is because these structures have been set up successfully.
I'm qualified and well positioned to give the perspective above. I'm someone who has benefitted and gotten direct insights from grant makers, and probably seen large funding offered. At the same time, I don't have this money. Due to the consequences of my actions, I've removed myself from the EA projects gene pool. I'm sort of an EA Darwin award holder. So I have no personal financial/project motivation to defend this thing if I thought it was bad.

There are ways to design centralized, yet decentralized grantmaking programs. For example, regranting programs that are subject to restrictions, like not funding projects that some threshold of grantmakers/other inputs consider harmful.
Can you specify what "in design of EA and meta projects" means?
EA has multiple grantmakers right now, and lots of people who are aware of various infohazards, and it doesn't seem to me like the communication of private, sensitive information has been an issue. I'm sure there's a threshold at which this would fail (perhaps if thousands of people were all involved with discussing private, sensitive information), but I don't think we're close to that threshold.
I think the perception of who is a well-aligned, competent grantee can vary by person. More of a reason to have more decentralization in grantmaking. Also, the forecasting of effects can vary by person, and having this be centralized may lead to failures to forecast certain impacts accurately (or at all).
My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.
There have also been large amounts of funds granted with decentralized grantmaking; see Gitcoin's funding of public goods as an example.
These are good questions.
So this is getting abstract and outside my competency; I'm basically LARPing now.
I wrote something below that seems not implausible.
I didn't mean infohazards or downsides.
This is about intangible characteristics that seem really important in a grantee.
To give intuition, I guess one analogy is hiring. You wouldn't hire someone off a LinkedIn profile; there's just so much "latent" or unknown information and fit that matters. To solve this problem, people often have pretty deep networks and do reference checks on people.
This is important because if you went in big for another CSET, or something that had to start in the millions, you had better know the people and the space super well.
I think this means you need to communicate well with other grant makers. For any given major grant, this might be a lot easier with 3-5 close colleagues, versus a group of 100 people.
I guess this is fair, that my answer is sort of kicking the can. More grant makers is more advisers too.
On the other hand, I think there are two other ways to look at this:
Let's say you're in AI safety or global health.
There may only be, say, about 50 experts in malaria or agents/theorems/interpretability. So it doesn't matter how large your team is; there's no value in getting 1,000 grantmakers if you only need to know 200 experts in the space.
Another point is that decentralization might make it harder to use experts, so you may not actually get deep or close understanding to use the expert.
This answer is pretty abstract and speculative. I'm not sure I'm saying anything above noise.
Let's say Charles He starts some meta EA service, say an AI consultancy, "123 Fake AI".
Charles's service is actually pretty bad, his methods are obscure, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.
Charles sort of entrenches, rewards his friends, etc. So any normal individual raising issues is shouted down.
Someone has to kibosh this, and a set of unified grant makers could do this.
I see! Interestingly, there are organizations, like DAOs, that do hiring in a decentralized manner (lots of people deciding on one candidate). There probably isn't much efficacy data on that compared to more centralized hiring, but it's something I'm interested in knowing.
I think there are ways to assess candidates that can be less centralized, like work samples, rather than reference checks. I mainly use that when hiring, given it seems some of the best correlates of future work performance are present and past work performance on related tasks.
If sensitive info matters, I can see smaller groups being more helpful; I guess I'm not sure to what degree that's necessary. Basically I think that public info can also carry pretty good signal.
That's a good point! Hmm, I think that does get into interesting and harder-to-answer questions, like whether experts are needed/how useful they are, whether having people ask a bunch of different subject matter experts that they are connected with (easier with a more decentralized model) is better than asking a few that a funder has vetted (common with centralized models), whether an expert interview that can be recorded and shared is as good as interviewing the expert yourself, etc., some of which may be field-by-field.
Is there a reason a decentralized network couldnāt also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.
So this is borderline politics at this point, but I would expect that a malign agent could capture or entrench in some sort of voting/decentralized network more easily than in any high-quality implementation of an EA grant making system (e.g., see politicians/posturing).
(This is a little spicy, and there are maybe some inferential leaps here, but) a good comment related to the need for centralization comes from what I think are very good inside views on ETH development.
In ETH development, it's clear how centralized decision-making de facto occurs for all important development and functionality. It is made by a central leadership, despite there technically being voting and decentralization in a mechanical sense.
That's pretty telling, since this is like the canonical decentralized thing.
Your comments are really interesting and important.
I guess that public demand for my own personal comments is low, and I'll probably no longer reply; feel free to PM!
I don't understand your model of crowding out. How exactly are Charles and his friends shouting everyone down? If everyone suspects 123 Fake AI to be bad, it will not be hard to get funding to set up a competing service.
In a centralised system Charles only has to convince the unified grantmakers that he is better, to stay on top. In a de-centralised system he has to convince everyone.
As far as I can tell, EA grantmakers and leadership are overly worried about crowding-out effects. They don't want to give money to a project if there might be a similar but better funding option later, because they think funding the first will crowd out the latter. But my experience from the other side (applying and talking to other applicants) is that the effect is the complete opposite. If you fund a type of project, others will see that this is the type of project that can be funded, and you'll get more similar applications.
OK, so either you have a service funded by EA money that claims to support EAs, or it's not funded by EA money and claims to support EAs.
(Off topic: if it's not funded by EA money, this is a yellow flag. There are many services, like coaching and mental health, targeting EAs that are valuable. But it's good to be skeptical of a commercial service that seems to try hard to aim at an EA audience: why isn't it successful in the real world?)
The premise of my statement is that you have an EA service funded by EA money. Thereās many issues if done poorly.
Often, the customers/decision makers (CEOs) are sitting ducks because they don't know the domain being offered (law/ML/IT/country or what have you) very well. At the same time, they aren't going to pass up a free or subsidized service paid for by EA money, even more so a service with the imprimatur of EA funds.
This subsidized service and money gives a toehold to bad actors. One can perform a lot of mischief and put down competitors with a little technical skill and a lot of brashness and art. (I want to show, not tell, but this is costly and I don't need to become a dark thought or something.)
I think there are subtler issues. Like, once you start off with a low funding environment and slowly raise funding bit by bit, until you get first entry, this is sort of perfectly searching the supply curve for adverse selection.
But really, your response/objection is about something else.
There's a lot of stuff going on, but I think it's fair to say I was really pointing out one pathology specifically (out of a rainbow of potential issues in just this one area). This wasn't some giant statement about the colour and shape of institutional space in general.
OK, my above comment is pretty badly written, and I'm not sure I'm right, and if I'm right I don't think I'm right for the reason stated. Linda may be right, but I don't agree.
In particular, I don't answer this:
"In a centralised system Charles only has to convince the unified grantmakers that he is better, to stay on top. In a de-centralised system he has to convince everyone."
I'm describing a situation of bad first movers and malign incentives, because this is what should be most concerning in general to EAs.
I think an answer is that, actually, to start something you shouldn't have to convince everyone in a decentralized system. That seems unworkable and won't happen. Instead, the likely outcome is that you only need to convince enough people to get seed funding.
This isn't good, because you have the same adverse selection or self-selection problems as in my comment above. I think that for many services, first-mover/lock-in effects are big, and (as mentioned, but not really explained) there are malign incentives, where people can entrench and principled founders aren't willing to wrestle in the mud (because their opportunity costs are higher or the adversarial skills are disjoint from good execution of the actual work).
(On phone again, I really need to change this wakeup routine!)
This was helpful. Alongside further consideration of risks, it has made me update to thinking about an intermediate approach. Will be interested to hear what people think!
This approach could be a platform like Kickstarter that is managed and moderated by EA funders. It is a natural home for projects that fall in the gap between those good enough for EA orgs to fund centrally and those judged best never to fund.
For instance, if you submit to FTX and they think you had a good idea but weren't quite sure enough that you could pull it off, or that it wasn't high value relative to competitors, then you get the opportunity to rework the application into a funding request for this platform.
It then lives there so that others can see it and support it if they want. Maybe your local community members know you better or there is a single large donor who is more sympathetic to your theory of change and together these are sufficient to give you some initial funding to test the idea.
Having such a platform therefore helps aggregate interesting projects and helps individuals and organisations to find and support them. It also reduces the effort involved in seeking funding, bringing it closer to submitting a single application.
It addresses several of the issues raised in the post and elsewhere without much additional risk and also provides a better way to do innovation competitions and store and leverage the ideas.
(I'm just writing fan fiction here, I don't know much about your project; this is like "discount Hacker News" level advice.)
This seems great and could work!
I guess an obvious issue is "adverse selection". You're getting proposals that couldn't make the cut, so I would be concerned about the quality of the pool of proposals.
At some point, average quality might be too low for viability, so the fund can't sustain itself or justify resources. Related considerations:
Adverse selection probably gets worse the more generous FTX or other funders get.
Related to the above, I guess it's relatively common for funders to be generous with smaller starter grants, so the niche might be particularly crowded.
Note that many grant makers ask for revise-and-resubmits; it's relationship focused, not grant focused.
Note that adverse selection often happens on complex, hard-to-see characteristics. E.g. people are hucksters asking money for a business, the cause area is implausible and this is camouflaged, or the founding team is bad or misguided and this isn't observable from their resumes.
Adverse selection can get to the point where it might be a stigma, e.g. good projects don't even want to be part of this fund.
This might be perfectly viable and I might be wrong. Another suggestion that would help is to have a different angle or source of projects besides those "not quite over the line" at FTX/Open Phil.
The chess analogy doesn't work. We don't have grantmaking experts in the same way we have chess experts.
Expertise is created by experience coupled with high-quality feedback. This type of expertise exists in chess, but not much in grantmaking. EA grantmaking is not old enough to have experts. This is especially true in longtermist grantmaking, where you don't get true feedback at all but have to rely on proxies.
I'm not saying that there are no differences in relevant skills. Being generally smart and having related knowledge is very useful in areas where no one is an expert. But the level of skill you seem to be claiming is not believable. And if they have convinced themselves of that level of superiority, that's evidence of groupthink.
Multiple grantmakers with different heuristics will help develop expertise, since this means we can compare different strategies, and sometimes a grantmaker gets to see what happens to projects they rejected that got funding somewhere else.
I agree, but this doesn't require that there be only a few funders.
Now we happen to be in a situation where almost all EA money comes from a few rich people. That's just how things are, whether I like it or not. It's their money to distribute as they want. Trying to argue that the EA billionaires should not have the right to direct their donations as they want would be pointless or counterproductive.
Also, I do think that these big donors are awesome people and that the world is better for their generosity. As far as I can see, they are spending their money on very important projects.
But they are not perfect! (This is not an attack!)
I think it would be very bad for EA to spread the idea that the large EA funders are somehow infallible, and that small donors should avoid making their own grant decisions.
Hi,
So I'll start off by being a jerk and say that there seem to be a lot of spelling issues in your comment.
These spelling mistakes are, like, sort of on the nose here, for this particular topic, and maybe why it got only a downvote.
What gets under my skin is that I suspect I put even less effort into writing and spelling than you, and that my ability isn't higher. I'm not a better writer or speller. I have tools or something, so my half-baked ideas come out pretty smooth.
Like, I'm mansplaining, but a trick is to write in, or copy into, Google Docs, which fixes up a lot of grammar and writing snags. Also, people are working on more general tools to help spread and replicate principled and clear thought (FTX idea #2 or something), but that takes more time.
This is good advice in the wrong place. DMs exist, dude.
If someone had access to DMs, what are possible reasons they would make this message public? Would a reasonable person who knows EA Forum norms expect writing this message to help them? What would the actual impact on public perception of the other person be? Why would someone do something like this rhetorically?
By the way, this person obviously speaks a second (or third) language. This is close to heroic. Me writing quickly in Swedish would be impossible.
So, if you made it this far, about grantmaking skill and concentration: I think I'm right, but I could be wrong, and it seems good to have the strongest form of this criticism.
But look, especially in this situation, it seems difficult to communicate and explain the topic of grantmaking skill. There are a lot of things going on and I'm also sort of dumb. If we don't agree on the premises, it's hard to make progress.
Because of this, I want to ask: is this really about grantmaking skill (which I think is extremely, comically demanding), or is it about perceived control, values or fairness, or something else?
Did you see MacKenzie Scott's "org" distributing $8.6B? She wrote a public letter on Medium explaining her views.
https://mackenzie-scott.medium.com/helping-any-of-us-can-help-us-all-f4c7487818d9
After reading this, it "feels" strange to imagine walking into Scott's office and telling her about democracy or something, even though I don't agree with all the funding choices.
But certainly this feeling isn't the same for EA. Why is this?
For EA grantmaking, what's the "promise", what is owed, and to whom? I honestly want to learn from you.
I agree that grantmaking is hard!
There are gaps in the system exactly because grantmaking is hard.
No, this is not about grantmaking skills, or at least not directly. But skill relative to the difficulty of the task is very relevant. Neither is it about fairness. Slowing down to worry about fairness within EA seems dumb.
This is about not spreading harmful, misleading information to applicants, and to other potential donors who are considering whether they want to make their own donation decisions or not.
I'm mostly just trying to say: can we please acknowledge that the system is not perfect? How do I say this without anyone feeling attacked?
Getting rejected hurts. If you tell everyone that EA has heaps of money and that the grantmakers are perfect, then it hurts about 100x more. This is a real cost. EA is losing members because of this, and almost no one talks about it. But it would not be so bad if we could just agree that grantmaking is hard, and that grantmakers therefore make mistakes sometimes.
https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection
My current understanding is that the biggest difficulty in grantmaking is information bandwidth. The text in the application is usually not nearly enough information, which is why grantmakers rely on other channels of information. This information is necessarily biased by their network; mainly, it is much easier to get funded if you know the right people. This is all fine! I want grantmakers to use all the information they can, even if this causes unfairness. All successful networks rely heavily on personal connections, because it's just more efficient. Personal trust beats formal systems every day. I just wish we could be honest about what is going on.
I don't expect rich people to delegate their funding decisions to unknown people outside their network, just for fairness. I don't think that would be a good idea.
But I do want EAs who happen to have some money to give, and happen to have significantly different networks compared to the super-donors, to be aware of this, to be aware of their comparative advantage in donating within their own network, instead of delegating this away to EA Funds.
What is owed is honesty. That is all.
It's not even the case that the grantmakers themselves exaggerate their own infallibility, at least not explicitly. But others do, which leads to the same problems. This makes it harder to answer "who owes what". Fortunately, I don't care much about blame. I just want to spread more accurate information, because I've seen the harm of the misinformation. That's why I decided to argue against your comment. Leaving those claims unchallenged would add to the problems I tried to explain here.
_____________________
Regarding spelling: I usually try harder. But this topic makes me very angry, so I tried to minimise the time I spent writing this. Sorry about that.
What you're saying makes sense and is important to me. In fact it's mainly what I care about.
In the comment that appeared above your first reply, I said the experts (take the billions of people in the world, then take the best in each domain) might be so good that it's difficult to communicate with or understand them.
So my claim was that it is unwieldy for a large group of people to act like grantmakers because of the nature of these experts. I left the door open to grantmakers being this good (because that seems positive, and it's strong to say they can't be?).
I think you believe I'm arguing that current grantmakers are unquestionable. That isn't what I wrote (you can look again at the top comment; I can't link, I'm typing on my phone and it's hard, seriously this physically hurts my thumbs).
In the other comment chain with you, you replied objecting to the idea of malign behaviour requiring centralization. Here, sort of like above, I find it tempting to see you pushing back against a broader point than I originally made.
You did this because it was important to you.
I'm not writing this comment, the previous comment, or any comment here to you because I want to argue. I didn't write it because I want to be polite, or even strictly because I had a "scout mentality". I literally don't have any attachments for or against what you said. I wanted to understand.
You expressed something important to you. I'm sorry you felt the need to write or defend with the effort and emotion you did.
The reason why this is valuable is that most of what I wrote and the top of what you wrote are just arguments.
We can take these arguments and knock them out of someone's hand, or give better new ones instead. It's just logic and reasoning.
It's the values that I care about and wanted to understand. The reasons why you wanted to talk and how you felt. (This wasn't supposed to be difficult or cause stress either.)
The end of the above comment included a statement about no funding, which suggested that my comment was entirely disinterested.
I've since learned (this morning) of additional funding and/or interest in funding, so this statement about no funding is no longer true. It was probably also misleading or unfair to have made it in the first place.
This wouldn't directly address your main concern, but I'd be really interested to see more full grant applications posted publicly (both successful and unsuccessful).
Manifold Markets (which I have a COI with) posted their FTX FF grant application here.
I want you to know there isn't some secret sauce or special formula in the words of a grant proposal itself. I don't think there is really anything canonically correct.
There might be one such grant application shared publicly, if that person ever gets around to it.
This grant is interesting because it was both successful and unsuccessful at the same time: it attracted interest but was rejected because of the founder, so the project might be "open".
One advantage of centralized grantmaking, though, is that it can convey more information, due to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only top venture capitalists.
Do you have any evidence for this? There's definitely evidence to suggest that decentralized decision-making can outperform centralized decision-making; for example, prediction markets and crowdsourcing. I think it's dangerous to automatically assume that all centralized thinking and institutions are better than decentralized thinking and institutions.
I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?
On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and whether it is sufficient for some level of efficiency.
Yep, there's definitely return persistence with top VCs, and the last time I checked I recall there was uncertainty around whether that was due to enhanced deal flow or actual better judgement.
I think that just taking the average is one decentralized approach, but certainly not representative of decentralized decision making systems and approaches as a whole.
Even the Good Judgement Project can be considered a decentralized system to identify good grantmakers. Identifying superforecasters requires having everyone make predictions and then finding the best forecasters among them, whereas I do not believe the route to becoming a funder/grantmaker is that democratized. For example, there's currently no way to measure what various people think of a grant proposal, fund it regardless of those opinions (there can be rules about not funding downside-risk stuff, of course), and then look back and see who was actually accurate.
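As a sketch of what such an accuracy track record could look like: each would-be grantmaker states a probability of success for the same set of funded proposals, and is later scored against realized outcomes. All names, probabilities, and outcomes below are made up, and Brier scoring is just one possible choice of metric.

```python
# Hypothetical sketch: score would-be grantmakers on predictions about
# funded projects, using Brier scores (lower = more accurate).

def brier_score(predictions, outcomes):
    """Mean squared error between predicted success probabilities
    and realized binary outcomes (1 = project succeeded)."""
    assert len(predictions) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Each evaluator gives P(success) for the same three funded proposals.
evaluator_predictions = {
    "alice": [0.9, 0.2, 0.7],
    "bob":   [0.5, 0.5, 0.5],  # maximally uncertain baseline
}
realized_outcomes = [1, 0, 1]  # which projects actually succeeded

scores = {name: brier_score(preds, realized_outcomes)
          for name, preds in evaluator_predictions.items()}

# The evaluator with the lowest Brier score was the best calibrated.
best = min(scores, key=scores.get)
print(best, scores)
```

The point is not the scoring rule itself, but that running something like this at scale would let the community discover good grant evaluators empirically, the way forecasting tournaments discover superforecasters.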
There haven't actually been real prediction markets implemented at a large scale (Kalshi aside, which is very new), so it's not clear whether that's true. Denise quotes Tetlock mentioning that objection here.
I also think that determining what to fund requires certain values and preferences, not necessarily assessing what's successful. So viewpoint diversity would be valuable. For example, before longtermism became mainstream in EA, it would have been better to allocate some fraction of funding towards that viewpoint, and likewise with other viewpoints that exist today. A test of who makes grants to successful individuals doesn't protect against funding the wrong aims altogether, or certain theories of change that turn out not to be that impactful. Centralized funding isn't representative of the diversity of community views and theories of change by default (I don't see funding orgs allocating some fraction of funding towards novel theories of change as a policy).
(On phone, early in the morning!)
Thanks for this.
I agree with nearly all of it.
I'd like us to have a community fundraising platform and a coexisting crowdfunding norm so that more good ideas get proposed and backed. Also, so that the community (including centralised funders) has a better read on what the community wants and why.
As an example, I have several desires for changes and innovations that I'd be happy to help fund. For instance, I would like to be able to read and refer to a really detailed assessment and guesstimate model for whether, when, and how best to decide between giving now and saving to give later. I'd help fund an effective bequest or volunteer pledge program. I know others who share my views. I'd like to know the collective interest in funding either of these. I'd also like centralised funders to know that information, as community willingness to fund something might make them decide to fund it in conjunction or instead. I don't currently have any easy way to do this.
I suspect there are many ideas in EA that would possibly attract crowdfunding but not centralised funding (at least initially), because many people in some part of the EA community have some individually small, but collectively important, need that funders don't realise.
With regard to Stefan's point, rather than reduce risk by reducing and centralising access to funding like we do now, we could reduce it in other ways. We could have community feedback. We could also have contingencies within grants (e.g., projects only funded after a risk assessment is conducted). We could have something modelled on ethics committees to assess which project types are higher risk.
As a community manager, I care a lot about maximizing the potential of any community member who is already deep enough in the EA engagement funnel to even be applying for a grant. In addition to the (very good) reasons in the OP's post, I want to see the grantmaking ecosystem become less centralized because:
1. Founders, scalers, and new projects are a bottleneck for EA and it is surprisingly hard to prompt people to take such a route. It seems to be a personality thing, so we should look twice before dismissing people who want to try.
2. Even if a project ends up underperforming, the opportunity to try scaling or starting up a project does give a dedicated and self-starting EA valuable experience. That innovator-EA may get more potential benefit from being funded than from a lot of other ways one might slowly gain experience. And funding the project should come with some potential positive impact, even if it isn't the most impactful and exciting project to many grantmakers.
Similar tactics exist in the movement already: EA/80K recommends people enter the for-profit world to gain experience, which comes with near-zero positive impact potential during that time. EA also subsidizes career trainings, workshops, and even advanced degrees toward filling bottlenecks of all types.
Therefore, I'd also advocate for being a bit more lax in funding/subsidizing relatively cheap new projects or scale-ups which can help dedicated innovator/self-starter EAs gain career experience and yield some altruistic wins. (I admit that some funders may already be thinking this way; I don't know!)
3. It is sad to me that dedicated EAs can essentially be blackballed in what I'd still like to think of as an egalitarian movement. I don't think it is anyone's fault (mad props to grantmakers and funders), but if the funding ecosystem evolves to be a bit more diverse, I think it would be good for the movement's impact and reputation, at least via the mental health and value-drift levels of EAs themselves. I'm not saying "fund everything that isn't risky", but being gatekept/blackballed is a uniquely frustrating experience that can sour one's involvement with the movement. Despite good intentions and a mature personality, it seems natural to stick more to the sidelines after being rejected the first time you stick your neck out, without any recommendations for where else to apply for funding. The more avenues the movement has, and the more obvious these avenues are, the less a rejection will feel like a blackball and prompt people to stop trying.
FWIW I really like the vetted kickstarter idea posted by Peter Slattery below. A bonus with an idea like that is that it will also keep E2Gers engaged. It is a lot more interesting than, say, donating to EAIF every year, and maybe they can get their warm fuzzies there too.
I agree with the issues related to centralized grantmaking flagged by this article! I wrote a bit about this back in 2018. To my understanding, EA has not been trying forms of decentralized/collective thinking, including decentralized grantmaking. I think that this is definitely a very promising area of inquiry worthy of further research and experimentation.
One example of the blind spots and differences in theories of change you mention is reflected in the results of the Future Fund's Project Ideas Competition. Highly upvoted ideas like "Investment strategies for longtermist funders" and "Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles", which came in at #3 and #4 respectively, did not win any awards or mention. This suggests that there is decent community interest and consensus around projects and project areas that aren't being funded, or funded sufficiently, by centralized entities. For those project areas, there are a decent number of people within EA, project leads, and smaller-scale funders (BERI, EA Funds, various HNWIs) that I am aware of that either believe such efforts are valuable and underfunded or have funded projects in those areas in the past. The specific grantmaking team at the Future Fund may have interests and theories of change that aren't the same as those of other grantmaking teams and EAs. It's definitely fine to have specialized interests and theories of change, and indeed everyone does, but the issue is that only one set of those gets to decide how to allocate all of the Future Fund's funding. As you point out, that's basically guaranteed to be suboptimal.
This is yet another reason why I'd love to see mini-EA-hotels in major cities around the world, as I described in this Twitter thread. Obviously, this wouldn't remove the bias towards people in major cities, but it would decrease geographical bias overall, and the perfect shouldn't be the enemy of the good.
I would be very interested in doing this in Copenhagen. If anybody going to EA Global has strong opinions on this, I would love to set up a meeting and chat about it.
I'll be at EA Global. Feel free to reach out to me.
I agree that centralised grant-making might mean that some promising projects are missed. But we're not solely interested in this? We're overall interested in:
Average cost-effectiveness per $ granted * number of $ we're able to grant
My intuition would be that the more decentralised the grant-making process, the more $ we're able to grant.
But this also requires us to invest more talent in grant-making, which means, in practice, fewer promising people applying for grants themselves, which might non-negligibly reduce average cost-effectiveness per $ granted.
Beyond the above consideration, it seems unclear whether decentralised grant-making would overall increase or decrease the average cost-effectiveness. Sure, fewer projects above the current average cost-effectiveness would slip through the net, but so too would fewer projects below the current average cost-effectiveness. So I'd expect these things to roughly balance each other out, UNLESS we're making a separate claim that the current grantmakers are making poor / miscalibrated decisions. But at that point, this is not an argument in favour of decentralising grant-making, but an argument in favour of replacing (or competing with) the current grantmakers.
So maybe overall, decentralising grant-making would trade an increase in $ we're able to grant for a small decrease in average cost-effectiveness of granted $.
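To make that tradeoff concrete, here is a toy calculation. All figures are made up purely for illustration; nothing here reflects actual grant totals or effectiveness estimates.

```python
# Hypothetical illustration of the tradeoff: decentralising grant-making
# might grant more total $ at a slightly lower average cost-effectiveness.

def total_impact(avg_cost_effectiveness, dollars_granted):
    # Total impact = average cost-effectiveness per $ * number of $ granted
    return avg_cost_effectiveness * dollars_granted

# Made-up scenario: decentralisation lowers average effectiveness by 10%
# but increases the amount of money that actually gets granted by 30%.
centralised = total_impact(avg_cost_effectiveness=1.0, dollars_granted=100e6)
decentralised = total_impact(avg_cost_effectiveness=0.9, dollars_granted=130e6)

# Under these made-up numbers, the extra $ granted outweighs the
# drop in average cost-effectiveness.
print(centralised, decentralised, decentralised > centralised)
```

Whether the tradeoff actually nets out positive depends entirely on the two quantities, which is exactly the empirical question the comment raises.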
(I felt pretty confused writing these comments and suspect I've missed many relevant considerations, but thought I'd flesh out and share my intuitive concerns with the central argument of this post, rather than just sit on them.)
[Quick thoughts whilst on mobile]
My takeaway: interested to hear what said grant makers think about this idea.
I find the arguments re: the efficient market hypothesis pretty compelling, but also find the arguments re: "inferential distance" and the unilateralist's curse compelling.
One last point: so far, I think one of EA's biggest achievements is its truly unusually good epistemics, and I'm particularly concerned about how small centralised groups could damage that, especially since more funding could exacerbate this effect.
Posted on my shortform, but thought it's worth putting here as well, given that I was inspired by this post to write it: