I’m not sure this comment is helping, but I don’t agree with this post.
Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.
Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution; well-aligned, competent grantees are critical. Also, success of the project is only one consideration (deploying money has effects on the EA space and also on the outside world, maybe with lasting effects that can be hard to see).
Once you solve the above problems, which benefit from a small number of grant makers, there are classes of projects into which you can deploy a lot of money (AMF, big science grants, or CSET).
The above response doesn’t cover all kinds of EA projects, like the development of people, or nascent smaller projects that are important. To address this, outreach is a focus and grant makers are often generous with small grants.
Grant makers aren’t just passively gatekeeping money, saying yes or no to proposals. There’s an extremely important and demanding role that grant makers perform (one that might be unique to EA), where they develop whole new fields and programmes. So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.
The post doesn’t mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many times. (Of course this can be poorly executed, and capture etc. is possible, but someone I know is perceptive and hasn’t seen evidence of this.)
I’m not sure I’m wording this well, but inferential distance can be vast. I find it difficult to even “see” how better people are better than me. It’s hard to understand this; you sort of have to experience it. To give an analogy, an Elo 1800 chess player can beat me, and an Elo 2400 chess player can beat that person. In turn, an Elo 2800 player can effortlessly beat those people. When being outplayed in this way, communication is effectively impossible: I wouldn’t understand what is going on in a game between me and the Elo 1800 player, even if they explained everything, move by move. In the same way, the very best experts in a field have deep and broad understanding, so they can make large, correct inferential leaps very quickly. I think this should be appreciated. I don’t think it’s unreasonable that EA can get the very best experts in the world and that they have insights like this. This puts constraints on the nature and number of grant makers who need to communicate and coordinate with these experts, and grantmakers themselves may have these qualities.
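To put rough numbers on the analogy: the standard Elo expected-score formula implies these gaps are crushing. A minimal sketch in Python, using only the ratings from the analogy above:

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Rating gaps from the analogy above.
for r_a, r_b in [(1800, 2400), (2400, 2800)]:
    print(f"Elo {r_a} vs Elo {r_b}: expected score {elo_expected_score(r_a, r_b):.3f}")
# Elo 1800 vs Elo 2400: expected score 0.031
# Elo 2400 vs Elo 2800: expected score 0.091
```

So the weaker player scores roughly 3% and 9% respectively: close enough to “effortlessly beaten” for the point above.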
I think someone might see a large amount of money and see a small amount of people deciding where it goes. They might feel that seems wrong.
But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and they have specifically set up these systems. We’ve seen two instances of this now. The reason why there is money at all is because these structures have been set up successfully.
I’m qualified and well positioned to give the perspective above. I’m someone who has benefited from and gotten direct insights from grant makers, and I have probably seen large funding offered. At the same time, I don’t have this money. Due to the consequences of my actions, I’ve removed myself from the EA projects gene pool. I’m sort of an EA Darwin award holder. So I have no personal financial/project motivation to defend this thing if I thought it was bad.
Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.
There are ways to design centralized, yet decentralized grantmaking programs. For example, regranting programs that are subject to restrictions, like not funding projects that some threshold of grantmakers/other inputs consider harmful.
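As a minimal sketch of the kind of restriction described above: each regrantor decides independently, but the wider panel’s harm flags can veto. All names and the 30% veto threshold here are made-up illustration parameters, not any real program’s rules:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    harm_flags: int  # number of panel reviewers who flagged this as harmful

def regrantor_can_fund(proposal: Proposal, panel_size: int,
                       veto_fraction: float = 0.3) -> bool:
    """Decentralized piece: any regrantor may fund what they like.
    Centralized piece: the wider panel can veto via harm flags."""
    return proposal.harm_flags < veto_fraction * panel_size

# Example with a hypothetical 10-person review panel (3+ flags block a grant).
print(regrantor_can_fund(Proposal("field-building fellowship", harm_flags=1), panel_size=10))  # True
print(regrantor_can_fund(Proposal("risky outreach stunt", harm_flags=4), panel_size=10))       # False
```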
Can you specify what “in design of EA and meta projects” means?
Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution; well-aligned, competent grantees are critical. Also, success of the project is only one consideration (deploying money has effects on the EA space and also on the outside world, maybe with lasting effects that can be hard to see).
EA has multiple grantmakers right now, and lots of people that are aware of various infohazards, and it doesn’t seem to me like the communication of private, sensitive information has been an issue. I’m sure there’s a threshold at which this would fail (perhaps if thousands of people were all involved with discussing private, sensitive information) but I don’t think we’re close to that threshold.
I think the perception of who is a well-aligned, competent grantee can vary by person. That is all the more reason to have more decentralization in grantmaking. The forecasting of effects can also vary by person, and having this be centralized may lead to failures to forecast certain impacts accurately (or at all).
The post doesn’t mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many times. (Of course this can be poorly executed, and capture etc. is possible, but someone I know is perceptive and hasn’t seen evidence of this.)
My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.
But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and they have specifically set up these systems. We’ve seen two instances of this now. The reason why there is money at all is because these structures have been set up successfully.
There have also been large amounts of funds granted with decentralized grantmaking; see Gitcoin’s funding of public goods as an example.
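For context, Gitcoin’s matching rounds use quadratic funding, where a project’s match grows with the number of distinct contributors rather than the size of any single contribution. A rough sketch of the textbook formula (a real deployment like Gitcoin’s adds matching caps and sybil defenses on top):

```python
import math

def quadratic_match(contributions: list[float]) -> float:
    """Textbook quadratic-funding match for one project:
    (sum of square roots of contributions)^2 minus the raw total."""
    raw_total = sum(contributions)
    return sum(math.sqrt(c) for c in contributions) ** 2 - raw_total

# 100 donors giving $1 each attract far more matching than 1 donor giving $100.
print(quadratic_match([1.0] * 100))  # 9900.0
print(quadratic_match([100.0]))      # 0.0
```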
These are good questions.
So this is getting abstract and outside my competency; I’m basically LARPing now.
I wrote something below that seems not implausible.
not funding projects that some threshold of grantmakers/other inputs consider harmful.
EA has multiple grantmakers right now, and lots of people that are aware of various infohazards, and it doesn’t seem to me like the communication of private, sensitive information has been an issue. I’m sure there’s a threshold at which this would fail (perhaps if thousands of people were all involved with discussing private, sensitive information) but I don’t think we’re close to that threshold.
I didn’t mean infohazards or downsides.
This is about intangible characteristics that seem really important in a grantee.
To give intuition, I guess one analogy is hiring. You wouldn’t hire someone off a LinkedIn profile; there’s just so much “latent” or unknown information and fit that matters. To solve this problem, people often have pretty deep networks and do reference checks on people.
This is important because if you went in big for another CSET, or something that had to start in the millions, you’d better know the people and the space super well.
I think this means you need to communicate well with other grant makers. For any given major grant, this might be a lot easier with 3-5 close colleagues, versus a group of 100 people.
My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.
I guess this is fair; my answer is sort of kicking the can. More grant makers means more advisers too.
On the other hand, I think there are two other ways to look at this:
Let’s say you’re in AI safety or global health.
There may only be, say, about 50 experts in malaria or agents/theorems/interpretability. So it doesn’t matter how large your team is; there’s no value in having 1,000 grantmakers if you only need to know 200 experts in the space.
Another point is that decentralization might make it harder to use experts, so you may not actually get the deep, close understanding needed to make use of them.
This answer is pretty abstract and speculative. I’m not sure I’m saying anything above noise.
Can you specify what “in design of EA and meta projects” means?
Let’s say Charles He starts some meta EA service, say an AI consultancy, “123 Fake AI”.
Charles’s service is actually pretty bad: he obscures his methods, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.
Charles sort of entrenches, rewards his friends, etc., so any normal individual raising issues is shouted down.
Someone has to put the kibosh on this, and a set of unified grant makers could do it.
This is about intangible characteristics that seem really important in a grantee.
To give intuition, I guess one analogy is hiring. You wouldn’t hire someone off a LinkedIn profile; there’s just so much “latent” or unknown information and fit that matters. To solve this problem, people often have pretty deep networks and do reference checks on people.
This is important because if you went in big for another CSET, or something that had to start in the millions, you’d better know the people and the space super well.
I think this means you need to communicate well with other grant makers. For any given major grant, this might be a lot easier with 3-5 close colleagues, versus a group of 100 people.
I see! Interestingly there are organizations, like DAOs, that do hiring in a decentralized manner (lots of people deciding on one candidate). There probably isn’t much efficacy data on that compared to more centralized hiring, but it’s something I’m interested in knowing.
I think there are ways to assess candidates that can be less centralized, like work samples, rather than reference checks. I mainly use that when hiring, given it seems some of the best correlates of future work performance are present and past work performance on related tasks.
If sensitive info matters, I can see smaller groups being more helpful, I guess I’m not sure the degree to which that’s necessary. Basically I think that public info can also have pretty good signal.
So it doesn’t matter how large your team is; there’s no value in having 1,000 grantmakers if you only need to know 200 experts in the space.
That’s a good point! Hmm, I think that does get into interesting and harder-to-answer questions, like whether experts are needed and how useful they are; whether having people ask a bunch of different subject-matter experts that they are connected with (easier with a more decentralized model) is better than asking a few that a funder has vetted (common with centralized models); whether an expert interview that can be recorded and shared is as good as interviewing the expert yourself; etc. Some of this may be field-by-field.
Someone has to put the kibosh on this, and a set of unified grant makers could do it.
Is there a reason a decentralized network couldn’t also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.
Is there a reason a decentralized network couldn’t also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.
So this is borderline politics at this point, but I would expect that a malign agent could capture or entrench in some sort of voting/decentralized network more easily than in any high-quality implementation of an EA grant making system (e.g., see politicians/posturing).
(This is a little spicy, and there are maybe some inferential leaps here, but) a good argument for the need for centralization comes from what I think are very good inside views on ETH development.
In ETH development, centralized decision-making de facto occurs for all important development and functionality. Decisions are made by a central leadership, despite there technically being voting and decentralization in a mechanical sense.
That’s pretty telling, since this is like the canonical decentralized thing.
Your comments are really interesting and important.
I guess that public demand for my own personal comments is low, so I’ll probably no longer reply; feel free to PM!
Let’s say Charles He starts some meta EA service, say an AI consultancy, “123 Fake AI”.
Charles’s service is actually pretty bad: he obscures his methods, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.
Charles sort of entrenches, rewards his friends, etc., so any normal individual raising issues is shouted down.
Someone has to put the kibosh on this, and a set of unified grant makers could do it.
I don’t understand your model of crowding out. How exactly are Charles and his friends shouting everyone down? If everyone suspects 123 Fake AI to be bad, it will not be hard to get funding to set up a competing service.
In a centralised system Charles only has to convince the unified grantmakers that he is better, to stay on top. In a decentralised system he has to convince everyone.
As far as I can tell, EA grantmakers and leadership are overly worried about crowding-out effects. They don’t want to give money to a project if there might be a similar but better funding option later, because they think funding the first will crowd out the later one. But my experience from the other side (applying and talking to other applicants) is that the effect is the complete opposite. If you fund a type of project, others will see that this is the type of project that can be funded, and you’ll get more similar applications.
Ok, so either you have a service funded by EA money that claims to support EAs, or a service not funded by EA money that claims to support EAs.
(Off topic: if it’s not funded by EA money, this is a yellow flag. There are many valuable services, like coaching and mental health support, targeting EAs. But it’s good to be skeptical of a commercial service that seems to try hard to aim at an EA audience—why isn’t it successful in the real world?)
The premise of my statement is that you have an EA service funded by EA money. There are many issues if this is done poorly.
Often, the customers/decision makers (CEOs) are sitting ducks because they don’t know the domain being offered (law/ML/IT/country or what have you) very well. At the same time, they aren’t going to pass up a free or subsidized service paid for by EA money—even more so a service with the imprimatur of EA funds.
This subsidized service and money gives a toehold to bad actors. One can perform a lot of mischief and put down competitors with a little technical skill and a lot of brashness and art. (I want to show, not tell, but this is costly and I don’t need to become a dark thought or something.)
I think there are subtler issues. Like, if you start off in a low-funding environment and slowly raise funding bit by bit until you get a first entrant, this is sort of perfectly searching the supply curve for adverse selection.
But really, your response/objection is about something else.
There’s a lot of stuff going on but I think it’s fair to say I was really pointing out one pathology specifically (of a rainbow of potential issues just on this one area). This wasn’t some giant statement about the color and shape of institutional space in general.
Ok, my above comment is pretty badly written, and I’m not sure I’m right; and if I’m right, I don’t think I’m right for the reason stated. Linda may be right, but I don’t agree.
In particular, I don’t answer this:
“In a centralised system Charles only has to convince the unified grantmakers that he is better, to stay on top. In a decentralised system he has to convince everyone.”
I’m describing a situation of bad first movers and malign incentives, because this is what should be most concerning in general to EAs.
I think an answer is that actually, to start something, you shouldn’t have to convince everyone in a decentralized system. That seems unworkable and won’t happen. Instead, the likely outcome is that you only need to convince enough people to get seed funding.
This isn’t good because you have the same adverse selection or self-selection problems as in my comment above. I think that for many services, first-mover/lock-in effects are big, and (as mentioned, but not really explained) there are malign incentives, where people can entrench and principled founders aren’t willing to wrestle in the mud (because their opportunity costs are higher or the adversarial skills are disjoint from good execution of the actual work).
(on phone again—I really need to change this wakeup routine 😄!)
This was helpful. Alongside further consideration of risks, it has made me update to thinking about an intermediate approach. Will be interested to hear what people think!
This approach could be a platform like Kickstarter that is managed and moderated by EA funders. It would be a home for projects in the gap between those good enough to be funded centrally by EA orgs and those judged best never to fund.
For instance, if you submit to FTX and they think you had a good idea but weren’t quite sure that you could pull it off, or that it wasn’t high value relative to competitors, then you get the opportunity to rework the application into a funding request for this platform.
It then lives there so that others can see it and support it if they want. Maybe your local community members know you better, or there is a single large donor who is more sympathetic to your theory of change, and together these are sufficient to give you some initial funding to test the idea.
Having such a platform therefore helps aggregate interesting projects and helps individuals and organisations find and support them. It also reduces the effort involved in seeking funding by making it closer to submitting a single application.
It addresses several of the issues raised in the post and elsewhere without much additional risk, and it provides a better way to run innovation competitions and to store and leverage the ideas.
(I’m just writing fan fiction here; I don’t know much about your project. This is like “discount Hacker News” level advice.)
This seems great and could work!
I guess an obvious issue is “adverse selection”. You’re getting proposals that couldn’t make the cut, so I would be concerned about the quality of the pool of proposals.
At some point, average quality might be too low for viability, so the fund can’t sustain itself or justify resources. Related considerations:
Adverse selection probably gets worse the more generous FTX or other funders get.
Related to the above, I guess it’s relatively common for funders to be generous with smaller starter grants, so the niche might be particularly crowded.
Note that many grant makers ask for revise-and-resubmits; the process is relationship focused, not grant focused.
Note that adverse selection often happens on complex, hard-to-see characteristics. E.g., the people are hucksters asking for money for a business, the cause area is implausible and this is camouflaged, or the founding team is bad or misguided and this isn’t observable from their resumes.
Adverse selection can get to the point of stigma, e.g. good projects don’t even want to be part of this fund.
This might be perfectly viable and I’m wrong. Another suggestion that would help is to have a different angle or source of projects besides those “not quite over the line” at FTX/Open Phil.
The chess analogy doesn’t work. We don’t have grant experts in the same way we have chess experts.
Expertise is created by experience coupled with high-quality feedback. This type of expertise exists in chess, but not much in grantmaking. EA grantmaking is not old enough to have experts. This is extra true in longtermist grantmaking, where you don’t get true feedback at all but have to rely on proxies.
I’m not saying that there are no differences in relevant skills. Being generally smart and having related knowledge is very useful in areas where no one is an expert. But the level of skill you seem to be claiming is not believable. And if they have convinced themselves of that level of superiority, that’s evidence of groupthink.
Multiple grantmakers with different heuristics will help develop expertise, since this means we can compare different strategies, and sometimes a grantmaker gets to see what happens to projects they rejected that got funding somewhere else.
So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.
I agree, but this doesn’t require that there be only a few funders.
Now we happen to be in a situation where almost all EA money comes from a few rich people. That’s just how things are, whether I like it or not. It’s their money to distribute as they want. Trying to argue that the EA billionaires should not have the right to direct their donations as they want would be pointless or counterproductive.
Also, I do think that these big donors are awesome people and that the world is better for their generosity. As far as I can see, they are spending their money on very important projects.
But they are not perfect! (This is not an attack!)
I think it would be very bad for EA to spread the idea that the large EA funders are somehow infallible, and that small donors should avoid making their own grant decisions.
Hi,
So I’ll start off by being a jerk and say that there seem to be a lot of spelling issues going on in your comment.
These spelling boo-boos are, like, sort of on the nose for this particular topic, and maybe why it got only a downvote.
What gets under my skin is that I suspect I put even less effort into writing and spelling than you, and that my ability isn’t higher. I’m not a better writer or speller. I have tools or something, so my half-baked ideas come out pretty smooth.
Like, I’m mansplaining, but a trick is to try writing in, or copying into, Google Docs, which fixes up a lot of grammar and writing snags. Also, people are working on more general tools to help spread and replicate principled and clear thought (FTX idea #2 or something), but that takes more time.
This is good advice in the wrong place. DMs exist dude.
If someone had access to DMs, what are possible reasons they would make this message public? Would a reasonable person who knows EA Forum norms expect writing this message to help them? What would the actual impact on public perception of the other person be? Why would someone do something like this rhetorically?
By the way, this person obviously speaks a second (or third) language. This is close to heroic. Me writing quickly in Swedish would be impossible.
So, if you made it this far, about grantmaking skill and concentration: I think I’m right, but I could be wrong, and it seems good to have the strongest form of this criticism.
But look, especially in this situation, it seems difficult to communicate and explain the topic of grant making skill. There are a lot of things going on, and I’m also sort of dumb. If we don’t agree on the premises, it’s hard to make progress.
Because of this, I want to ask: is this really about grant making skill (which I think is extremely, comically demanding), or is it about perceived control, values, fairness, or something else?
Did you see MacKenzie Scott’s “org” distributing $8.6B? She wrote a public letter on Medium explaining her views.
https://mackenzie-scott.medium.com/helping-any-of-us-can-help-us-all-f4c7487818d9
After reading this, it “feels” strange to imagine walking into Scott’s office and telling her about democracy or something, even though I don’t agree with all the funding choices.
But certainly this feeling isn’t the same for EA. Why is this?
For EA grantmaking, what’s the “promise”, what is owed, and to whom? I honestly want to learn from you.
I agree that grantmaking is hard!
There are gaps in the system exactly because grantmaking is hard.
No, this is not about grantmaking skills, or at least not directly. But skill in relation to the task difficulty is very relevant. Neither is it about fairness. Slowing down to worry about fairness within EA seems dumb.
This is about not spreading harmful, misleading information to applicants, and to other potential donors who are considering whether or not to make their own donation decisions.
I’m mostly just trying to say: can we please acknowledge that the system is not perfect? How do I say this without anyone feeling attacked?
Getting rejected hurts. If you tell everyone that EA has heaps of money and that the grantmakers are perfect, then it hurts about 100x more. This is a real cost. EA is losing members because of this, and almost no one talks about it. But it would not be so bad if we could just agree that grantmaking is hard, and that grantmakers therefore make mistakes sometimes.
https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection
My current understanding is that the biggest difficulty in grantmaking is information bandwidth. The text in the application is usually not nearly enough information, which is why grantmakers rely on other channels of information. This information is necessarily biased by their network; mainly, it is much easier to get funded if you know the right people. This is all fine! I want grantmakers to use all the information they can, even if this causes unfairness. All successful networks rely heavily on personal connections, because it’s just more efficient. Personal trust beats formal systems every day. I just wish we could be honest about what is going on.
I don’t expect rich people to delegate their funding decisions to unknown people outside their network just for fairness. I don’t think that would be a good idea.
But I do want EAs who happen to have some money to give, and who happen to have significantly different networks compared to the super donors, to be aware of this: to be aware of their comparative advantage in donating within their own network, instead of delegating this away to EA Funds.
What is owed is honesty. That is all.
It’s not even the case that the grant makers themselves exaggerate their own infallibility, at least not explicitly. But others do, which leads to the same problems. This makes it harder to answer “who owes what”. Fortunately, I don’t care much about blame. I just want to spread more accurate information, because I’ve seen the harm of the misinformation. That’s why I decided to argue against your comment. Leaving those claims unchallenged would add to the problems I tried to explain here.
_____________________
Regarding spelling: I usually try harder, but this topic makes me very angry, so I tried to minimise the time I spent writing this. Sorry about that.
What you’re saying makes sense and is important to me. In fact it’s mainly what I care about.
In the comment that appeared above your first reply, I said the experts (like, take the billions of people in the world, and then take the best in each domain) might be so good that it’s difficult to communicate with or understand them.
So my claim was that it is unwieldy for a large group of people to act as grant makers because of the nature of these experts. I left the door open to grantmakers being this good (because that seems positive, and it’s strong to say they can’t be?).
I think you believe I’m arguing that current grantmakers are unquestionable. That isn’t what I wrote (you can look again at the top comment; I can’t link, I’m typing on my phone and it’s hard, seriously, this physically hurts my thumbs).
In the other comment chain with you, you replied objecting to the idea of malign behaviour requiring centralization. Here, sort of like above, I find it tempting to see you pushing back against a broader point than I originally made.
You did this because it was important to you.
I’m not writing this comment, the previous comment, or any comment here to you because I want to argue. I didn’t write it because I want to be polite, or even strictly because I had a “scout mentality”. I literally don’t have any attachment for or against what you said. I wanted to understand.
You expressed something important to you. I’m sorry you felt the need to write or defend with the effort and emotion you did.
The reason why this is valuable is that most of what I wrote and the top of what you wrote are just arguments.
We can take these arguments and knock them out of someone’s hand, or give better new ones instead. It’s just logic and reasoning.
It’s the values that I care about and wanted to understand. The reasons why you wanted to talk and how you felt. (This wasn’t supposed to be difficult or cause stress either).
The end of the above comment included a statement about no funding, which suggested that my comment was entirely disinterested.
I’ve since learned (this morning) of additional funding and/or interest in funding, and this statement about no funding is no longer true. It was probably also misleading or unfair to have made it in the first place.