These are good questions. So this is getting abstract and outside my competency; I’m basically LARPing now.
I wrote something below that seems not implausible.
“not funding projects that some threshold of grantmakers/other inputs consider harmful.”
EA has multiple grantmakers right now, and lots of people who are aware of various infohazards, and it doesn’t seem to me like the communication of private, sensitive information has been an issue. I’m sure there’s a threshold at which this would fail (perhaps if thousands of people were all involved in discussing private, sensitive information), but I don’t think we’re close to that threshold.
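As a toy illustration of the threshold rule quoted above, here is a minimal sketch; the function name, the boolean "flag" representation, and the numbers are all hypothetical, not any fund’s actual process:

```python
def should_fund(harm_flags: list[bool], veto_threshold: int) -> bool:
    """Toy k-of-n veto rule: fund a project unless at least
    `veto_threshold` of the consulted grantmakers/inputs flag it
    as harmful. (Hypothetical sketch for illustration only.)"""
    return sum(harm_flags) < veto_threshold

# Example: 2 of 5 consulted grantmakers flag the project as harmful;
# with a veto threshold of 3, the project would still be funded.
print(should_fund([True, True, False, False, False], veto_threshold=3))  # True
```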
I didn’t mean infohazards or downsides.
This is about intangible characteristics that seem really important in a grantee.
To give intuition, I guess one analogy is hiring. You wouldn’t hire someone off a LinkedIn profile alone; there’s just so much “latent” or unknown information and fit that matters. To solve this problem, people often build pretty deep networks and do reference checks.
This is important because if you went in big for another CSET, or something that had to start in the millions, you had better know the people and the space super well.
I think this means you need to communicate well with other grantmakers. For any given major grant, this might be a lot easier with 3-5 close colleagues than with a group of 100 people.
My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.
I guess this is fair: my answer is sort of kicking the can down the road. More grantmakers means more advisers too.
On the other hand, I think there are two other ways to look at this:
Let’s say you’re in AI safety or global health. There may only be, say, about 50 experts in malaria or agents/theorems/interpretability. So it doesn’t matter how large your team is; there’s no value in having 1,000 grantmakers if you only need to know 200 experts in the space.
Another point is that decentralization might make it harder to use experts, so you may not actually get the deep or close understanding needed to make use of them.
This answer is pretty abstract and speculative. I’m not sure I’m saying anything above noise.
Can you specify what “in design of EA and meta projects” means?
Let’s say Charles He starts some meta EA service, an AI consultancy called “123 Fake AI”.
Charles’s service is actually pretty bad: he obscures his methods, and everyone suspects him of gatekeeping and crowding out other AI consultancies. This squatting is harmful.
Charles sort of entrenches himself, rewards his friends, etc., so any normal individual raising issues is shouted down.
Someone has to put the kibosh on this, and a set of unified grantmakers could do it.
“This is about intangible characteristics that seem really important in a grantee. To give intuition, I guess one analogy is hiring. You wouldn’t hire someone off a LinkedIn profile alone; there’s just so much ‘latent’ or unknown information and fit that matters. To solve this problem, people often build pretty deep networks and do reference checks. This is important because if you went in big for another CSET, or something that had to start in the millions, you had better know the people and the space super well. I think this means you need to communicate well with other grantmakers. For any given major grant, this might be a lot easier with 3-5 close colleagues than with a group of 100 people.”
I see! Interestingly, there are organizations, like DAOs, that do hiring in a decentralized manner (lots of people deciding on one candidate). There probably isn’t much efficacy data on that compared to more centralized hiring, but it’s something I’d be interested to know.
I think there are ways to assess candidates that can be less centralized, like work samples rather than reference checks. I mainly use those when hiring, since some of the best correlates of future work performance seem to be present and past performance on related tasks.
If sensitive info matters, I can see smaller groups being more helpful; I guess I’m just not sure to what degree that’s necessary. Basically, I think public info can also carry pretty good signal.
“So it doesn’t matter how large your team is; there’s no value in having 1,000 grantmakers if you only need to know 200 experts in the space.”
That’s a good point! Hmm, I think that gets into interesting and harder-to-answer questions: whether experts are needed and how useful they are; whether having people ask a bunch of different subject-matter experts they are connected with (easier with a more decentralized model) is better than asking a few that a funder has vetted (common with centralized models); whether an expert interview that can be recorded and shared is as good as interviewing the expert yourself; etc. Some of this may vary field by field.
“Someone has to put the kibosh on this, and a set of unified grantmakers could do it.”
Is there a reason a decentralized network couldn’t also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.
“Is there a reason a decentralized network couldn’t also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.”
So this is borderline politics at this point, but I would expect that a malign agent could capture or entrench themselves in some sort of voting/decentralized network more easily than in any high-quality implementation of an EA grantmaking system (e.g., see politicians/posturing).
(So this is a little spicy, and there are maybe some inferential leaps here, but) a good argument for the need for centralization comes from what I think are very good inside views on ETH development.
In ETH development, it’s clear that decision-making about all important development and functionality is de facto centralized: it’s made by a central leadership, despite there technically being voting and decentralization in a mechanical sense.
That’s pretty telling, since Ethereum is like the canonical decentralized thing.
Your comments are really interesting and important.
I guess that public demand for my own personal comments is low, and I’ll probably no longer reply; feel free to PM!
“Let’s say Charles He starts some meta EA service, an AI consultancy called ‘123 Fake AI’. Charles’s service is actually pretty bad: he obscures his methods, and everyone suspects him of gatekeeping and crowding out other AI consultancies. This squatting is harmful. Charles sort of entrenches himself, rewards his friends, etc., so any normal individual raising issues is shouted down. Someone has to put the kibosh on this, and a set of unified grantmakers could do it.”
I don’t understand your model of crowding out. How exactly are Charles and his friends shouting everyone down? If everyone suspects 123 Fake AI to be bad, it will not be hard to get funding to set up a competing service.
In a centralised system, Charles only has to convince the unified grantmakers that he is better in order to stay on top. In a decentralised system, he has to convince everyone.
As far as I can tell, EA grantmakers and leadership are overly worried about crowding-out effects. They don’t want to give money to a project if there might be a similar but better funding option later, because they think funding the first will crowd out the latter. But my experience from the other side (applying and talking to other applicants) is that the effect is the complete opposite: if you fund a type of project, others will see that this is the type of project that can be funded, and you’ll get more similar applications.
OK, so either you have a service that is funded by EA money and claims to support EAs, or one that is not funded by EA money and claims to support EAs.
(Off topic: if it’s not funded by EA money, this is a yellow flag. There are many services, like coaching and mental health support targeting EAs, that are valuable. But it’s good to be skeptical of a commercial service that seems to try hard to aim at an EA audience: why isn’t it successful in the real world?)
The premise of my statement is that you have an EA service funded by EA money. There are many issues if this is done poorly.
Often, the customers/decision-makers (CEOs) are sitting ducks because they don’t know the domain being offered (law/ML/IT/country or what have you) very well. At the same time, they aren’t going to pass up a free or subsidized service paid for by EA money, much less a service with the imprimatur of EA funds.
This subsidized service and money gives bad actors a toehold. One can perform a lot of mischief and put down competitors with a little technical skill and a lot of brashness and art. (I want to show, not tell, but that is costly and I don’t need to become a dark thought or something.)
I think there are subtler issues. For example, if you start off in a low-funding environment and slowly raise funding bit by bit until you get a first entrant, you are sort of perfectly searching the supply curve for adverse selection: the first person willing to enter at a low subsidy is disproportionately likely to be the one with the lowest opportunity costs, not the highest quality.
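To make that intuition concrete, here is a toy simulation. All the numbers, and the assumption that quality and reservation subsidy are positively correlated (higher quality founders have higher opportunity costs), are mine for illustration, not anything measured:

```python
import random

random.seed(0)

# Toy model: each potential founder has a quality score in [0, 1] and a
# reservation subsidy (the minimum funding at which they would enter).
# Assumed for illustration: higher quality -> higher opportunity cost ->
# higher reservation subsidy, plus some noise.
founders = []
for _ in range(100):
    quality = random.random()
    reservation = 50_000 + 200_000 * quality + random.gauss(0, 20_000)
    founders.append((quality, reservation))

# The funding strategy being criticized: raise the offered subsidy bit
# by bit and fund the first founder willing to enter at the current offer.
offer = 0
first_entrant = None
while first_entrant is None:
    offer += 5_000
    takers = [f for f in founders if f[1] <= offer]
    if takers:
        first_entrant = min(takers, key=lambda f: f[1])

avg_quality = sum(q for q, _ in founders) / len(founders)
print(f"first entrant quality: {first_entrant[0]:.2f} "
      f"(pool average: {avg_quality:.2f}) at offer ${offer:,}")
```

Under these assumed parameters, the first taker is almost always well below the pool’s average quality; that is the “perfectly searching the supply curve” effect.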
But really, your response/objection is about something else.
There’s a lot going on, but I think it’s fair to say I was really pointing out one specific pathology (out of a rainbow of potential issues in just this one area). This wasn’t some grand statement about the color and shape of institutional space in general.
OK, my above comment is pretty badly written; I’m not sure I’m right, and if I am right, I don’t think it’s for the reason stated. Linda may be right, but I don’t agree.
In particular, I don’t answer this:
“In a centralised system, Charles only has to convince the unified grantmakers that he is better in order to stay on top. In a decentralised system, he has to convince everyone.”
I’m describing a situation of bad first movers and malign incentives, because this is what should be most concerning to EAs in general.
I think an answer is that actually, to start something, you shouldn’t have to convince everyone in a decentralized system. That seems unworkable and won’t happen. Instead, the likely outcome is that you only need to convince enough people to get seed funding.
This isn’t good, because you have the same adverse-selection or self-selection problems as in my comment above. I think that for many services, first-mover/lock-in effects are big, and (as mentioned, but not really explained) there are malign incentives: people can entrench themselves, and principled founders aren’t willing to wrestle in the mud (because their opportunity costs are higher, or because the adversarial skills are disjoint from good execution of the actual work).