Suppose Charles He starts some meta EA service, say an AI consultancy, “123 Fake AI”.
Charles’s service is actually pretty bad: his methods are obscure, and everyone suspects him of gatekeeping and crowding out other AI consultancies. This squatting is harmful.
Charles sort of entrenches himself, rewards his friends, etc., so any normal individual raising issues is shouted down.
Someone has to put the kibosh on this, and a set of unified grantmakers could do it.
I don’t understand your model of crowding out. How exactly are Charles and his friends shouting everyone down? If everyone suspects 123 Fake AI to be bad, it will not be hard to get funding to set up a competing service.
In a centralised system, Charles only has to convince the unified grantmakers that he is better in order to stay on top. In a decentralised system he has to convince everyone.
As far as I can tell, EA grantmakers and leadership are overly worried about crowding-out effects. They don’t want to give money to a project if there might be a similar but better funding option later, because they think funding the first will crowd out the latter. But my experience from the other side (applying and talking to other applicants) is that the effect is the complete opposite. If you fund a type of project, others will see that this is the type of project that can be funded, and you’ll get more similar applications.
Ok, so either you have a service that is funded by EA money and claims to support EAs, or one that is not funded by EA money and claims to support EAs.
(Off topic: if it’s not funded by EA money, this is a yellow flag. There are many valuable services targeting EAs, like coaching and mental health support. But it’s good to be skeptical of a commercial service that seems to try hard to aim at an EA audience: why isn’t it successful in the real world?)
The premise of my statement is that you have an EA service funded by EA money. There are many issues if this is done poorly.
Often, the customers/decision-makers (CEOs) are sitting ducks because they don’t know the domain being offered (law/ML/IT/country expertise or what have you) very well. At the same time, they aren’t going to pass up a free or subsidized service paid for by EA money, much less a service with the imprimatur of EA funds.
This subsidized service and money gives a toehold to bad actors. One can perform a lot of mischief and put down competitors with a little technical skill and a lot of brashness and artifice. (I want to show, not tell, but this is costly and I don’t need to become a dark thought or something.)
I think there are subtler issues. For instance, if you start off in a low-funding environment and slowly raise funding bit by bit until you get a first entrant, you are, in effect, perfectly searching the supply curve for adverse selection.
But really, your response/objection is about something else.
There’s a lot going on here, but I think it’s fair to say I was really pointing out one specific pathology (out of a rainbow of potential issues in this one area alone). This wasn’t some giant statement about the color and shape of institutional space in general.
Ok, my above comment is pretty badly written. I’m not sure I’m right, and even if I am right, I don’t think it’s for the reason stated. Linda may be right, but I don’t agree.
In particular, I don’t answer this:
“In a centralised system, Charles only has to convince the unified grantmakers that he is better in order to stay on top. In a decentralised system he has to convince everyone.”
I’m describing a situation of bad first movers and malign incentives, because this is what should be most concerning to EAs in general.
I think one answer is that, to start something in a decentralized system, you shouldn’t actually have to convince everyone. That seems unworkable and won’t happen. Instead, the likely outcome is that you only need to convince enough people to get seed funding.
This isn’t good, because you have the same adverse-selection or self-selection problems as in my comment above. I think that for many services, first-mover/lock-in effects are big and (as mentioned, but not really explained) there are malign incentives: people can entrench themselves, and principled founders aren’t willing to wrestle in the mud (because their opportunity costs are higher, or because the adversarial skills are disjoint from good execution of the actual work).