Ok, so either you have a service that is funded by EA money and claims to support EAs, or a service that claims to support EAs but isn't funded by EA money.
(Off topic: if it's not funded by EA money, that's a yellow flag. There are many valuable services, like coaching or mental health support, that target EAs. But it's good to be skeptical of a commercial service that seems to try hard to aim at an EA audience: why isn't it succeeding in the broader market?)
The premise of my statement is that you have an EA service funded by EA money. There are many issues if this is done poorly.
Often, the customers/decision makers (CEOs) are sitting ducks because they don't know the domain being offered (law/ML/IT/country expertise, or what have you) very well. At the same time, they aren't going to pass up a service that is free or subsidized by EA money, much less one carrying the imprimatur of EA funds.
This subsidized service and money give a toehold to bad actors. One can perform a lot of mischief and undercut competitors with a little technical skill and a lot of brashness and artifice. (I'd rather show than tell here, but that would be costly and I don't need to dwell on dark scenarios.)
I think there are subtler issues too. For example, if you start in a low-funding environment and slowly raise funding bit by bit until you get your first entrant, you are in effect walking up the supply curve in exactly the way that maximizes adverse selection: the first provider willing to accept the offer is the one with the lowest reservation price, who is often not the best one available.
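Here's a toy sketch of that mechanism (entirely my own construction, with made-up numbers; the key assumption is that a founder's reservation price correlates with their quality, because principled, competent founders have higher opportunity costs):

```python
import random

# Toy model: each would-be provider has a reservation price (the
# minimum funding at which they'd launch) and a quality. Assume the
# two are positively correlated: better founders have better outside
# options, so they need more to be tempted in.
random.seed(0)
providers = []
for _ in range(20):
    quality = random.uniform(0, 1)
    reservation = 50_000 + 150_000 * quality + random.gauss(0, 10_000)
    providers.append((reservation, quality))

# Funding starts low and is raised bit by bit; the first provider
# whose reservation price is met enters and captures first-mover
# lock-in.
offer = 40_000
entrant = None
while entrant is None:
    affordable = [p for p in providers if p[0] <= offer]
    if affordable:
        entrant = min(affordable)  # lowest reservation price enters first
    else:
        offer += 5_000  # raise the offer a bit and try again

print(f"offer when someone entered: ${offer:,.0f}")
print(f"entrant's quality:      {entrant[1]:.2f}")
print(f"best available quality: {max(q for _, q in providers):.2f}")
```

Under that correlation assumption, raising the offer gradually guarantees you fund the lowest-reservation (and so likely lowest-quality) provider, who then enjoys first-mover lock-in.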
But really, your response/objection is about something else.
There's a lot of stuff going on, but I think it's fair to say I was really pointing out one specific pathology (out of a rainbow of potential issues in just this one area). This wasn't some grand statement about the shape of institutional space in general.
Ok, my comment above is pretty badly written. I'm not sure I'm right, and even if I am right, I don't think it's for the reason stated. Linda may be right, but I don't agree.
In particular, I don’t answer this:
“In a centralised system Charles only have to convince the unified grantmakers that he is better, to stay on top. In a de-centralised system he has to convince everyone.”
I'm describing a situation of bad first movers and malign incentives, because that is what should be most concerning to EAs in general.
I think the answer is that, in a decentralized system, you shouldn't actually have to convince everyone to start something. That would be unworkable and won't happen. Instead, the likely outcome is that you only need to convince enough people to secure seed funding.
This isn't good, because you get the same adverse selection or self-selection problems as in my comment above. I think that for many services, first-mover/lock-in effects are big, and (as mentioned, but not really explained) there are malign incentives: people can entrench themselves, while principled founders aren't willing to wrestle in the mud (because their opportunity costs are higher, or because the adversarial skills involved are disjoint from the skills needed to execute the actual work well).
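To illustrate the "convince enough people" problem, here's another toy model (again entirely my own construction): suppose a project launches if it convinces at least k of n independent funders, each of whom sees the project's true quality plus their own estimation noise. The smaller k is, the more the funded pool is selected on lucky noise rather than on quality, a winner's-curse-style effect.

```python
import random

# Toy model: a project launches if at least k of N_FUNDERS independent
# funders' noisy quality estimates clear the bar. Fewer required yeses
# means more of the funded pool got in on estimation noise.
random.seed(0)
N_FUNDERS, BAR = 10, 1.0

def launches(true_quality: float, k: int) -> bool:
    """Launch if at least k funders' noisy estimates clear BAR."""
    yeses = sum(true_quality + random.gauss(0, 1) >= BAR
                for _ in range(N_FUNDERS))
    return yeses >= k

def mean_funded_quality(k: int, n_projects: int = 50_000) -> float:
    """Average true quality among projects that manage to launch."""
    funded = [q for q in (random.gauss(0, 1) for _ in range(n_projects))
              if launches(q, k)]
    return sum(funded) / len(funded)

for k in (1, 3, 8):
    print(f"need {k} yeses -> mean true quality of funded projects: "
          f"{mean_funded_quality(k):+.2f}")
```

The point isn't the specific numbers; it's that "only convince enough people" loosens the selection in exactly the direction that favors entrenchment-minded actors over principled ones.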