Thanks for writing this up. At the risk of asking an obvious question, I'm interested in why you think entrepreneurship is valuable in EA.
One explanation for why entrepreneurship has high financial returns is information asymmetry/adverse selection: it's hard to tell if someone is a good CEO apart from "does their business do well", so they are forced to have their compensation tied closely to business outcomes (instead of something like "does their manager think they are doing a good job"), which have high variance; as a result of this variance and people being risk-averse, expected returns need to be high in order to compensate these entrepreneurs.
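To make that risk-premium step concrete, here is a minimal numeric sketch. The log-utility model and the specific payoffs are my own illustrative assumptions, not anything claimed above: even when outcome-linked pay has a much higher expected value than a fixed salary, a risk-averse person can still value it less, so expected returns have to be pushed higher still before they are indifferent.

```python
# A minimal sketch (toy numbers and log utility are assumptions for
# illustration only): tying pay to high-variance business outcomes forces
# the *expected* payoff above a comparable fixed salary once the recipient
# is risk-averse.
import numpy as np

def certainty_equivalent(payoffs, probs):
    """Sure amount a log-utility agent values equally to the gamble."""
    expected_utility = np.dot(probs, np.log(payoffs))
    return np.exp(expected_utility)

safe_salary = 150_000  # fixed pay under "manager thinks they're doing well"

# Entrepreneur-style pay: usually modest, occasionally very large.
payoffs = np.array([30_000.0, 150_000.0, 2_000_000.0])
probs = np.array([0.6, 0.3, 0.1])

expected_pay = np.dot(probs, payoffs)      # ~263,000: well above the salary
ce = certainty_equivalent(payoffs, probs)  # ~74,000: still below the salary

print(f"safe salary:          {safe_salary:>12,.0f}")
print(f"expected risky pay:   {expected_pay:>12,.0f}")
print(f"certainty equivalent: {ce:>12,.0f}")
# Despite the much higher mean, the risk-averse agent prefers the salary,
# so outcome-linked compensation must offer even higher expected returns.
```

The exact numbers don't matter; the point is just that variance plus risk aversion mechanically generates a required premium.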
It's not obvious to me that this information asymmetry exists in EA. E.g. I expect "Buck thinks X is a good group leader" correlates better with "X is a good group leader" than "Buck thinks X will be a successful startup" correlates with "X is a successful startup".
It seems like there might be a "market failure" in EA where people can reasonably be known to be doing good work, but are not compensated appropriately for their work, unless they do some weird bespoke thing.
You seem to be wise and thoughtful, but I don't understand the premise of this question or this belief:
One explanation for why entrepreneurship has high financial returns is information asymmetry/adverse selection: it's hard to tell if someone is a good CEO apart from "does their business do well", so they are forced to have their compensation tied closely to business outcomes (instead of something like "does their manager think they are doing a good job"), which have high variance; as a result of this variance and people being risk-averse, expected returns need to be high in order to compensate these entrepreneurs.
It's not obvious to me that this information asymmetry exists in EA. E.g. I expect "Buck thinks X is a good group leader" correlates better with "X is a good group leader" than "Buck thinks X will be a successful startup" correlates with "X is a successful startup".
But the reasoning [that existing orgs are often poor at rewarding/supporting/fostering new (extraordinary) leadership] seems to apply:
For example, GiveWell was a scrappy, somewhat polemical startup, and the work done there ultimately succeeded, creating Open Phil and, to a large degree, the present EA movement.
I don't think any of this would have happened if Holden Karnofsky and Elie Hassenfeld had had to, say, go into Charity Navigator (or a dozen other low-wattage meta-charities that we will never hear of) and try to turn it around from the inside. While I'm being somewhat vague, my models of orgs and the information I have from EA orgs do not suggest that they are any better at this (for mostly benign, natural reasons, e.g. "focus").
It seems that the main value of entrepreneurship is the creation of new orgs that have impact, both through the founder and through the many other staff/participants in the org.
Typically (and maybe ideally) new orgs are in wholly new territory (underserved cause areas, untried interventions) and inherently there are fewer people who can evaluate them.
It seems like there might be a "market failure" in EA where people can reasonably be known to be doing good work, but are not compensated appropriately for their work, unless they do some weird bespoke thing.
It seems that the now-canonized posts ("Really Hard" and Denise Melchin's account of her experiences) suggest exactly this has happened, extensively even. I think it is very likely that both of these people are not just useful, but are or could be highly impactful in EA, and do not "deserve" the experiences they described.
[I think the main counterpoint would be that only the top X% of people are eligible for EA work, or something like that, and X% is quite small. I would be willing to try to understand this idea, but it doesn't seem plausible/acceptable to me. Note that currently there is a concerted effort to foster/sweep in very high-potential longtermists and high-potential EAs in early career stages, which seems invaluable and correct. In this effort, my guess is that the concurrent theme of focusing on very high-quality candidates is related to experiences of the "production function" of work in AI/longtermism. However, I think this focus does not apply in the same way to other cause areas.]
Again, as mentioned at the top, I feel like I've missed the point and I'm just beating a caricature of what you said.
Thanks! "EA organizations are bad" is a reasonable answer.
(In contrast, "for-profit organizations are bad" doesn't seem like a reasonable answer for why for-profit entrepreneurship exists, since adverse selection isn't something better organizations can reasonably get around. It seems important to distinguish these, because it tells us how much effort EA organizations should put into supporting entrepreneur-type positions.)