You seem to be wise and thoughtful, but I don’t understand the premise of this question or this belief:
One explanation for why entrepreneurship has high financial returns is information asymmetry/adverse selection: it's hard to tell whether someone is a good CEO other than by asking "does their business do well?", so their compensation is forced to be tied closely to business outcomes (rather than something like "does their manager think they are doing a good job?"). Business outcomes have high variance, and since people are risk-averse, expected returns need to be high in order to compensate entrepreneurs for bearing that variance.
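The variance-plus-risk-aversion step can be made concrete with a toy calculation. This is a minimal sketch with made-up numbers, assuming a log-utility agent choosing between a safe salary and a high-variance venture; the point is that even a venture with a *higher* expected payout can have a much lower certainty equivalent, so the market must offer a large risk premium.

```python
import math

# Toy illustration of the risk-premium argument; all numbers are hypothetical.
# A risk-averse agent (log utility) compares a safe salary to a venture whose
# payoff depends on high-variance business outcomes.

def log_utility(wealth: float) -> float:
    return math.log(wealth)

safe_salary = 100_000

# High-variance venture: 10% chance of a big win, 90% chance of a small payout.
p_win = 0.10
payout_win = 700_000
payout_lose = 40_000

expected_payout = p_win * payout_win + (1 - p_win) * payout_lose
expected_utility = (p_win * log_utility(payout_win)
                    + (1 - p_win) * log_utility(payout_lose))

# Certainty equivalent: the guaranteed amount giving the same expected utility.
certainty_equivalent = math.exp(expected_utility)

print(f"expected payout:       {expected_payout:,.0f}")   # 106,000
print(f"certainty equivalent:  {certainty_equivalent:,.0f}")
print(f"risk premium demanded: {expected_payout - certainty_equivalent:,.0f}")
```

Here the venture's expected payout (106k) already beats the safe salary, yet its certainty equivalent (~53k) is far below it, so a risk-averse agent still prefers the salary; observed entrepreneurial returns have to be high enough to close that gap.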
It’s not obvious to me that this information asymmetry exists in EA. E.g. I expect “Buck thinks X is a good group leader” correlates better with “X is a good group leader” than “Buck thinks X will be a successful startup” correlates with “X is a successful startup”.
But the reasoning [that existing orgs are often poor at rewarding/supporting/fostering new (extraordinary) leadership] seems to apply:
For example, GiveWell was a scrappy, somewhat polemical startup, and the work done there ultimately succeeded and created Open Phil and to a large degree, the present EA movement.
I don’t think any of this would have happened if Holden Karnofsky and Elie Hassenfeld had had to, say, go into Charity Navigator (or a dozen other low-wattage meta-charities that we will never hear of) and try to turn it around from the inside. Though this is somewhat vague, my models of orgs and the information I have from EA orgs do not suggest that they are any better at this (for mostly benign, natural reasons, e.g. “focus”).
It seems that the main value of entrepreneurship is the creation of new orgs to have impact, both from the founder and from the many other staff/participants in the org.
Typically (and maybe ideally) new orgs are in wholly new territory (underserved cause areas, untried interventions) and inherently there are fewer people who can evaluate them.
It seems like there might be a “market failure” in EA where people can reasonably be known to be doing good work, but are not compensated appropriately for their work, unless they do some weird bespoke thing.
It seems that the now-canonized posts (Really Hard and Denise Melchin’s account of her experiences) suggest exactly this has happened, extensively even. I think it is very likely that both of these people are not just useful, but are/could be highly impactful in EA, and do not “deserve” the experiences they described.
[I think the main counterpoint would be that only the top X% of people are eligible for EA work, for some quite small X. I’d be open to being convinced of this idea, but it doesn’t seem plausible/acceptable to me. Note that currently, there is a concerted effort to foster/sweep in very high-potential longtermists and high-potential EAs in early career stages, which seems invaluable and correct. My guess is that the concurrent theme of focusing on very high-quality candidates within this effort is related to experiences of the “production function” of work in AI/longtermism. However, I think this focus does not apply in the same way to other cause areas.]
Again, as mentioned at the top, I feel like I’ve missed the point and I’m just beating a caricature of what you said.
Thanks! “EA organizations are bad” is a reasonable answer.
(In contrast, “for-profit organizations are bad” doesn’t seem like a reasonable answer for why for-profit entrepreneurship exists, since adverse selection isn’t something better organizations can reasonably get around. It seems important to distinguish these, because it tells us how much effort EA organizations should put into supporting entrepreneur-type positions.)