This is an objectively good post and an objectively good idea, but the texture and content of some of the comments surprise me.
Like, some thoughts on the negative side:
There isn’t much sign of models/awareness of the difference between what it takes to get a $10M valuation and a $10B (or even $1B) valuation, and it’s the latter two that drive the post. This is a big deal because the nature/cause of success is probably totally different between these levels, and insanely smart, successful people might be capped at lower valuations. I suspect some comments here (that would update onlookers toward the billionaire idea) lack these models and an awareness of their implications.
To a first approximation, it would be good if EA had some program that could bring, say, 1/1000 of incubatees to this level of success. But the costs would be really high: it would consume 1,000 potential leaders in expectation, and many of these people would be hurt. The skill set is different, maybe even net negative for non-profits, because of the zero-sum, sharp-elbows sort of work required.
It’s worth considering whether the underlying sentiment that drives posts like “Rejection” and “Very hard to find a job” reflects realities that may not be a defect, but just the other side of the coin of worldviews/models of talent that are common in a “hits-based model” (which itself might be an overly generous characterization). People don’t talk about the Chesterton’s fence here: growing talent is hard not because people are snobby, but because of things like founder effects, and because quality/fit is deceptively hard/important and impractical to communicate to people who don’t have it. Yet this doesn’t even scratch the surface: I’ve seen leaders in plant-based foods describe, after an exit, distrusting and managing out virtuous early employees following raises/growth, since those employees no longer matched the calibre they needed.
This was written really quickly, and I stopped writing here because it’s unclear there’s any demand for this comment. But I think this comment should update readers toward normal, mainstream thinking about this.
On the positive side:
It seems like there is a real capability to build or support EAs through some program or informal process, because of the shared tech bent, worldview, and connections. This might be a large, legitimate advantage over other incubators.
I really agree EAs tend to be more conscientious and able. (However, the pool of EAs may change rapidly if becoming one is cheap, and then you’re back to gatekeeping again.)
Even, or especially, if you fully agree with a critique of the tech sector that says it is basically reinventing oligopoly and regulatory capture, this seems like a strong positive reason to “give EAs” these companies/slots.
Edit on Saturday, October 16, 2021: removed “Ummm, what?”, as per irving’s comment.
The rest of this comment is interesting, but opening with “Ummm, what?” seems bad, especially since it takes careful reading to know what you are specifically objecting to.
Edit: Thanks for fixing!