A point about hiring and grantmaking, that may already be conventional wisdom:
If you’re hiring for highly autonomous roles at a non-profit, or looking for non-profit founders to fund, then advice derived from the startup world is often going to overweight the importance of entrepreneurialism relative to self-skepticism and reflectiveness.[1]
Non-profits, particularly non-profits with longtermist missions, are typically trying to maximize something that is way more illegible than time-discounted future profits. To give a specific example: I think it’s way harder for an organization like the Centre for Effective Altruism to tell if it’s on the right track than it is for a company like Zoom to tell if it’s on the right track. CEA can track certain specific metrics (e.g. the number of “new connections” reported at each conference it organizes), but it will often be ambiguous how strongly these metrics reflect positive impact—and there will also always be a risk that various negative indirect effects aren’t being captured by the key metrics being used. In some cases, evaluating the expected impact of work will also require making assumptions about how the world will evolve over the next couple decades (e.g. assumptions about how pressing risks from AI are).
I think this means that it’s especially important for these non-profits to employ and be headed by people who are self-skeptical and reflect deeply on decisions. Being entrepreneurial, having a bias toward action, and so on, don’t count for much if the organization isn’t pointed in the right direction. As Ozzie Gooen has pointed out, there are many examples of massive and superficially successful initiatives (headed by very driven and entrepreneurial people) whose theories of impact don’t stand up to scrutiny.
A specific example from Ozzie’s post: SpaceX is a massive and extraordinarily impressive venture that was (at least according to Elon Musk) largely started to help reduce the chance of human extinction, by helping humanity become a multi-planetary species earlier than it otherwise would. But I think it’s hard to see how their work reduces extinction risk very much. If you’re worried about the climate effects of nuclear war, for example, then it seems important to remember that post-nuclear-war Earth would still have a much more hospitable climate than Mars. It’s pretty hard to imagine a disaster scenario where building Martian colonies would be better than (for example) building some bunkers on Earth.[2][3] So—relative to the organization’s stated social mission—all the talent, money, and effort SpaceX has absorbed might ultimately produce far less value than it could have.
A more concise way to put the concern here: Popular writing on talent identification is often implicitly asking the question “How can we identify future Elon Musks?” But, for the most part, longtermist non-profits shouldn’t be looking to put future Elon Musks into leadership positions.[4]
I have in mind, for example, advice given by Y Combinator and advice given in Talent.
Another example: It’s possible that many highly successful environmentalist organizations/groups have ended up causing net harm to the environment, by being insufficiently self-skeptical and reflective when deciding how to approach nuclear energy issues.
I’ve encountered the argument that a Mars mission will reduce existential risk by fostering a common human identity and political unity, or hope for the future, which will in turn lead to policies that reduce other existential risks (e.g. bioterrorism or nuclear war). But I think this argument also doesn’t hold up to scrutiny. Focusing just on the domestic level, for example, the Apollo program had far from universal support, and the decade that followed the moon landing was very far from a time of optimism and unity in the US. At the international level, the program was also, of course, largely motivated by great power competition with the Soviet Union.
A follow-up thought: Ultimately, outside of earning-to-give ventures, we probably shouldn’t expect the longtermist community (or at least the best version of it) to house many extremely entrepreneurial people. There will be occasional leaders who are extremely high on both entrepreneurialism and reflectiveness (I can currently think of at least a couple); however, since these two traits don’t seem to be strongly correlated, this will probably only happen pretty rarely. It’s also often hard to keep exceptionally entrepreneurial people satisfied in non-leadership positions—since, almost by definition, autonomy is deeply important to them—so there may not be many opportunities, in general, to harness the talents of people who are exceptionally high on entrepreneurialism but mediocre on reflectiveness.
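As a rough illustration of why that last claim holds: if each trait were roughly normally distributed across the relevant talent pool and only weakly correlated (assumptions I’m making purely for illustration), then only a tiny fraction of people would land in the top 1% on both traits at once. A minimal sketch:

```python
# Minimal, purely illustrative sketch: assumes bivariate-normal traits with a
# weak correlation of 0.2. None of these numbers are claims about real people.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
rho = 0.2  # assumed weak correlation between the two traits

samples = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
entrepreneurialism, reflectiveness = samples[:, 0], samples[:, 1]

# "Extremely high" = top 1% on each trait, judged separately.
cut_e = np.quantile(entrepreneurialism, 0.99)
cut_r = np.quantile(reflectiveness, 0.99)
share_both = np.mean((entrepreneurialism > cut_e) & (reflectiveness > cut_r))

# Prints a share of a few hundredths of a percent, far below the 1% you would
# see if the two traits moved in lockstep.
print(f"Top 1% on both traits: {share_both:.4%}")
```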
I think that’s mostly right, with a couple of caveats:
You only mentioned non-profits, but I think most of this applies to other longtermist organizations with pretty illegible missions. Maybe Anthropic is an example.
Some organizations with longtermist missions need not aim to maximise anything particularly illegible. In these cases, entrepreneurialism will often be very important, including in highly autonomous roles. For example, a biosecurity organization could be trying to design and produce, at very large scale, “Super PPE”, such as masks engineered with extreme events in mind.
Like SpaceX, which initially aimed to significantly reduce the cost, and improve the supply, of routine space flight, the Super PPE project would need to improve upon existing PPE designed for use in extreme events, which is “bulky, highly restrictive, and insufficiently abundant”. (Alvea might be another example, but I don’t know enough about them.)
This suggests a division of labour where project missions are defined by individuals outside the organization, as with Super PPE, and then executed by others who are high on entrepreneurialism. Note that, in hiring for leadership roles in the organization, this will mean placing more weight on entrepreneurialism than on self-skepticism and reflectiveness. While Musk did a poor job defining SpaceX’s mission, he did an excellent job executing it.
The point that leaders extremely high on both traits will be rare seems true. It also suggests that if you can be extremely high on both, you’ll bring significant counterfactual value.
Good points—those all seem right to me!
A follow-on:
The above post focused on the idea that certain traits—reflectiveness and self-skepticism—are more valuable in the context of non-profits (especially ones with long-term missions) than they are in the context of startups.
I also think that certain traits—drivenness, risk-tolerance, and eccentricity—are less valuable in the context of non-profits than they are in the context of startups.
Hiring advice from the startup world often suggests that you should be looking for extraordinarily driven, risk-tolerant people with highly idiosyncratic perspectives on the world.[1] And, in the context of for-profit startups, it makes sense that these traits would be crucial.
A startup’s success will often depend on its ability to outcompete large, entrenched firms in some industry (e.g. taxi companies, hotels, tech giants). To do that, an extremely high level of drivenness may be necessary to compensate for lower resource levels, lower levels of expertise, and weaker connections to gatekeepers. Or you may need to be willing to take certain risks (e.g. regulatory/PR/enemy-making risks) that would slow down existing companies in pursuing certain opportunities. Or you may need to simply see an opportunity that virtually no one else would (despite huge incentives to see it), because you have an idiosyncratic way of seeing the world. Having all three of these traits (extreme drivenness, risk tolerance, idiosyncrasy) may be necessary for you to have any plausible chance of success.
I think that all of these traits are still valuable in the non-profit world, but I also think they’re comparatively less valuable (especially if you’re lucky enough to have secure funding). There’s simply less direct competition in the non-profit world. Large, entrenched non-profits also have much weaker incentives to find and exploit impact opportunities. Furthermore, the non-profit world isn’t even that big to begin with. So there’s no reason to assume all the low-hanging fruit have been plucked or to assume that large non-profits will crush you by default.[2]
For example: To accomplish something that (e.g.) the Gates Foundation hasn’t already accomplished, I don’t think you need extraordinary drivenness, risk-tolerance, or idiosyncrasy.[3]
Addendum that occurred to me while writing this follow-up: I also think that (at least given secure funding) these traits are less crucial in the non-profit world than they are in academia. Academic competition is more intense than non-profit competition and academics have stronger incentives to find new, true ideas than non-profits have to find and exploit opportunities to do good.
This seems to be roughly the perspective taken by the book Talent, for example.
In fact—unlike in the for-profit start-up world—you should actually consider it a good outcome if a large non-profit starts copying your idea, implements it better than you, and makes your own organization redundant!
To be clear: These things—especially drivenness—are all important. But, unlike in the startup world, major success doesn’t necessarily require setting them to extreme values. I think we should be wary of laser-focusing on these traits in the way a VC would.