Welcome to the EA community! I liked your first post on the Forum, and I hope you'll come back to make many more.
Now that that's been said, here's my response, which may sound oppositional, but which I intend to be more along the lines of "trying to get on the same page, since I think we actually agree on a lot of stuff". Overall, I think your vision of success is pretty close to what people at 80K might say (though I could be wrong and I certainly don't speak for them).
Where are the Elon Musks and Peter Thiels (early career trajectory-wise) in the EA community? Why are so few EAs making it into leadership positions at some of the most critical orgs?
The thing about Elon Musk and Peter Thiel is that it was hard to tell that they would become Musk and Thiel. There are many more "future Elon Musks" than there are "people who become Elon Musk after everything shakes out".
For all I know, we may have some of those people in the community; I think we certainly have a higher expected number of those people per capita than almost any other community in the world, even if that number is something like "0.3". (The tech-billionaire base rate is very low.)
I don't really know what you mean by "the most critical orgs", since EA seems to be doing well there already:
The Open Philanthropy Project has made hundreds of millions of dollars in grants and is set to do hundreds of millions more; they aren't on the scale of Sequoia or Y Combinator, but they're similar to a mid-size venture fund (if my estimates about those funds aren't too off-base).
GiveWell is moving $40-50 million/year and doesn't seem likely to slow down. In fact, they're looking to double in size and start funding lots of new projects in areas like "changing national law".
DeepMind and OpenAI, both of which could become some of the most influential technical projects in history, have a lot of employees (including executives) who are familiar with EA or active participants in the community.
A former head of IARPA, the CIA's R&D department (roughly speaking), is now the head of an AI think tank in Washington, DC, whose other staffers also have really impressive resumes. (Tantum Collins, a non-executive researcher who appears halfway down the page, is a "Principal for Research and Strategy" at DeepMind and co-authored a book with Stanley McChrystal.)
It's true that we haven't gotten the first EA senator, or the first EA CEO of a FAANG company (Zuckerberg isn't quite there yet), but I think we're making reasonable progress for a movement that was founded ten years ago in a philosopher's house and didn't really "professionalize" until 2013 or so.
Meanwhile...
EA philosophy seems to have influenced, or at least caught the attention of, many people who are already extremely successful (from Gates and Musk to Vitalik Buterin and Patrick Collison).
We have support from some of the world's most prominent philosophers, quite a few other major-league academics (e.g. Philip Tetlock), and several of the world's best poker players (who not only donate a portion of their tournament winnings, but also spend their spare time running fundraisers for cash grants and AI safety).
We have a section that's at least 50% devoted to EA causes in a popular online publication.
There's definitely room to grow and improve, but the trajectory looks… well, pretty good. Anecdotally, I didn't pay much attention to new developments in EA between mid-2016 and mid-2018, and I was struck by how much the movement had grown when I started paying attention again.
When talking to someone really talented graduating from university and deciding what to do next, I'd probably ask them why what they're doing immediately might allow for outsize returns / unreasonably fast growth (in terms of skills, network, credibility, money, etc.). If no compelling answer, I'd say they're setting themselves up for relative mediocrity / slow path to massive impact.
I generally agree with this, though one should be careful with one's rocket ship, lest it crash. Theranos is the most obvious example; Tesla may yet become another, and plenty of others burned up in the atmosphere without getting much public attention.
I agree that all of the things you listed are great. But note that almost all of them look like "convince already-successful people of EA ideas" rather than "talented young EAs doing exceptional things". For the purposes of this discussion, the main question isn't when we get the first EA senator, but whether the advice we're giving to young EAs will make them more likely to become senators or billion-dollar donors or other cool things. And yes, there's a strong selection bias here because obviously if you're young, you've had less time to do cool things. But I still think your argument weighs only weakly against Vishal's advocacy of what I'm tempted to call the "Silicon Valley mindset".
So the empirical question here is something like: if more EAs steer their careers based on a Silicon Valley mindset (as opposed to an EA mindset), will the movement overall be able to do more good? Personally I think that's true for driven, high-conscientiousness generalists, e.g. the sort of people OpenPhil hires. For other people, I guess what I advocate in the post above is sort of a middle ground between Vishal's "go for extreme growth" and the more standard EA advice to "go for the most important cause areas".
I'm ok with calling this the "Silicon Valley mindset" (since it recommends a growth-oriented career mindset, like the Breakout List philosophy, with the ultimate success metric being impact), though it's important to note that I'm not advocating for everybody to go start companies. Rather, I'm describing a shift in focus towards extreme career capital growth ASAP (rather than direct impact ASAP) in any reasonably relevant domain, subject to the constraint of robustly avoiding value drift. This seems like the optimal approach for top talent, in aggregate, if we're optimizing for cumulative impact over many decades, and if we think we can apply the venture capitalist mindset to impact (thinking of early-career talent as akin to early-stage startups).
Sorry for not realizing you worked at DeepMind; my comment would have looked different had I known about our shared context. (Also, consider writing a bio!)
I think we're aligned in our desire to see more early-career EAs apply to those roles (and on most other things). My post aimed to:
1. Provide some background on some of the more "successful" people associated with EA.
2. Point out that "recruiting people with lots of career capital" may be comparable to "acquiring career capital" as a strategy to maximize impact. Of course, the latter makes the former easier, if you actually succeed, but it also takes more time.
On point (2): What fraction of the money/social capital EA will someday acquire "already exists"? Is our future going to look more like "lots of EA people succeeded", or "lots of successful people found EA"?
Historically, both strategies seem to have worked for different social movements; the most successful neoliberals grew into their influence, while the Fabian Society relied on recruiting top talent. (I'm not a history expert, and this could be far too simple.)
--
One concern I have about the "maximize career capital" strategy is that it has tricky social implications; it's easy for a "most people should do X" message to become "everyone who doesn't do X is wrong", as Richard points out. But career capital acquisition doesn't lead to as much direct competition between EAs, and could produce more skill-per-person in the process, so perhaps it's actually just better for most people.
Some of my difficulty in grasping the big picture for the community as a whole is that I don't have a sense for what early-career EAs are actually working on. Sometimes, it feels like everyone is a grad student or FAANG programmer (not much potential for outsize returns). At other times, it feels like everyone is trying to start a company or a charity (lots of potential, lots of risk).
Is there any specific path you think not enough people in the community are taking from a "big wins early" perspective? Joining startups? Studying a particular field?
--
Finally, on the subject of risk, I think I'm going to take this comment and turn it into a post. (Brief summary: Someday, when we look back on the impact of EA, we'll have a good sense for whose work was "most impactful", but that shouldn't matter nearly as much to our future selves as the fact that many unsuccessful people still tried their best to do good, and were also part of the movement's "grand story".) I hope we keep respecting good strategy and careful thinking, whether those things are attached to high-risk or low-risk pursuits.
I don't have enough data to know if there are specific paths not enough people are taking, but I'm pretty certain there's a question that not enough people are asking within the paths they're taking: how is what I'm doing *right now* going to lead to a 10x/100x/1,000x win, in expectation? What's the Move 37 I'm making that nobody else is seeing? This mentality can be applied in pretty much any career path.
Note that your argument here is roughly Ben Pace's position in this post, which we co-wrote. I argued against Ben's position in the post because I thought it was too extreme, but I agree with both of you that most EAs aren't going far enough in that direction.
--
I work for CEA, but these views are my own.