Thread is too long to fully process, but I’ll try to re-phrase what seems to be a crucial & perhaps-not-disputed point here:
If you have big enough wins on your record, early on, you can do pretty much anything.
If you’re optimizing for max impact in a decades-long career (which Michelle & Richard both seem to agree is the right framing), then pursuing opportunities with extreme growth trajectories seems like a good strategy.
Where are the Elon Musks and Peter Thiels (early career trajectory-wise) in the EA community? Why are so few EAs making it into leadership positions at some of the most critical orgs?
When talking to someone really talented graduating from university and deciding what to do next, I’d probably ask them why what they’re doing immediately might allow for outsize returns / unreasonably fast growth (in terms of skills, network, credibility, money, etc.). If no compelling answer, I’d say they’re setting themselves up for relative mediocrity / slow path to massive impact. It’s similar to the Sheryl Sandberg quote (relaying advice she got from Eric Schmidt) implying that one should join a breakout-stage company to supercharge one’s career, no matter what the role: “If you’re offered a seat on a rocket ship, don’t ask what seat. Just get on.”
I think the point I’m making here is an extension of this: early on, don’t ask which rocket ship, either. Just get on one, and you’ll win. (Make sure to build systems that prevent value drift.)
The path to impact is much easier once you’ve solved for network, skills, finances, credibility, leadership ability, and confidence (this last one is crucial and under-discussed). At that point, time becomes the only bottleneck.
This is written primarily for generalist (i.e. non-technical / non-research) talent. Technical & research-oriented careers probably follow different patterns, though the underlying principles probably still apply.
[@80k/Richard: I’d be curious to get a 1-ish-sentence response, e.g. “You’re wrong, and need to go do some reading and come back so we can have an informed discussion” or “You’re right, and this matches up with how we think about things”. PS, this is my first-ever online interaction with the EA community!]
Welcome to the EA community! I liked your first post on the Forum, and I hope you’ll come back to make many more.
Now that that’s been said, here’s my response, which may sound oppositional, but which I intend to be more along the lines of “trying to get on the same page, since I think we actually agree on a lot of stuff”. Overall, I think your vision of success is pretty close to what people at 80K might say (though I could be wrong and I certainly don’t speak for them).
Where are the Elon Musks and Peter Thiels (early career trajectory-wise) in the EA community? Why are so few EAs making it into leadership positions at some of the most critical orgs?
The thing about Elon Musk and Peter Thiel is that it was hard to tell that they would become Musk and Thiel. There are many more “future Elon Musks” than there are “people who become Elon Musk after everything shakes out”.
For all I know, we may have some of those people in the community; I think we certainly have a higher expected number of those people per capita than almost any other community in the world, even if that number is something like “0.3”. (The tech-billionaire base rate is very low.)
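(For concreteness, here’s the toy arithmetic behind a number like “0.3”, with purely made-up inputs: if the community has $N$ highly engaged members and the base rate of reaching Musk-level success is $p$, the expected count is just $N \cdot p$, e.g.

$$\mathbb{E}[\text{Musk-level successes}] = N \times p = 5{,}000 \times 0.00006 = 0.3.$$

Even a base rate far above the general population’s still yields an expected count below one.)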
I don’t really know what you mean by “the most critical orgs”, since EA seems to be doing well there already:
The Open Philanthropy Project has made hundreds of millions of dollars in grants and is set to do hundreds of millions more—they aren’t on the scale of Sequoia or Y Combinator, but they’re similar to a mid-size venture fund (if my estimates about those funds aren’t too off-base).
GiveWell is moving $40-50 million/year and doesn’t seem likely to slow down. In fact, they’re looking to double in size and start funding lots of new projects in areas like “changing national law”.
DeepMind and OpenAI, both of which could become some of the most influential technical projects in history, have a lot of employees (including executives) who are familiar with EA or active participants in the community.
A former head of IARPA (roughly speaking, the US intelligence community’s equivalent of DARPA) is now the head of an AI think tank in Washington, DC, whose other staffers also have really impressive resumes. (Tantum Collins, a non-executive researcher who appears halfway down the page, is a “Principal for Research and Strategy” at DeepMind and co-authored a book with Stanley McChrystal.)
It’s true that we haven’t gotten the first EA senator, or the first EA CEO of a FAANG company (Zuckerberg isn’t quite there yet), but I think we’re making reasonable progress for a movement that was founded ten years ago in a philosopher’s house and didn’t really “professionalize” until 2013 or so.
Meanwhile...
EA philosophy seems to have influenced, or at least caught the attention of, many people who are already extremely successful (from Gates and Musk to Vitalik Buterin and Patrick Collison).
We have support from some of the world’s most prominent philosophers, quite a few other major-league academics (e.g. Philip Tetlock), and several of the world’s best poker players (who not only donate a portion of their tournament winnings, but also spend their spare time running fundraisers for cash grants and AI safety).
We have a section that’s at least 50% devoted to EA causes in a popular online publication.
There’s definitely room to grow and improve, but the trajectory looks… well, pretty good. Anecdotally, I didn’t pay much attention to new developments in EA between mid-2016 and mid-2018, and I was struck by how much progress had been made when I checked back in.
When talking to someone really talented graduating from university and deciding what to do next, I’d probably ask them why what they’re doing immediately might allow for outsize returns / unreasonably fast growth (in terms of skills, network, credibility, money, etc.). If no compelling answer, I’d say they’re setting themselves up for relative mediocrity / slow path to massive impact.
I generally agree with this, though one should be careful with one’s rocket ship, lest it crash. Theranos is the most obvious example; Tesla may yet become another, and plenty of others burned up in the atmosphere without getting much public attention.

--

I work for CEA, but these views are my own.
I agree that all of the things you listed are great. But note that almost all of them look like “convince already-successful people of EA ideas” rather than “talented young EAs doing exceptional things”. For the purposes of this discussion, the main question isn’t when we get the first EA senator, but whether the advice we’re giving to young EAs will make them more likely to become senators or billion-dollar donors or other cool things. And yes, there’s a strong selection bias here because obviously if you’re young, you’ve had less time to do cool things. But I still think your argument weighs only weakly against Vishal’s advocacy of what I’m tempted to call the “Silicon Valley mindset”.
So the empirical question here is something like, if more EAs steer their careers based on a Silicon Valley mindset (as opposed to an EA mindset), will the movement overall be able to do more good? Personally I think that’s true for driven, high-conscientiousness generalists, e.g. the sort of people OpenPhil hires. For other people, I guess what I advocate in the post above is sort of a middle ground between Vishal’s “go for extreme growth” and the more standard EA advice to “go for the most important cause areas”.
I’m OK with calling this the “Silicon Valley mindset”—since it recommends a growth-oriented career mindset, like the Breakout List philosophy, with the ultimate success metric being impact—though it’s important to note that I’m not advocating for everybody to go start companies. Rather, I’m describing a shift in focus towards extreme career capital growth ASAP (rather than direct impact ASAP) in any reasonably relevant domain, subject to the constraint of robustly avoiding value drift. This seems like the optimal approach for top talent, in aggregate, if we’re optimizing for cumulative impact over many decades, and if we think we can apply the venture capitalist mindset to impact (thinking of early-career talent as akin to early-stage startups).
Thanks for this reply!

Sorry for not realizing you worked at DeepMind; my comment would have looked different had I known about our shared context. (Also, consider writing a bio!)
I think we’re aligned in our desire to see more early-career EAs apply to those roles (and on most other things). My post aimed to:
1. Provide some background on some of the more “successful” people associated with EA.
2. Point out that “recruiting people with lots of career capital” may be comparable to “acquiring career capital” as a strategy to maximize impact. Of course, the latter makes the former easier, if you actually succeed, but it also takes more time.
On point (2): What fraction of the money/social capital EA will someday acquire “already exists”? Is our future going to look more like “lots of EA people succeeded”, or “lots of successful people found EA”?
Historically, both strategies seem to have worked for different social movements; the most successful neoliberals grew into their influence, while the Fabian Society relied on recruiting top talent. (I’m not a history expert, and this could be far too simple.)
--
One concern I have about the “maximize career capital” strategy is that it has tricky social implications; it’s easy for a “most people should do X” message to become “everyone who doesn’t do X is wrong”, as Richard points out. But career capital acquisition doesn’t lead to as much direct competition between EAs, and could produce more skill-per-person in the process, so perhaps it’s actually just better for most people.
Some of my difficulty in grasping the big picture for the community as a whole is that I don’t have a sense for what early-career EAs are actually working on. Sometimes, it feels like everyone is a grad student or FAANG programmer (not much potential for outsize returns). At other times, it feels like everyone is trying to start a company or a charity (lots of potential, lots of risk).
Is there any specific path you think not enough people in the community are taking from a “big wins early” perspective? Joining startups? Studying a particular field?
--
Finally, on the subject of risk, I think I’m going to take this comment and turn it into a post. (Brief summary: Someday, when we look back on the impact of EA, we’ll have a good sense for whose work was “most impactful”, but that shouldn’t matter nearly as much to our future selves as the fact that many unsuccessful people still tried their best to do good, and were also part of the movement’s “grand story”.) I hope we keep respecting good strategy and careful thinking, whether those things are attached to high-risk or low-risk pursuits.
I don’t have enough data to know if there are specific paths not enough people are taking, but I’m pretty certain there’s a question that not enough people are asking within the paths they’re taking: how is what I’m doing *right now* going to lead to a 10x/100x/1,000x win, in expectation? What’s the Move 37 I’m making that nobody else is seeing? This mentality can be applied in pretty much any career path.
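(To make “in expectation” concrete, a toy comparison with invented numbers: a path with a 5% chance of a 100x win beats a guaranteed 2x outcome in expected value, even though it fails 95% of the time:

$$0.05 \times 100 = 5 \quad\text{vs.}\quad 1.0 \times 2 = 2.$$

The specific numbers don’t matter; the point is to force an explicit estimate of the tail outcome and its probability.)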
Note that your argument here is roughly Ben Pace’s position in this post, which we co-wrote. I argued against Ben’s position in the post because I thought it was too extreme, but I agree with both of you that most EAs aren’t going far enough in that direction.