Figuring out how to give the right advice to the right person is a hard challenge. That’s why I framed skilling up outside EA as being a good alternative to “banging your head against the wall indefinitely.” I think the link I added to the bottom of this post addresses the “many paths” component.
The main goal of my post, though, is to talk about why there’s a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if you’ve become convinced that you can’t surpass it at this stage in your journey.
It would be helpful to find a test to distinguish EAs who should keep trying from those who should exit, skill up, and return later. Probably one-on-one mentorship, coupled with data on what sorts of things EA orgs look for in an applicant, and the distribution of applicant quality, would be the way to devise such a test.
A team capable of executing a high-quality project to create such a test would (if I were an EA fund) definitely be worthy of a grant!
Also, here’s a somewhat relevant intervention idea that seems interesting to me, copied from an upcoming post in my sequence on improving the EA-aligned research pipeline (so this passage focuses on research roles, but you can easily extrapolate the ideas):
Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
For example:
Improving selection processes at EA-aligned research organisations
Increasing the number and usefulness of referrals of candidates from one selection process (e.g., for a job or a grant) to another selection process.
This already happens, but could perhaps be improved by:
Increasing how often it happens
Increasing how well-targeted the referrals are
Increasing the amount of information provided to the second selection process?
Increasing how much of the second selection process the candidate can “skip”?
Creating something like a “Triplebyte for EA researchers”, which could scalably evaluate aspiring/junior researchers, identify talented/promising ones, and then recommend them to hirers/grantmakers^[This idea was suggested as a possibility by a commenter on a draft of this post.]
This could resolve most of the vetting constraints if it could operate efficiently and was trusted by the relevant hirers/grantmakers
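The “sharing vetting” idea above could be made concrete as a portable record that one selection process hands to the next. Here is a minimal sketch; all field names and stage names are invented for illustration, not taken from any existing system:

```python
# Minimal sketch of a portable "vetting record" one selection process
# could pass to another, so a candidate can skip stages the second
# process agrees to honor. All names here are invented illustrations.
from dataclasses import dataclass, field

@dataclass
class VettingRecord:
    candidate_id: str
    source_process: str                        # e.g. a job or grant round
    stages_passed: list = field(default_factory=list)
    reviewer_notes: str = ""

    def transferable_stages(self, honored_by_target: set) -> list:
        """Return the passed stages that the target process will honor."""
        return [s for s in self.stages_passed if s in honored_by_target]

record = VettingRecord("c-001", "grant-round-2021",
                       stages_passed=["work-test", "structured-interview"])
# A second org that only trusts work-tests would let the candidate
# skip just that stage:
print(record.transferable_stages({"work-test"}))  # ['work-test']
```

The interesting design questions are all in what goes into `honored_by_target`: which stages a hirer is willing to trust from someone else’s process is exactly the trust problem the surrounding text describes.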
Triplebyte’s value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:
Less biased
More predictive of success-linked technical prowess
Convenient (since companies don’t have to run the technical interviews themselves)
If there’s room for an “EA Triplebyte,” that would suggest that EA orgs have at least one of those three problems.
So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.
Are they looking at the same sorts of skills? Are their tests any good? Are the tests time-consuming and burdensome for EA orgs? Alternatively, do many EA orgs pass up on needed hires because they don’t have the short-term capacity to evaluate them?
Then you’d need to consider what alternative tests would be a better measurement of technical research prowess, and how to show that they are better predictive of success than present technical interviews.
It would also be important to determine the scale of the problem. Eyeballing this list, there are maybe 75 EA-related organizations. How many hires do they make per month? How often does their search fail for lack of qualified candidates? How many hours do they spend on technical interviews each time? Will you be testing not just for EA-specific skills but for general research capacity (massively broadening your market, but also increasing the challenge of addressing all their needs)?
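The scale question lends itself to a quick back-of-envelope calculation. Every number below except the rough org count is a placeholder assumption; the point is the shape of the estimate, not the output:

```python
# Back-of-envelope: annual screening hours across EA orgs, and how
# much a shared vetting service might absorb. All inputs except the
# org count are placeholder assumptions.
n_orgs           = 75   # rough count of EA-related organizations
hires_per_org    = 2    # assumed hires per org per year
screens_per_hire = 20   # assumed candidates screened per hire
hours_per_screen = 2    # assumed staff hours per screen

total_hours = n_orgs * hires_per_org * screens_per_hire * hours_per_screen
print(f"Total screening hours/year: {total_hours}")  # 6000

fraction_outsourced = 0.5  # share a trusted service might take on
print(f"Hours potentially saved: {total_hours * fraction_outsourced:.0f}")  # 3000
```

Under these guesses the prize is a few thousand staff-hours a year, i.e. one-to-two full-time equivalents across the whole movement, which is useful context for how large such a service could sensibly be.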
Finally, you’d need to roll that up into a convenient, trustworthy, and reliable package that clients are excited to use instead of their current approach.
This seems like a massive amount of work, demanding a strong team, adequate funding and prior interest by EA orgs, and long-term commitment. It also sounds like it might be really valuable if done well.
Thanks for these thoughts—I think I’ll add a link to your comment from that section of my post.
I think your analysis basically sounds correct to me. I would also be quite surprised if this came into existence (and was actually used by multiple orgs) in the next 10 years, and I don’t think it’s likely to be the highest priority intervention for improving the EA-aligned research pipeline, though I’d be keen to at least see people flesh out and explore the idea a bit more.
FWIW, I’m guessing that this commenter on my doc meant something a little bit more distant from Triplebyte specifically than what your comment suggests—in particular, I don’t think the idea would be just to conduct technical interviews, but also other parts of the selection process. At least, that’s how I interpreted the comment, and seems better to me, given that I think it’s relatively rare for EA orgs to actually have technical interviews in their selection processes. (There may often be a few questions like that, but without it being the main focus for the interview. Though I also might be misinterpreting what a technical interview is—I haven’t worked in areas like engineering or IT.)
My sense is that Triplebyte focuses on “can this person think like an engineer” and “which specific math/programming skills do they have, and how strong are they?” Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.
It just seems to me that Triplebyte is powered by a mature industry that’s had decades of time and massive amounts of money invested into articulating its own needs and interests. Whereas I don’t think EA is old or big or wealthy enough to have a sharp sense of exactly what the stable needs are.
For a sense of scale, there are almost 4 million programmers in the USA. Triplebyte launched just 5 years ago. It took millions of people working as programmers to generate adequate demand and capacity for that service to be successful.
All in all, my guess is that what we’re missing is charismatic founder-types. The kind of people who can take one of the problems on our long lists of cause areas, turn it into a real plan, pull together funding and a team (of underutilized people), and make it go.
Figuring out how to teach that skill, or replace it with some other founding mechanism, would of course be great. It’s necessary. Otherwise, we’re kind of just cannibalizing one highly-capable project to create another. Which is pretty much what we do when we try to attract strong outside talent and “convert” them to EA.
Part of the reason I haven’t spent more time trying to found something right off the bat is that I thought EA could benefit more if I developed a skillset in technology. But another reason is that I just don’t have the slack. I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.
Most neophytes don’t have that kind of slack. That’s why I especially lean on the side of “if it hurts, don’t do it.”
I don’t have any negativity toward the encouragement to try things and be audacious. At the same time, there’s a massive amount of hype and exploitative stuff in the entrepreneurship world. This “Think of the guy who wrote Winzip! He made millions of dollars, and you can do it too!” line that business gurus use to suck people into their self-help sites and YouTube channels and so on.
The EA movement had some low-hanging fruit to pick early on. It’s obviously a huge win for us to have great resources like 80k, or significant organizations like OpenPhil. Some of these were founded by world-class experts (Pete Singer) and billionaires, but some (80k) were founded by some young audacious people not too far out of grad school. But those needs, it seems to me, are filled. The world’s pretty rich. It’s easier to address a funding shortfall or an information shortfall, than to get concrete useful direct work done.
Likewise in the business world, it’s easier to find money for a project and outline the general principles of how to run a good business, than to actually develop and successfully market a valuable new product. There’s plenty of money out there, and not a ton of obvious choices to spend it on. Silicon Valley’s looking for unicorns. We’re looking for unicorns too. There aren’t many unicorns.
I think that the “EA establishment’s” responsibility to neophytes is to tell them frankly that there’s a very high bar, it’s there for a reason, and for your own sake, don’t hurt yourself over and over by failing to clear it. Go make yourself big and strong somewhere else, then come back here and show us what you can do. Tell people it’s hard, and invite them back when they’re ready for that kind of challenge.
I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.
I don’t think this is true, at least not as a general rule. I think you can do both (have a safe career and pursue something entrepreneurial) if you make small, focused bets to begin with and build from there. Related discussion here.
My sense is that Triplebyte focuses on “can this person think like an engineer” and “which specific math/programming skills do they have, and how strong are they?” Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.
I used to work as an interviewer for TripleByte. Most companies using TripleByte put TripleByte-certified candidates through their standard technical onsite. From what I was able to gather, the value prop for companies working with TripleByte is mostly about 1. expanding their sourcing pipeline to include more quality candidates and 2. cutting down on the amount of time their engineers spend administering screens to candidates who aren’t very good.
Some of your comments make it sound like a TB-like service for EA has to be a lot better than what EA orgs are currently doing to screen candidates. Personally, I suspect there’s a lot of labor-saving value to capture if it’s merely as good as (or even a bit worse than) current screens. It might also help organizations consider a broader range of people.
Thanks for that context, John. Given that value prop, companies might use a TB-like service under two constraints:
They are bottlenecked by having too few applicants. In this case, they have excess interviewing capacity, or more jobs than applicants. They hope that by investigating more applicants through TB, they can find someone outstanding.
Their internal headhunting process has an inferior quality distribution relative to the candidates they get through TB. In this case, they believe that TB can provide them with a better class of applicants than their own job search mechanisms can identify. In effect, they are outsourcing their headhunting for a particular job category.
Given that EA orgs seem primarily to lack specific forms of domain expertise, as well as well-defined project ideas/teams, what would an EA Triplebyte have to achieve?
They’d need to be able to interface with EA orgs and identify the specific forms of domain expertise that are required. Then they’d need to be able to go out and recruit those experts, who might never have heard of EA, and get them interested in the job. They’d be an interface to the expertise these orgs require. Push a button, get an expert.
That seems plausible. Triplebyte evokes the image of a huge recruiting service meant to fill cubicles with basically-competent programmers who are pre-screened for the in-house technical interview. Not to find unusually specific skills for particular kinds of specialist jobs, which it seems is what EA requires at this time.
That sort of headhunting job could be done by just one person. Their job would be to do a whole lot of cold-calling, getting meetings with important people, doing the legwork that EA orgs don’t have time for. Need five minutes of a Senator’s time? Looking to pull together a conference of immunologists to discuss biosafety issues from an EA perspective? That’s the kind of thing such an org would strive to make more convenient for EA orgs.
As they gained experience, they would also be able to help EA orgs anticipate what sort of projects the domain experts they’d depend upon would be likely to spring for. I imagine that some EA orgs must periodically come up with, say, ideas that would require some significant scientific input. Some of those ideas might be more attractive to the scientists than others. If an org like this existed, it might be able to tell those EA orgs which ones the scientists are likely to spring for.
That does seem like the kind of job that could productively exist at the intersection of EA orgs. They’d need to understand EA concepts and the relationships between institutions well enough to speak “on behalf of the movement,” while gaining a similar understanding of domains like the scientific, political, business, philanthropic, or military establishment of particular countries.
I agree that there are fewer low-hanging fruit than there used to be. On the other hand, there’s more guidance on what to do and more support for how to do it (perhaps “better maps to the trees” and “better ladders”—I think I’m plagiarising someone else on the ladder bit). I’d guess that it is now overall somewhat or significantly harder for someone in the position Ben Todd was in to make something as useful as 80k, but it doesn’t seem totally clear.
And in any case, something as useful as 80k is a high bar! I think something could be much less useful and still be very useful. And someone perhaps could “skill up” more than Ben Todd had, but only for like a couple years.
And I think there really are still a lot of fairly low-hanging fruit. I think some evidence for this is the continuing number of EA projects that seem to fill niches that obviously should be filled, seem to be providing value, and are created by pretty early-career people. (I can expand on this if you want, but I think e.g. looking at lists of EA orgs already gives a sense of what I mean.)
I agree with many parts of your comment, but I continue to think only some sizeable fraction of people should be advised to “Go make yourself big and strong somewhere else, then come back here and show us what you can do”, while also:
many people should try both approaches at first
many people should focus mostly on the explicitly EA paths (usually after trying both approaches and getting some evidence about comparative advantage)
many people should go make themselves big and strong and impactful somewhere else, and then just stay there, doing great stuff
I think it’s perhaps a little irresponsible to give public advice that’s narrower than that—narrower advice makes sense if you’re talking to a specific person and you have evidence about which of those categories of people they’re part of, but not for a public audience.
(I think it’s also fine to give public advice like “on the margin, somewhat more people should be doing X, and some ways to tell if you specifically should be doing X are Y and Z”. I think 80k’s advice tends to look like that. Though even that often gets boiled down by other people to “quick, everyone should do X!”, and then creates problems.)
I do think it would be good to get more clarity on what proportion of EAs are spending too much vs too little time pursuing explicitly EA-aligned roles (given their ultimate goals, fit, etc.), and on what proxies can be used to help people work out which group they’re in.
Though I think some decent insights can already be gleaned from, for example, posts tagged Working at EA vs Non-EA Orgs or things linked to from those posts, and one on one career advice conversations.
(And I think we can also improve on the current situation—where some people are writing themselves off and others have too narrow a focus—by just making sure we always clearly acknowledge individual variation, there being many different good paths, it taking time to work out what one is a fit for, etc.)
(I also like that Slate Star Codex post you link to, and agree that it’s relevant here.)
I agree, I should have added “or a safe career/fallback option” to that.
An EA diplomat.