It seems to me that this post is making roughly the following claims:
It's true that EA-aligned funders have a lot of money, that many people want to do "EA work", and that, despite this, many of them are not getting funded
But this doesn't mean the EA-aligned funders should lower their bar for providing funding
It also isn't a good idea for those people to just keep pursuing "EA work"
Instead, "EA work" should be seen as something you do later, and until then you should focus on gaining knowledge, skills, networks, credentials, etc. outside EA
(Implied: And this is almost entirely for the instrumental value it provides via helping you get and do well in "EA work" later)
Is this roughly what you were trying to get across?
I think I strongly agree with some nearby claims, but I find several of those claims problematic. Here are the nearby claims I'd agree with.
It's true that EA-aligned funders have a lot of money, that many people want to do "EA work", and that, despite this, many of them are not getting funded
This is the same claim as above; I agree here
But this doesn't necessarily mean the EA-aligned funders should lower their bar for providing funding
I'm currently agnostic about whether various EA funders should lower, raise, or maintain their current bars for funding
It also isn't a good idea for all those people to spend a lot of time pursuing work at explicitly EA orgs or on explicitly EA projects. (And in any case, probably all of those people should spend at least some time pursuing other types of work.)
Instead, many of those people should focus on pursuing work at other types of orgs or other types of projects
In many cases, a key reason for (4) will be for the instrumental value of getting knowledge, skills, credentials, etc., which can later help the person get more explicitly EA work. It's also nice that this costs fewer EA resources, e.g. mentorship time from EAs.
But also in many cases, a key reason for (4) will be that the non-EA work is very EA-aligned, in the sense that it itself has a substantial direct impact.
On 3: I think it can be a good idea for many people to spend some effort pursuing explicitly EA work, and for some people to focus primarily on pursuing such work, even early in their careers.
E.g., this is what I did, and I now have strong reason to believe it was a good move for me. (My evidence includes the fact I had more success in EA job applications than other job applications, and the views expressed by various people who gave feedback on my career plan. That said, I did still apply for quite a few non-EA roles, especially early on when I didn't yet have evidence pushing in favour of me doing explicitly EA roles. And I had already worked for 2 years before starting in an "EA job".)
It also seems worth stating explicitly that I think that that's basically a matter of comparative advantage, rather than how generically talented a person is. E.g., I think I'd be less good at working in government or in academia than a fair number of other people in the EA community would be, and this is part of what pushes in favour of me focusing on explicitly EA roles.
Hi Michael, thanks for your responses! I'm mainly addressing the metaphorical runner on the right in the photograph at the start of the post.
I am also agnostic about where the bar should be. But having a bar means that you have to maintain the bar in place. You don't move the bar just because you couldn't find a place to spend all your money.
For me, EA has been an activating and liberating force. It gives me a sense of direction, motivation to continue, and practical advice. I've run EA research and community development projects with Vaidehi Agarwalla, and published my own writing here and on LessWrong. These projects, plus my pursuit of a scientific research career, have been satisfying outlets for my altruistic drive.
Not everything has been successful, but I learned a lot along the way, and feel optimistic about the future.
Yet I see other people who seem very concerned and often disappointed at the difficulty they have in their own relationship with EA. Particularly, getting EA jobs and grants, or dealing with the feeling of "I want to save the world, but I don't know how!" I'm extremely optimistic that EA is and will continue to make an outsized positive impact on the world. What I'm more afraid of is that we'll generate what I call "bycatch."
I guess in terms of this metaphor, part of what I'm saying is that there are also some people who aren't "in the race" but really would do great if they joined it (maybe after a few false starts), and other people who are in the race but would be better off switching to tennis instead (and that's great too!).
And I'm a little worried about saying something like "Hey, the race is super hard! But don't feel bad, you can go train up somewhere else for a while, and then come back!"
Because some people could do great in the race already! (Even if several applications don't work out or whatever; there's a lot of variation between what different roles need, and a lot of random chance, etc.) And some of these people are erring in the direction of self-selecting out, feeling imposter syndrome, getting 4 job rejections and then thinking "well, that proves it, I'm not right for these roles", etc.
Meanwhile, other people shouldn't "train up and come back", but rather go do great things elsewhere long-term! (Not necessarily leaving the community, but just not working at an explicitly EA org or with funding from EA funders.) And some of these people are erring in the direction of having their heart set on getting into an "EA job" eventually, even if they have to train up elsewhere first.
---
I'd also be worried about messaging like "Everyone needs to get in this particular race right now! We have lots of money and lots to do and y'all need to come over here!" And it definitely seems good to push against that. But I think we can try to find a middle ground that acknowledges there are many different paths, and different ones will be better for different people at different times, and that's ok. (E.g., I think this post by Rob Wiblin does that nicely.)
Figuring out how to give the right advice to the right person is a hard challenge. That's why I framed skilling up outside EA as being a good alternative to "banging your head against the wall indefinitely." I think the link I added to the bottom of this post addresses the "many paths" component.
The main goal of my post, though, is to talk about why there's a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if you've become convinced that you can't surpass it at this stage in your journey.
It would be helpful to find a test to distinguish EAs who should keep trying from those who should exit, skill up, and return later. Probably one-on-one mentorship, coupled with data on what sorts of things EA orgs look for in an applicant, and the distribution of applicant quality, would be the way to devise such a test.
A team capable of executing a high-quality project to create such a test would (if I were an EA fund) definitely be worthy of a grant!
Also, here's a somewhat relevant intervention idea that seems interesting to me, copied from an upcoming post in my sequence on improving the EA-aligned research pipeline (so this passage focuses on research roles, but you can easily extrapolate the ideas):
Improving the vetting of (potential) researchers, and/or better "sharing" that vetting
For example:
Improving selection processes at EA-aligned research organisations
Increasing the number and usefulness of referrals of candidates from one selection process (e.g., for a job or a grant) to another selection process.
This already happens, but could perhaps be improved by:
Increasing how often it happens
Increasing how well-targeted the referrals are
Increasing the amount of information provided to the second selection process?
Increasing how much of the second selection process the candidate can "skip"?
Creating something like a "Triplebyte for EA researchers", which could scalably evaluate aspiring/junior researchers, identify talented/promising ones, and then recommend them to hirers/grantmakers^[This idea was suggested as a possibility by a commenter on a draft of this post.]
This could resolve most of the vetting constraints if it could operate efficiently and was trusted by the relevant hirers/grantmakers
Triplebyte's value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:
Less biased
More predictive of success-linked technical prowess
Convenient (since companies don't have to run the technical interviews themselves)
If there's room for an "EA Triplebyte," that would suggest that EA orgs have at least one of those three problems.
So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.
Are they looking at the same sorts of skills? Are their tests any good? Are the tests time-consuming and burdensome for EA orgs? Alternatively, do many EA orgs pass up on needed hires because they don't have the short-term capacity to evaluate them?
Then you'd need to consider what alternative tests would be a better measurement of technical research prowess, and how to show that they are more predictive of success than present technical interviews.
It would also be important to determine the scale of the problem. Eyeballing this list, there are maybe 75 EA-related organizations. How many hires do they make per month? How often does their search fail for lack of qualified candidates? How many hours do they spend on technical interviews each time? Will you be testing not for EA-specific but for general research capacity (massively broadening your market, but also increasing the challenge of addressing all their needs)?
Finally, you'd need to roll that up into a convenient, trustworthy, and reliable package that clients are excited to use instead of their current approach.
This seems like a massive amount of work, demanding a strong team, adequate funding and prior interest by EA orgs, and long-term commitment. It also sounds like it might be really valuable if done well.
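To make the scale question concrete, here is a hypothetical back-of-envelope calculation. Only the ~75-org count comes from the list mentioned above; the hiring rate and screening hours are illustrative assumptions, not data:

```python
# Hypothetical back-of-envelope estimate of the screening burden across EA orgs.
# The org count is the rough figure from the list above; the other two
# parameters are illustrative assumptions, not data.

n_orgs = 75                    # rough count of EA-related organizations
hires_per_org_per_year = 2     # assumed average hiring rate per org
screening_hours_per_hire = 20  # assumed staff-hours of screening per hire made

total_hours = n_orgs * hires_per_org_per_year * screening_hours_per_hire
print(f"Estimated screening burden: {total_hours:,} staff-hours/year")
# With these assumptions: 3,000 staff-hours/year across the ecosystem.
```

Under these assumptions, even a service that is merely as good as current screens would need to absorb a meaningful share of those hours to justify its own overhead, which is one way to frame whether the market is big enough.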
Thanks for these thoughts; I think I'll add a link to your comment from that section of my post.
I think your analysis basically sounds correct to me. I would also be quite surprised if this came into existence (and was actually used by multiple orgs) in the next 10 years, and I don't think it's likely to be the highest priority intervention for improving the EA-aligned research pipeline, though I'd be keen to at least see people flesh out and explore the idea a bit more.
FWIW, I'm guessing that this commenter on my doc meant something a little bit more distant from Triplebyte specifically than what your comment suggests. In particular, I don't think the idea would be just to conduct technical interviews, but also other parts of the selection process. At least, that's how I interpreted the comment, and it seems better to me, given that I think it's relatively rare for EA orgs to actually have technical interviews in their selection processes. (There may often be a few questions like that, but without it being the main focus for the interview. Though I also might be misinterpreting what a technical interview is; I haven't worked in areas like engineering or IT.)
My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.
It just seems to me that Triplebyte is powered by a mature industry that's had decades of time and massive amounts of money invested into articulating its own needs and interests. Whereas I don't think EA is old or big or wealthy enough to have a sharp sense of exactly what the stable needs are.
For a sense of scale, there are almost 4 million programmers in the USA. Triplebyte launched just 5 years ago. It took millions of people working as programmers to generate adequate demand and capacity for that service to be successful.
All in all, my guess is that what we're missing is charismatic founder-types. The kind of people who can take one of the problems on our long lists of cause areas, turn it into a real plan, pull together funding and a team (of underutilized people), and make it go.
Figuring out how to teach that skill, or replace it with some other founding mechanism, would of course be great. It's necessary. Otherwise, we're kind of just cannibalizing one highly-capable project to create another. Which is pretty much what we do when we try to attract strong outside talent and "convert" them to EA.
Part of the reason I haven't spent more time trying to found something right off the bat is that I thought EA could benefit more if I developed a skillset in technology. But another reason is that I just don't have the slack. I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.
Most neophytes don't have that kind of slack. That's why I especially lean on the side of "if it hurts, don't do it."
I don't have any negativity toward the encouragement to try things and be audacious. At the same time, there's a massive amount of hype and exploitative stuff in the entrepreneurship world. This "Think of the guy who wrote Winzip! He made millions of dollars, and you can do it too!" line that business gurus use to suck people into their self-help sites and YouTube channels and so on.
The EA movement had some low-hanging fruit to pick early on. It's obviously a huge win for us to have great resources like 80k, or significant organizations like OpenPhil. Some of these were founded by world-class experts (Peter Singer) and billionaires, but some (80k) were founded by some young audacious people not too far out of grad school. But those needs, it seems to me, are filled. The world's pretty rich. It's easier to address a funding shortfall or an information shortfall than to get concrete useful direct work done.
Likewise in the business world, it's easier to find money for a project and outline the general principles of how to run a good business than to actually develop and successfully market a valuable new product. There's plenty of money out there, and not a ton of obvious choices to spend it on. Silicon Valley's looking for unicorns. We're looking for unicorns too. There aren't many unicorns.
I think that the "EA establishment's" responsibility to neophytes is to tell them frankly that there's a very high bar, it's there for a reason, and for your own sake, don't hurt yourself over and over by failing to clear it. Go make yourself big and strong somewhere else, then come back here and show us what you can do. Tell people it's hard, and invite them back when they're ready for that kind of challenge.
I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.
I don't think this is true, at least not as a general rule. I think you can do both (have a safe career and pursue something entrepreneurial) if you make small, focused bets to begin with and build from there. Related discussion here.
My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.
I used to work as an interviewer for TripleByte. Most companies using TripleByte put TripleByte-certified candidates through their standard technical onsite. From what I was able to gather, the value prop for companies working with TripleByte is mostly about 1. expanding their sourcing pipeline to include more quality candidates and 2. cutting down on the amount of time their engineers spend administering screens to candidates who aren't very good.
Some of your comments make it sound like a TB-like service for EA has to be a lot better than what EA orgs are currently doing to screen candidates. Personally, I suspect there's a lot of labor-saving value to capture if it is merely as good as (or even a bit worse than) current screens. It might also help organizations consider a broader range of people.
Thanks for that context, John. Given that value prop, companies might use a TB-like service under two constraints:
They are bottlenecked by having too few applicants. In this case, they have excess interviewing capacity, or more jobs than applicants. They hope that by investigating more applicants through TB, they can find someone outstanding.
Their internal headhunting process has an inferior quality distribution relative to the candidates they get through TB. In this case, they believe that TB can provide them with a better class of applicants than their own job search mechanisms can identify. In effect, they are outsourcing their headhunting for a particular job category.
Given that EA orgs seem primarily to lack specific forms of domain expertise, as well as well-defined project ideas/teams, what would an EA Triplebyte have to achieve?
They'd need to be able to interface with EA orgs and identify the specific forms of domain expertise that are required. Then they'd need to be able to go out and recruit those experts, who might never have heard of EA, and get them interested in the job. They'd be an interface to the expertise these orgs require. Push a button, get an expert.
That seems plausible. Triplebyte evokes the image of a huge recruiting service meant to fill cubicles with basically-competent programmers who are pre-screened for the in-house technical interview. Not to find unusually specific skills for particular kinds of specialist jobs, which it seems is what EA requires at this time.
That sort of headhunting job could be done by just one person. Their job would be to do a whole lot of cold-calling, getting meetings with important people, doing the legwork that EA orgs don't have time for. Need five minutes of a Senator's time? Looking to pull together a conference of immunologists to discuss biosafety issues from an EA perspective? That's the sort of thing this sort of org would strive to make more convenient for EA orgs.
As they gained experience, they would also be able to help EA orgs anticipate what sort of projects the domain experts they'd depend upon would be likely to spring for. I imagine that some EA orgs must periodically come up with, say, ideas that would require some significant scientific input. Some of those ideas might be more attractive to the scientists than others. If an org like this existed, it might be able to tell those EA orgs which ones the scientists are likely to spring for.
That does seem like the kind of job that could productively exist at the intersection of EA orgs. They'd need to understand EA concepts and the relationships between institutions well enough to speak "on behalf of the movement," while gaining a similar understanding of domains like the scientific, political, business, philanthropic, or military establishment of particular countries.
I agree that there are fewer low-hanging fruit than there used to be. On the other hand, there's more guidance on what to do and more support for how to do it (perhaps "better maps to the trees" and "better ladders"; I think I'm plagiarising someone else on the ladder bit). I'd guess that it is now overall somewhat or significantly harder for someone in the position Ben Todd was in to make something as useful as 80k, but it doesn't seem totally clear.
And in any case, something as useful as 80k is a high bar! I think something could be much less useful and still be very useful. And someone perhaps could "skill up" more than Ben Todd had, but only for like a couple years.
And I think there really are still a lot of fairly low-hanging fruit. I think some evidence for this is the continuing number of EA projects that fill niches that seem like they obviously should be filled, seem to be providing value, and are created by pretty early-career people. (I can expand on this if you want, but I think e.g. looking at lists of EA orgs already gives a sense of what I mean.)
I agree with many parts of your comment, but I continue to think only some sizeable fraction of people should be advised to "Go make yourself big and strong somewhere else, then come back here and show us what you can do", while also:
many people should try both approaches at first
many people should focus mostly on the explicitly EA paths (usually after trying both approaches and getting some evidence about comparative advantage)
many people should go make themselves big and strong and impactful somewhere else, and then just stay there, doing great stuff
I think it's perhaps a little irresponsible to give public advice that's narrower than that: narrower advice makes sense if you're talking to a specific person and you have evidence about which of those categories of people they're part of, but not for a public audience.
(I think it's also fine to give public advice like "on the margin, somewhat more people should be doing X, and some ways to tell if you specifically should be doing X are Y and Z". I think 80k's advice tends to look like that. Though even that often gets boiled down by other people to "quick, everyone should do X!", and then creates problems.)
I do think it would be good to get more clarity on what proportion of EAs are spending too much vs too little time pursuing explicitly EA-aligned roles (given their ultimate goals, fit, etc.), and more clarity on what proxies can be used to help people work out which group they're in.
Though I think some decent insights can already be gleaned from, for example, posts tagged Working at EA vs Non-EA Orgs or things linked to from those posts, and one-on-one career advice conversations.
(And I think we can also improve on the current situation, where some people are writing themselves off and others have too narrow a focus, by just making sure we always clearly acknowledge individual variation, there being many different good paths, it taking time to work out what one is a fit for, etc.)
(I also like that Slate Star Codex post you link to, and agree that it's relevant here.)
Longer reaction:
It seems to me that this post is making roughly the following claims:
Itās true that EA-aligned funders have a lot of money, that many people want to do āEA workā, and that, despite this, many of them are not getting funded
But this doesnāt mean the EA-aligned funders should lower their bar for providing funding
It also isnāt a good idea for those people to just keep pursuing āEA workā
Instead, āEA workā should be seen as something you do later, and until then you should focus on gaining knowledge, skills, networks, credentials, etc. outside EA
(Implied: And this is almost entirely for the instrumental value it provides via helping you get and do well in āEA workā later)
Is this roughly what you were trying to get across?
I think I strongly agree with some nearby claims, but I find several of those claims problematic. Hereās the nearby claims Iād agree with.
Itās true that EA-aligned funders have a lot of money, that many people want to do āEA workā, and that, despite this, many of them are not getting funded
This is same claim as aboveāI agree here
But this doesnāt necessarily mean the EA-aligned funders should lower their bar for providing funding
Iām currently agnostic about whether various EA funders should lower, raise, or maintain their current bars for funding
It also isnāt a good idea for all those people to spend a lot of time pursuing work at explicitly EA orgs or on explicitly EA projects. (And in any case, probably all of those people should spend at least some time pursuing other types of work.)
Instead, many of those people should focus on pursuing work at other types of orgs or other types of projects
In many cases, a key reason for (4) will be for the instrumental value of getting knowledge, skills, credentials, etc., which can later help the person get more explicitly EA work. Itās also nice that this costs fewer EA resources, e.g. mentorship time from EAs.
But also in many cases, a key reason for (4) will be that the non-EA work is very EA-aligned, in the sense that it itself has a substantial direct impact.
On 3: I think it can be a good idea for many people to spend some effort pursuing explicitly EA work, and for some people to focus primarily on pursuing such work, even early in their careers.
E.g., this is what I did, and I now have strong reason to believe it was a good move for me. (My evidence includes the fact I had more success in EA job applications than other job applications, and the views expressed by various people who gave feedback on my career plan. That said, I did still apply for quite a few non-EA roles, especially early on when I didnāt yet have evidence pushing in favour of me doing explicitly EA roles. And I had already worked for 2 years before starting in an āEA jobā.)
It also seems worth stating explicitly that I think that thatās basically a matter of comparative advantage, rather than how generically talented a person is. E.g., I think Iād be less good at working in government or in academia than a fair number of other people in the EA community would be, and this is part of what pushes in favour of me focusing on explicitly EA roles.
Hi Michael, thanks for your responses! Iām mainly addressing the metaphorical runner on the right in the photograph at the start of the post.
I am also agnostic about where the bar should be. But having a bar means that you have to maintain the bar in place. You donāt move the bar just because you couldnāt find a place to spend all your money.
For me, EA has been an activating and liberating force. It gives me a sense of direction, motivation to continue, and practical advice. Iāve run EA research and community development projects with Vaidehi Agarwalla, and published my own writing here and on LessWrong. These outlets, plus my pursuit of a scientific research career, have been satisfying outlets for my altruistic drive.
Not everything has been successfulābut I learned a lot along the way, and feel optimistic about the future.
Yet I see other people who seem very concerned and often disappointed at the difficulty they have in their own relationship with EA. Particularly, getting EA jobs and grants, or dealing with the feeling of āI want to save the world, but I donāt know how!ā Iām extremely optimistic that EA is and will continue to make an outsize positive impact on the world. What Iām more afraid of is that weāll generate what I call ābycatch.ā
I guess in terms of this metaphor, part of what Iām saying is that there are also some people who arenāt āin the raceā but really would do great if they joined it (maybe after a few false starts), and other people who are in the race but would be better off switching to tennis instead (and thatās great too!).
And Iām a little worried about saying something like āHey, the race is super hard! But donāt feel bad, you can go train up somewhere else for a while, and then come back!ā
Because some people could do great in the race already! (Even if several applications donāt work out or whatever; thereās a lot of variation between what different roles need, and a lot of random chance, etc.) And some of these people are erring in the direction of self-selecting out, feeling imposter syndrome, getting 4 job rejections and then thinking āwell, that proves it, Iām not right for these rolesā, etc.
Meanwhile, other people shouldnāt ātrain up and come backā, but rather go do great things elsewhere long-term! (Not necessarily leaving the community, but just not working at an explicitly EA org or with funding from EA funders.) And some of these people are erring in the direction of having their heart set of getting into an āEA jobā eventually, even if they have to train up elsewhere first.
---
Iād also be worried about messaging like āEveryone needs to get in this particular race right now! We have lots of money and lots to do and yāall need to come over here!ā And it definitely seems good to push against that. But I think we can try to find a middle ground that acknowledges there are many different paths, and different ones will be better for different people at different times, and thatās ok. (E.g., I think this post by Rob Wiblin does that nicely.)
Figuring out how to give the right advice to the right person is a hard challenge. Thatās why I framed skilling up outside EA as being a good alternative to ābanging your head against the wall indefinitely.ā I think the link I added to the bottom of this post addresses the āmany pathsā component.
The main goal of my post, though, is to talk about why thereās a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if youāve become convinced that you canāt surpass it at this stage in your journey.
It would be helpful to find a test to distinguish EAs who should keep trying from those who should exit, skill up, and return later. Probably one-on-one mentorship, coupled with data on what sorts of things EA orgs look for in an applicant, and the distribution of applicant quality, would be the way to devise such a test.
A team capable of executing a high-quality project to create such a test would (if I were an EA fund) definitely be worthy of a grant!
Also, hereās a somewhat relevant intervention idea that seems interesting to me, copied from an upcoming post in my sequence on improving the EA-aligned research pipeline (so this passage focuses on research roles, but you can easily extrapolate the ideas):
Improving the vetting of (potential) researchers, and/āor better āsharingā that vetting
For example:
Improving selection processes at EA-aligned research organisations
Increasing the number and usefulness of referrals of candidates from one selection process (e.g., for a job or a grant) to another selection process.
This already happens, but could perhaps be improved by:
Increasing how often it happens
Increasing how well-targeted the referrals are
Increasing the amount of information provided to the second selection process?
Increasing how much of the second selection process the candidate can āskipā?
Creating something like a āTriplebyte for EA researchersā, which could scalably evaluate aspiring/ājunior researchers, identify talented/āpromising ones, and then recommend them to hirers/āgrantmakers^[This idea was suggested as a possibility by a commenter on a draft of this post.]
This could resolve most of the vetting constraints if it could operate efficiently and was trusted by the relevant hirers/āgrantmakers
Triplebyteās value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:
Less biased
More predictive of success-linked technical prowess
Convenient (since companies donāt have to run the technical interviews themselves)
If thereās room for an āEA Triplebyte,ā that would suggest that EA orgs have at least one of those three problems.
So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.
Are they looking at the same sorts of skills? Are their tests any good? Are the tests time-consuming and burdensome for EA orgs? Alternatively, do many EA orgs pass up on needed hires because they donāt have the short-term capacity to evaluate them?
Then you'd need to consider what alternative tests would better measure technical research prowess, and how to show that they are more predictive of success than present technical interviews.
It would also be important to determine the scale of the problem. Eyeballing this list, there are maybe 75 EA-related organizations. How many hires do they make per month? How often does their search fail for lack of qualified candidates? How many hours do they spend on technical interviews each time? Will you be testing not just for EA-specific but for general research capacity (massively broadening your market, but also increasing the challenge of addressing all their needs)?
Finally, you'd need to roll that up into a convenient, trustworthy, and reliable package that clients are excited to use instead of their current approach.
This seems like a massive amount of work, demanding a strong team, adequate funding and prior interest by EA orgs, and long-term commitment. It also sounds like it might be really valuable if done well.
Thanks for these thoughts; I think I'll add a link to your comment from that section of my post.
Your analysis basically sounds correct to me. I would also be quite surprised if this came into existence (and was actually used by multiple orgs) in the next 10 years, and I don't think it's likely to be the highest-priority intervention for improving the EA-aligned research pipeline, though I'd be keen to at least see people flesh out and explore the idea a bit more.
FWIW, I'm guessing that this commenter on my doc meant something a little more distant from Triplebyte specifically than your comment suggests. In particular, I don't think the idea would be just to conduct technical interviews, but also other parts of the selection process. At least, that's how I interpreted the comment, and it seems better to me, given that I think it's relatively rare for EA orgs to actually have technical interviews in their selection processes. (There may often be a few questions like that, but without their being the main focus of the interview. Though I also might be misinterpreting what a technical interview is; I haven't worked in areas like engineering or IT.)
My sense is that Triplebyte focuses on "can this person think like an engineer?" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general; companies handle the idiosyncratic.
It just seems to me that Triplebyte is powered by a mature industry that's had decades of time and massive amounts of money invested in articulating its own needs and interests. Whereas I don't think EA is old or big or wealthy enough to have a sharp sense of exactly what the stable needs are.
For a sense of scale, there are almost 4 million programmers in the USA. Triplebyte launched just 5 years ago. It took millions of people working as programmers to generate adequate demand and capacity for that service to be successful.
All in all, my guess is that what we're missing is charismatic founder-types. The kind of people who can take one of the problems on our long lists of cause areas, turn it into a real plan, pull together funding and a team (of underutilized people), and make it go.
Figuring out how to teach that skill, or replace it with some other founding mechanism, would of course be great. It's necessary. Otherwise, we're kind of just cannibalizing one highly capable project to create another. Which is pretty much what we do when we try to attract strong outside talent and "convert" them to EA.
Part of the reason I haven't spent more time trying to found something right off the bat is that I thought EA could benefit more if I developed a skillset in technology. But another reason is that I just don't have the slack. I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.
Most neophytes don't have that kind of slack. That's why I especially lean on the side of "if it hurts, don't do it."
I don't have any negativity toward the encouragement to try things and be audacious. At the same time, there's a massive amount of hype and exploitative stuff in the entrepreneurship world: this "Think of the guy who wrote WinZip! He made millions of dollars, and you can do it too!" line that business gurus use to suck people into their self-help sites and YouTube channels and so on.
The EA movement had some low-hanging fruit to pick early on. It's obviously a huge win for us to have great resources like 80k, or significant organizations like OpenPhil. Some of these were founded by world-class experts (Peter Singer) and billionaires, but some (80k) were founded by some young audacious people not too far out of grad school. But those needs, it seems to me, are filled. The world's pretty rich. It's easier to address a funding shortfall or an information shortfall than to get concrete useful direct work done.
Likewise in the business world, it's easier to find money for a project and outline the general principles of how to run a good business than to actually develop and successfully market a valuable new product. There's plenty of money out there, and not a ton of obvious choices to spend it on. Silicon Valley's looking for unicorns. We're looking for unicorns too. There aren't many unicorns.
I think that the "EA establishment's" responsibility to neophytes is to tell them frankly that there's a very high bar, it's there for a reason, and for your own sake, don't hurt yourself over and over by failing to clear it. Go make yourself big and strong somewhere else, then come back here and show us what you can do. Tell people it's hard, and invite them back when they're ready for that kind of challenge.
I don't think this is true, at least not as a general rule. I think you can do both (have a safe career and pursue something entrepreneurial) if you make small, focused bets to begin with and build from there. Related discussion here.
I agree; I should have added "or a safe career/fallback option" to that.
I used to work as an interviewer for Triplebyte. Most companies using Triplebyte put Triplebyte-certified candidates through their standard technical onsite. From what I was able to gather, the value prop for companies working with Triplebyte is mostly about 1. expanding their sourcing pipeline to include more quality candidates and 2. cutting down on the amount of time their engineers spend administering screens to candidates who aren't very good.
Some of your comments make it sound like a Triplebyte-like service for EA has to be a lot better than what EA orgs are currently doing to screen candidates. Personally, I suspect there's a lot of labor-saving value to capture if it is merely just as good as (or even a bit worse than) current screens. It might also help organizations consider a broader range of people.
Thanks for that context, John. Given that value prop, companies might use a TB-like service under either of two conditions:
They are bottlenecked by having too few applicants. In this case, they have excess interviewing capacity, or more jobs than applicants. They hope that by investigating more applicants through TB, they can find someone outstanding.
Their internal headhunting process has an inferior quality distribution relative to the candidates they get through TB. In this case, they believe that TB can provide them with a better class of applicants than their own job search mechanisms can identify. In effect, they are outsourcing their headhunting for a particular job category.
Given that EA orgs seem primarily to lack specific forms of domain expertise, as well as well-defined project ideas/teams, what would an EA Triplebyte have to achieve?
They'd need to be able to interface with EA orgs and identify the specific forms of domain expertise that are required. Then they'd need to be able to go out and recruit those experts, who might never have heard of EA, and get them interested in the job. They'd be an interface to the expertise these orgs require. Push a button, get an expert.
That seems plausible. Triplebyte evokes the image of a huge recruiting service meant to fill cubicles with basically-competent programmers who are pre-screened for the in-house technical interview. Not to find unusually specific skills for particular kinds of specialist jobs, which it seems is what EA requires at this time.
That sort of headhunting job could be done by just one person. Their job would be to do a whole lot of cold-calling, getting meetings with important people, doing the legwork that EA orgs don't have time for. Need five minutes of a Senator's time? Looking to pull together a conference of immunologists to discuss biosafety issues from an EA perspective? That's the sort of thing such an org would strive to make more convenient for EA orgs.
As they gained experience, they would also be able to help EA orgs anticipate what sort of projects the domain experts they'd depend upon would be likely to spring for. I imagine that some EA orgs must periodically come up with, say, ideas that would require some significant scientific input. Some of those ideas might be more attractive to the scientists than others. If an org like this existed, it might be able to tell those EA orgs which ones the scientists are likely to spring for.
That does seem like the kind of job that could productively exist at the intersection of EA orgs. They'd need to understand EA concepts and the relationships between institutions well enough to speak "on behalf of the movement," while gaining a similar understanding of domains like the scientific, political, business, philanthropic, or military establishment of particular countries.
An EA diplomat.
I agree that there are fewer low-hanging fruit than there used to be. On the other hand, there's more guidance on what to do and more support for how to do it (perhaps "better maps to the trees" and "better ladders"; I think I'm plagiarising someone else on the ladder bit). I'd guess that it is now overall somewhat or significantly harder for someone in the position Ben Todd was in to make something as useful as 80k, but it doesn't seem totally clear.
And in any case, something as useful as 80k is a high bar! I think something could be much less useful than that and still be very useful. And someone could perhaps "skill up" more than Ben Todd had, but only for a couple of years or so.
And I think there really are still a lot of fairly low-hanging fruit. I think some evidence for this is the continuing stream of EA projects that fill niches that seem like they obviously should be filled, seem to be providing value, and are created by pretty early-career people. (I can expand on this if you want, but I think e.g. looking at lists of EA orgs already gives a sense of what I mean.)
I agree with many parts of your comment, but I continue to think only some sizeable fraction of people should be advised to "Go make yourself big and strong somewhere else, then come back here and show us what you can do", while also thinking that:
many people should try both approaches at first
many people should focus mostly on the explicitly EA paths (usually after trying both approaches and getting some evidence about comparative advantage)
many people should go make themselves big and strong and impactful somewhere else, and then just stay there, doing great stuff
I think it's perhaps a little irresponsible to give public advice that's narrower than that: narrower advice makes sense if you're talking to a specific person and you have evidence about which of those categories of people they're part of, but not for a public audience.
(I think it's also fine to give public advice like "on the margin, somewhat more people should be doing X, and some ways to tell if you specifically should be doing X are Y and Z". I think 80k's advice tends to look like that. Though even that often gets boiled down by other people to "quick, everyone should do X!", and then creates problems.)
I do think it would be good to get more clarity on what proportion of EAs are spending too much vs too little time pursuing explicitly EA-aligned roles (given their ultimate goals, fit, etc.), and more clarity on what proxies can be used to help people work out which group they're in.
Though I think some decent insights can already be gleaned from, for example, posts tagged Working at EA vs Non-EA Orgs (or things linked to from those posts) and one-on-one career advice conversations.
(And I think we can also improve on the current situation, where some people are writing themselves off and others have too narrow a focus, by just making sure we always clearly acknowledge individual variation, there being many different good paths, it taking time to work out what one is a fit for, etc.)
(I also like that Slate Star Codex post you link to, and agree that it's relevant here.)