There’s a thing in EA where encouraging someone to apply for a job or grant gets coded as “supportive”, maybe even a very tiny gift. But that’s only true when [chance of getting job/grant] x [value of job/grant over next best alternative] > [cost of applying].
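The inequality above can be written as a tiny expected-value check. This is only an illustrative sketch; the function name and all the numbers are invented, not real estimates:

```python
# Sketch of the decision rule from the post: encouraging an application is
# only a "gift" when expected benefit exceeds the cost of applying.
# All figures below are made up for illustration.

def applying_is_worth_it(p_success, value_over_alternative, cost_of_applying):
    """Return True when E[benefit] = p * value exceeds the cost of applying."""
    return p_success * value_over_alternative > cost_of_applying

# Example: a 5% shot at a grant worth $10k over the next best option,
# but ~20 hours of founder time valued at $50/hour spent applying.
print(applying_is_worth_it(0.05, 10_000, 20 * 50))  # 500 > 1000, so False
```

Note that encouragement which raises the applicant pool (as in the grant story above) lowers `p_success` for everyone, which can flip the inequality after the encouragement is given.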
One really clear case was when I was encouraged to apply for a grant my project wasn’t a natural fit for, because “it’s quick and there are few applicants”. This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed (I want to say 5x, but my memory is shaky), presumably because I wasn’t the only person the grantmaker encouraged. I ended up wasting several hours of my and my co-founders’ time before dropping out, because the project really was not a good fit for the grant.
[if the grantmaker is reading this and recognizes themselves: I’m not mad at you personally].
I’ve been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone “yeah you’re probably not good enough”.
A lot of EA job postings encourage people to apply even if they don’t think they’re a good fit. I expect this is done partially because orgs genuinely don’t want to lose great applicants who underestimate themselves, and partially because it’s an extremely cheap way to feel anti-elitist.
I don’t know what the solution is here. Many people are miscalibrated on their value or their competition; all else being equal, you do want to catch those people. But casting a wider net entails more bycatch.
It’s hard to accuse an org of being mean to someone whom they encouraged to apply for a job or grant. But I think that should be in the space of possibilities, and we should put more emphasis on invitations to apply for jobs/grants/etc. being clear, and less on being welcoming. This avoids wasting the time of people who were predictably never going to get the job.
I think this falls into a broader class of behaviors I’d call aspirational inclusiveness.
I do think shifting the relative weight from welcoming to clear is good. But I’d frame it as a “yes and” kind of shift. The encouragement message should be followed up with a dose of hard numbers.
Something I’ve appreciated from a few applications is the hiring manager’s initial guess for how the process will turn out. Something like “Stage 1 has X people and our very tentative guess is that future stages will go like this”.
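That kind of stage-by-stage guess can also be turned into a rough overall number for the applicant. A hypothetical sketch, with invented stage names and pass rates:

```python
# Hypothetical hiring-funnel sketch: multiply per-stage pass rates to get
# a rough overall chance for someone entering at stage 1.
# The stages and rates below are made up for illustration.

stages = {
    "screen": 0.30,     # 30% of applicants pass the initial screen
    "work_test": 0.40,
    "interview": 0.50,
    "final": 0.50,
}

overall = 1.0
for name, pass_rate in stages.items():
    overall *= pass_rate

print(f"Rough overall chance for a stage-1 applicant: {overall:.1%}")  # 3.0%
```

Even very tentative per-stage numbers like these tell an applicant far more than “we encourage you to apply”.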
Scenarios can also substitute in areas where numbers may be misleading or hard to obtain. I’ve gotten this from mentors before: here’s what could happen if your new job goes great. Here’s what could happen if your new job goes badly. Here’s the stuff you can control, and here’s the stuff you can’t.
Something I’ve tried to practice in my advice is giving some ballpark number and reference class. I tell someone they should consider skilling up in a hard area or pursuing a competitive field, then I tell them I expect success from <5% of the people I give that advice to, and then say they may still want to do it for certain reasons.
Yes, it’s all very noisy. But numbers seem far, far better than expecting applicants to read between the lines of what a heartwarming message is supposed to mean, especially early-career folks who would understandably assign a high probability of success to it.
Oh I like this phrase a lot
Yeah this sounds right.
One thing is just that discouragement is culturally quite hard and there are strong disincentives to do so; e.g. I think I definitely get more flak for telling people they shouldn’t do X than for telling them they should (including a recent incident which was rather personally costly). And I think I’m much more capable of diplomatic language than the median person in such situations; some of my critical or discouraging comments on this forum are popular.
I also know at least 2 different people who were told (probably wrongly) many years ago that they can’t be good researchers, and they still bring it up as recently as this year. Presumably people falsely told they can be good researchers (or correctly told that they cannot) are less likely to e.g. show up at EA Global. So it’s easier for people in positions of relative power or prestige to see the positive consequences of encouragement, and the negative consequences of discouragement, than the reverse.
Sometimes when people ask me about their chances, I try to give them off-the-cuff numerical probabilities. Usually the people I’m talking to appreciate it but sometimes people around them (or around me) get mad at me.
(Tbf, I have never tried scoring these fast guesses, so I have no idea how accurate they are).
One way my perspective has changed on this during the last few years is that I now advise others not to give much weight to a single point of feedback. Especially when people tell me only one or two people have discouraged them from be(com)ing a researcher, I tell them not to stop trying in spite of that. That’s even when the person giving the discouraging feedback is in a position of relative power or prestige.
The last year seems to have proven that the power or prestige someone has gained in EA is a poor proxy for how much weight their judgment should be given on any single EA-related topic. If Will MacAskill and many of his closest peers are doubting how they’ve conceived of EA for years in the wake of the FTX collapse, I expect most individual effective altruists confident enough to judge another’s entire career trajectory are themselves likely overconfident.
Another example is AI safety. I’ve talked to dozens of aspiring AI safety researchers who’ve felt very discouraged by an illusory consensus thrust upon them: that their work was essentially worthless because it didn’t superficially resemble the work being done by the Machine Intelligence Research Institute or whatever other approach was in vogue at the time. For years, I suspected that was bullshit.
Some of the brightest effective altruists I’ve met were being inundated by personal criticism harsher than any that even Eliezer Yudkowsky would give. I told those depressed, novice AIS researchers to ignore those dozens of jerks who concluded that the way to give constructive criticism, like they presumed Eliezer would, was to emulate a sociopath. These people were just playing a game of ‘follow the leader’ that not even the “leaders” would condone. I distrusted their hot takes, based on clout and vibes, about who was competent and who wasn’t.
Meanwhile, over the last year or two, more and more of the AIS field, including some of its most reputed luminaries, have come out of the woodwork to say, essentially, “lol, turns out we didn’t know what we were doing with alignment the whole time, we’re definitely probably all gonna die soon, unless we can convince Sam Altman to hit the off switch at OpenAI.” I feel vindicated in my skepticism of the quality of the judgment of many of our peers.
Thanks for this post, as I’ve been trying to find a high-impact job that’s a good personal fit for 9 months now. I have noticed that EA organizations use what appears to be a cookie-cutter recruitment process with remarkable similarities across organizations and cause areas. This process is also radically different from what non-EA nonprofit organizations use for recruitment. Presumably EA organizations adopted this process because there’s evidence behind its effectiveness but I’d love to see what that evidence actually is. I suspect it privileges younger, (childless?) applicants with time to burn, but I don’t have data to back up this suspicion other than viewing the staff pages of EA orgs.
Can you say more about cookie-cutter recruitment? I don’t have a good sense of what you mean here.
I think solving this is tricky. I want hiring to be efficient, but most ways hiring orgs can get information take time, and that’s always going to be easier for people with more free time. I think EA has an admirable norm of paying for trials and deserves a lot of credit for that.
One possible solution is to have applicants create a prediction market on their chance of getting a job/grant, before applying—this helps grant applicants get a sense of how good their prospects are. (example 1, 2) Of course, there’s a cost to setting up a market and making the relevant info legible to traders, but it should be a lot less than the cost of writing the actual application.
Another solution I’ve been entertaining is to have grantmakers/companies screen applications in rounds, or collaboratively, such that the first phase of application is very, very quick (e.g. “drop in your LinkedIn profile and 2 sentences about why you’re a good fit”).
I’d be interested in seeing some organizations try out the very, very quick method. Heck, I’d be willing to help set it up and run a trial. My rough/vague perception is that a lot of the information in a job application is superfluous.
I also remember Ben West posting some data about how a variety of “how EA is this person” metrics held very little predictive value in his own hiring rounds.