I suppose I’m not directly answering your question, but I think it might be pretty hard to answer well if you want to account for replaceability properly: many people can end up in different positions depending on whether or not you take a job at an EA org, and it wouldn’t be easy to track them all.
If one hasn’t taken into account replaceability, or the displacement chain, how do you know it is better to work at EA orgs rather than to ETG (for X dollars)?

Milan Griffes reports, with a replaceability of 10% (a guess) and attributing 60% (a guess) of the contribution to the donor, that his impact was $244k. If you remove the replaceability adjustment, it is $2.4m.

And the 80khours article you cited on replaceability seems to be way off in its suggestions. 80khours suggest that “often if you turn down a skilled job, the role simply won’t be filled at all because there’s no suitable substitute available”, whilst the only evidence I can find says completely otherwise: Carrick’s take on AI S&P, Peter representing RC, Open Phil’s hiring round, Jon Behar’s comments, EAF’s hiring round.

How are people so confident in saying that working at an EA org is better than doing ETG, especially considering how “full” the talent pool is (Carrick’s take on AI S&P, Peter representing RC, Open Phil’s hiring round, Jon Behar’s comments)?

What is the evidence?
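(A minimal sketch of the arithmetic in the estimate just quoted, assuming the two factors are applied multiplicatively; the 10% and 60% figures are Milan’s guesses, and the raw pre-adjustment value is derived here, not stated in the original.)

```python
# Unpacking the quoted estimate, assuming the replaceability and
# donor-attribution factors multiply a raw counterfactual value.
reported_impact = 244_000  # $ impact after both adjustments
replaceability = 0.10      # replaceability discount (his guess)
attribution = 0.60         # share of impact credited to the donor (his guess)

# Removing only the replaceability discount, as in the comment:
print(reported_impact / replaceability)  # 2,440,000 -> the "~$2.4m" figure

# Implied raw value before either adjustment (derived, not in the text):
print(reported_impact / (replaceability * attribution))  # ~4,066,667
```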
I doubt anyone has tried to. See this and my recent post.
As for your post, I saw it as well, and gained from the “displacement chain” terminology and calculation. It was very difficult for me to follow the discussion on differences in priorities. In any case, I think we need at least one real example to test a claim.
New charities will sometimes be started, creating more EA org positions, and they wouldn’t get far if they didn’t have people who were the right fit for them. Rethink Priorities and Charity Entrepreneurship are relatively new (although very funding-constrained, which might be the bottleneck both for their hiring and for starting new charities like them). Charity Entrepreneurship is starting many more EA orgs with their incubation program (incubated charities here). Maybe worth reaching out to them to see what their applicant pool is like?
I think there are also specific talent bottlenecks, see [1], [2], [3]. Actually, this last one comes from Animal Advocacy Careers, a charity incubated by Charity Entrepreneurship to meet the effective animal advocacy talent bottlenecks.
Btw, I think you have the wrong link for Carrick’s.
Thanks for mentioning AAC! Not sure about Rethink Priorities, but a minor correction: the last time I spoke to CE about this, they didn’t see funding as a substantial constraint for them. They felt more constrained by high-quality applicants to their programme.
Edit: CE are now looking for funding, so are at least partly funding constrained!
Charity Entrepreneurship is starting many more EA orgs with
their incubation program (incubated charities here). Maybe worth
reaching out to them to see what their applicant pool is like?
Good idea. I will contact them as well to see what the talent pool looks like. If they still need “high-quality people”, getting better in that direction seems like a good opportunity.
I think there are also specific talent bottlenecks, see [1], [2],
[3].
Michael, I have written an article here: http://agent18.github.io/is-ea-bottlenecked-2.html in my unfinished blogspace about [1] and [2]. I really don’t find evidence for their claims of bottlenecks, or I don’t understand what they are trying to say. For example, GR in GPR is recommended by 80khours in their high-impact-careers post, in the surveys, and in the separate problem profiles, etc. Yet during Open Phil’s hiring round there were literally hundreds of “good resumes” and “many candidates worthy of positions”, but OP could not absorb all of them. Peter Hurford can also be seen talking about the lack of a talent constraint in GR (I think).
Actually, this last one comes from Animal Advocacy Careers, a
charity incubated by Charity Entrepreneurship to meet the effective
animal advocacy talent bottlenecks.
This I really need to look into. Thanks for that.
Btw, I think you have the wrong link for Carrick’s.
Thanks. Corrected it. Sorry about that.
Bottom line
I don’t know how people evaluate which career to choose. Many people redirect me to posts from 80khours, but I find only claims there. When I ask organizations about the value generated and replaceability, I don’t get any info from them. I think people guess at best, falling prey to vague words like “Career Capital”, or possibly primarily focusing on what they are good at, or I don’t know.
Anyways… It seems like a dead end to think that I can actually
evaluate what I should be doing. Your thoughts?
How did you end up choosing to go to DarwinAI? Why not something else like GR in GPR or FAANG?
How did you end up choosing to go to DarwinAI? Why not something else like GR in GPR or FAANG?
I’d say it was kind of decided for me, since those other options were ruled out at the time. I applied to internships at some EA orgs, but didn’t have any luck. Then, I did a Master’s in computational math. Then, I started working part-time at a machine learning lab at the university while I looked for full-time work. I applied to AI internships at the big tech companies, but didn’t have any luck. I got my job at DarwinAI because I was working for two of its cofounders at the lab. I had no industry experience before that.
I’m currently trying to transition to effective animal advocacy research, reading more research, offering to review research before publication, applying to internships and positions at the orgs, and studying more economics/stats, one of the bottlenecks discussed here, with quantitative finance as a second choice, and back to deep learning in the industry as my third. I feel that EA orgs have been a bit weak on causal inference (from observational data), which falls under econometrics/stats.
I’m currently trying to transition to effective animal advocacy
research, reading more research, offering to review research before
publication, applying to internships and positions at the orgs, and
studying more economics/stats, one of the bottlenecks discussed here,
Your options sound solid. I guess you’re 28 and can thus still get into relatively different quantitative finance.
But how did you decide that it is best for you to dedicate your time to AAR? You could be working at GiveWell/Open Phil as a GR, or at OpenAI/MIRI in AI safety research (especially with your CS and math background); you could also be doing ETG at a FAANG company. Also, 80khours nowhere seems to suggest that AAR, of all things, is a “high-impact career”, nor does the EA survey say anything about it. In fact, the survey talks about GR and AI safety.
And did you account for replaceability and other factors? If so, how did you arrive at these numbers?
I feel that EA orgs have been a bit weak on causal inference (from
observational data), which falls under econometrics/stats.
So you hope to apply causal inference in AAR?
Lastly, I want to thank you from the heart for taking your time and effort to respond to me. Appreciate it, brother.
I guess you’re 28 and can thus still get into relatively different quantitative finance.
26, but 2 years isn’t a big difference. :)
But how did you decide that it is best for you to dedicate your time to AAR? You could be working at GiveWell/Open Phil as a GR, or at OpenAI/MIRI in AI safety research (especially with your CS and math background); you could also be doing ETG at a FAANG company. Also, 80khours nowhere seems to suggest that AAR, of all things, is a “high-impact career”, nor does the EA survey say anything about it. In fact, the survey talks about GR and AI safety.
So I’m choosing AAR over other causes due to my cause prioritization, which depends on both my ethical views (I’m suffering-focused) and empirical views (I have reservations about longtermist interventions, since there’s little feedback, and I don’t feel confident in any of their predictions and hence cost-effectiveness estimates). 80,000 Hours is very much pushing longtermism now. I’m more open to being convinced about suffering risks, specifically.
I’m leaning against a job consisting almost entirely of programming, since I came to not enjoy it that much, so I don’t think I’d be motivated to work hard enough to make it to $200K/year in income. I like reading and doing research, though, so AI research and quantitative finance might still be good options, even if they involve programming.
And did you account for replaceability and other factors? If so, how did you arrive at these numbers?
(...)
So you hope to apply causal inference in AAR?
I didn’t do any explicit calculations. The considerations I wrote about replaceability in my post and the discussion here have had me thinking that I should take ETG to donate to animal charities more seriously.
I think econometrics skills are not very replaceable in animal advocacy research right now, and this work could affect the grants made by OPP and the animal welfare funds, as well as ACE’s recommendations.
I’ll try a rough comparison now. I think there’s more than $20 million going around each year in effective animal advocacy, largely from OPP. I could donate ~1% ($200K) of that in ETG if I’m lucky. On the other hand, if I do research for which I’d be hard to replace and that leads to a different prioritization of interventions, I could counterfactually shift a good chunk of that money to (possibly far) more cost-effective opportunities. I’d guess that corporate campaigns alone are taking >20% of EAA’s resources; good intervention research (on corporate campaigns or other interventions) could increase or decrease that considerably. Currently, only a few people at Humane League Labs and a few (other) economists (basically studying the effects of reforms in California) have done or are doing this kind of econometrics and causal inference research. Maybe around the equivalent of 4 people are working on this full-time now. So my guess is that another person working on this could counterfactually shift >1% of EAA funding in expectation to opportunities twice as cost-effective. This seems to beat ETG donating $200k/year.
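(A minimal sketch of the comparison above, using the comment’s own rough numbers; valuing the research option as the shifted amount times the cost-effectiveness gain is my reading of the argument, not an explicit formula from the text.)

```python
# Back-of-envelope comparison of ETG vs. hard-to-replace research,
# using the guesses from the comment above.
total_eaa_funding = 20_000_000  # >$20M/year moving in effective animal advocacy

# Option 1: ETG, donating ~1% of that per year if lucky,
# valued at baseline cost-effectiveness.
etg_value = 200_000  # $/year

# Option 2: research that shifts >1% of EAA funding to opportunities
# twice as cost-effective (both figures are lower bounds in the comment).
shifted = 0.01 * total_eaa_funding             # $200K/year shifted
multiplier = 2.0                               # "twice as cost-effective"
research_value = shifted * (multiplier - 1.0)  # extra value created by the shift

print(f"ETG:      ${etg_value:,.0f}/year")
print(f"Research: ${research_value:,.0f}/year at the lower bounds")
```

At these lower bounds the two options come out roughly equal; the research option pulls ahead to the extent that the shifted share exceeds 1% or the opportunities are more than twice as cost-effective, while the $200K ETG figure is itself an optimistic (“if I’m lucky”) estimate.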
Lastly, I want to thank you from the heart for taking your time and effort to respond to me. Appreciate it, brother.
Happy to help! This was useful for me, too. :)
(Oh, besides economics, I’m also considering grad school in philosophy, perhaps for research on population ethics, suffering-focused views and consciousness.)