Announcing: On Underutilization
There has been a lot of discussion about the issue of underutilization, or “bycatch,” within Effective Altruism and AI safety in particular. This refers to the problem of a large number of people, many of them highly interested in a cause area, not being able to do effective work on the timescales they had hoped for. Among the most popular recent articles on the subject was “Don’t Be Bycatch,” though it seems to me that the articles on the underutilization problem largely read as a series of commentaries on “After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation.” That said, there may well be earlier posts on the topic, and I would welcome pointers to any that people can find.
Some other articles that have informed my thinking on this issue, and which have been widely read within Effective Altruism, are:
- Have You Tried Hiring People?
- AI Safety’s Talent Pipeline is Over-optimised for Researchers
- What to do with people?
However, there has yet to be an organized series of posts examining the underutilization issue and what to do about it, and existing discussions seem to understate the range of problems it causes. With the “On Underutilization” sequence, I aim to rectify this.
In my first few posts in this sequence, I will go through these problems in turn, making the case that, although the underutilization problem is already recognized within EA, it is a far greater problem than people are willing to admit. In later posts, I will go through possible solutions in turn, addressing their advantages and disadvantages, as well as whether, and why, they have been neglected.
I would like to finish this announcement with a disclaimer. While this sequence will disproportionately cover AI safety, as that is the field on which I have done the most research, many of its conclusions seem like they would apply more broadly to Effective Altruism at large. Notably, non-longtermist projects are more often constrained by available funding than longtermist ones, so the issues I intend to raise about the relative prestige of direct work and earning to give are even more relevant in those areas.