This is a good thought! I actually went through a month or two of being pretty excited about doing something like this early last year. Unfortunately I think there are quite a few issues around how well the data we have from advising represents what paths EAs in general are aiming for, such that we (80,000 Hours) are not the natural home for this project. We discussed including a question on this in the EA survey with Rethink last year, though I understand they ran out of time/space for it.
I think there’s an argument that we should start collecting/publicising whatever (de-identified) data we can get anyway, because any additional info on this is useful and it’s not that hard for 80,000 Hours to get. I think the reason that doing this feels less compelling to me is that this information would only answer a small part of the question we’re ultimately interested in.
We want to know the expected impact of a marginal person going to work in a given area.
To answer that, we’d need something like:
The number of EAs aiming at a given area, weighted by dedication, seniority and likelihood of success.
The same data for people who are not EAs but are aiming to make progress on the same problem. In some of our priority paths, EAs are a small portion of the relevant people.
An estimate of the extent to which different paths have diminishing returns and complementarity. (That linked post might be worth reading for more of our thoughts on coordinating as a community.)
We would then probably want something around timing: how close the people currently aiming at this path are to making an impact, how long it takes someone without any experience to get there, how much we want talent there now vs. later, etc.
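As a very rough sketch of how those pieces might fit together (illustrative only, with made-up multiplicative weights and a generic diminishing-returns function, not a model we actually use):

$$S_{\text{path}} = \sum_{i \in \text{EAs}} d_i\, s_i\, p_i \;+\; \sum_{j \in \text{non-EAs}} d_j\, s_j\, p_j, \qquad \text{marginal impact} \approx V'(S_{\text{path}}) \cdot \delta(t)$$

where $d$, $s$ and $p$ are dedication, seniority and likelihood of success, $V$ is a concave function of the total effective labour going into the problem (capturing diminishing returns and complementarity with other paths), and $\delta(t)$ discounts for how long it takes people in the pipeline to start having an impact relative to when the talent is most needed.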
I think without doing that extra analysis, I wouldn’t really know how to interpret the results and we’ve found that releasing substandard data can get people on the wrong track. I think that doing this analysis well would be pretty great, but it’s also a big project with a lot of tricky judgement calls, so it doesn’t seem at the top of our priority list.
What should be done in the meantime? I think this piece is currently the best guide we have on how to systematically work through your career decisions. Many of the factors you mentioned are taken into account (although not precisely quantified) when we recommend priority paths, because we try to consider neglectedness both now and, as best we can guess, over the next few years. For example, we think AI policy and AI technical safety could both absorb a lot more people before hitting large diminishing returns, so we're happy to recommend that people invest in the relevant career capital. Even if lots of people do so, we expect this investment to still pay off.
I’ve seen indications and arguments that suggest this is true when 80,000 Hours releases data or statements they don’t want people to take too seriously. Do you (or does anyone else) have thoughts on whether it’s the case that anyone releasing “substandard” (but somewhat relevant and accurate) data on a topic will tend to be worse than there being no explicit data on a topic?
Basically, I'm tentatively inclined to think that some explicit data is often better than no explicit data, as long as it's properly caveated, because people can update their beliefs by just the appropriate amount. (Though that's definitely not fully or always true; see e.g. here.) But then 80k is very prestigious and trusted by much of the EA community, so I can see why people might take statements or data from 80k too seriously, even if 80k tells them not to.
So maybe it’d be net positive for something like what the OP requests to be done by the EA Survey or some random EA, but net negative if 80k did it?
I agree we have a coordination problem. It might be easier to address this through the Annual EA Census than through 80,000 Hours. It would also be worth emphasizing future needs more strongly.
I had some thoughts on how to use the survey in this comment.
I don't think this idea was mine originally, but it would go a long way just to have two pie charts: the current distribution of careers in EA, and the optimal distribution.
I actually don’t think that would help a ton, because 80K already prioritizes careers based on their perceived delta between supply and demand. The coordination problem comes because it can take years to generate additional supply, and 80K has only limited visibility into that supply as it’s being generated.
I think there is enough difficulty in achieving specialization that you are better off ignoring coordination concerns here in favor of choosing based on personal inclination. It's hard to put in all the time it takes to become an expert in something, and it's even harder when you don't love that something for its own sake. My own suspicion is that without that love you will never reach the highest level of expertise, so it's better to look for the confluence of what you most love and what is most useful than to worry about coordinating over usefulness. You and everyone else are not sufficiently interchangeable when it comes to developing the specialization needed to be helpful to EA causes.
I’d agree with the idea people should take personal fit very seriously, with passion/motivation for a career path being a key part of that. And I’d agree with your rationale for that.
But I also think that many people could become really, genuinely fired up about a wider range of career paths than they might currently think (if they haven't yet tried or thought about those career paths). And I also think that many people could be similarly good fits for, or similarly passionate about, multiple career paths. For these people, knowing which career path will have the greatest need for more people like them in a few years can be very useful, both for shortlisting the paths to test their ability to become passionate about and as a tie-breaker between paths they've already shortlisted based on passions/motivations/fit.
For example, I'm currently quite passionate about research, but have reason to believe I could become quite passionate about operations-type roles, about roles at the intersection of those two paths (like research management), and maybe about other paths like communications or non-profit entrepreneurship. So which of those roles (rather than which roles in general) will be the most marginally useful in a few years' time seems quite relevant for my career planning.
(I think this is probably more like a different emphasis to your comment, rather than a starkly conflicting view.)
Just wanted to mention that this problem is orthogonal to the related problem of generating enough work to do in the first place, and before you start thinking about how to cut up the pie better, you might want to consider making the pie bigger instead.
Unless making the pie bigger is less neglected. I guess this problem can be applied to itself :)