This is a good thought! I actually went through a month or two of being pretty excited about doing something like this early last year. Unfortunately I think there are quite a few issues around how well the data we have from advising represents what paths EAs in general are aiming for, such that we (80,000 Hours) are not the natural home for this project. We discussed including a question on this in the EA survey with Rethink last year, though I understand they ran out of time/space for it.
I think there’s an argument that we should start collecting/publicising whatever (de-identified) data we can get anyway, because any additional information here is useful and it’s not that hard for 80,000 Hours to gather. The reason this feels less compelling to me is that such data would only answer a small part of the question we’re ultimately interested in.
We want to know the expected impact of a marginal person going to work in a given area.
To answer that, we’d need something like:
- The number of EAs aiming at a given area, weighted by dedication, seniority, and likelihood of success.
- The same data for people who are not EAs but are aiming to make progress on the same problem. In some of our priority paths, EAs are a small proportion of the relevant people.
- An estimate of the extent to which different paths have diminishing returns and complementarity. (That linked post might be worth reading for more of our thoughts on coordinating as a community.)
We’d then probably also want something around timing: how close are the people currently aiming at this path to having an impact, how long does it take someone without relevant experience to start having an impact, and how much do we want talent in the area now versus later?
Without that extra analysis, I wouldn’t really know how to interpret the results, and we’ve found that releasing substandard data can get people on the wrong track. Doing this analysis well would be pretty great, but it’s also a big project with a lot of tricky judgement calls, so it doesn’t seem to be at the top of our priority list.
What should be done in the meantime? I think this piece is currently the best guide we have on how to systematically work through your career decisions. Many of the factors you mentioned are considered (although not precisely quantified) when we recommend priority paths, because we try to account for neglectedness (both current neglectedness and our guess for the next few years). For example, we think AI policy and AI technical safety could both absorb a lot more people before hitting steep diminishing returns, so we’re happy to recommend that people invest in the relevant career capital. Even if lots of people do so, we expect this investment to still pay off.
“we’ve found that releasing substandard data can get people on the wrong track”
I’ve seen indications and arguments suggesting this is true when 80,000 Hours releases data or statements it doesn’t want people to take too seriously. Do you (or does anyone else) have thoughts on whether anyone releasing “substandard” (but somewhat relevant and accurate) data on a topic tends to be worse than there being no explicit data on that topic at all?
Basically, I’m tentatively inclined to think that some explicit data is often better than no explicit data, as long as it’s properly caveated, because people can update their beliefs by only the appropriate amount. (Though that’s definitely not fully or always true; see e.g. here.) But 80k is very prestigious and trusted by much of the EA community, so I can see why people might take statements or data from 80k too seriously, even if 80k tells them not to.
So maybe it’d be net positive for something like what the OP requests to be done by the EA Survey or some random EA, but net negative if 80k did it?