For what it’s worth, I sympathise with the need to make some hard prioritisation decisions; that’s what EA is about. Nonetheless, the choice to focus on top universities seems like an insufficiently examined heuristic. After all, the following claim...
top universities are the places with the highest concentrations of people who ultimately have a very large influence on the world.
… is definitely false unless the only way we categorise people is ‘the university they go to’. We can subdivide people into any categories we have data on, and while ‘university’ provides a convenient starting point for a young impact-focused organisation, a now-maturing one should aspire to do better.
For a simple example, staying focused on universities: most university departments receive their own individual rankings, which are also publicly available (I believe a university’s final score is essentially a weighted average of its departmental scores, possibly with some extra factors thrown in).
I’m partly motivated to write this comment because I know someone who chose the university with the stronger department for their subject, and has recently found out that, because that university has the lower overall ranking, they’re formally downgraded by both immigration departments and EA orgs.
So it seems like EA orgs could do better simply by running a one-off project that pooled departmental rankings and prioritised based on those (a sketch of what that might look like is below). It would probably be a reasonably substantial (but low-skill) one-off cost with a slight ongoing maintenance cost, but if ‘finding the best future talent’ is so important to EA orgs, it seems worth putting some ongoing effort into doing it better. [ETA: apparently there are some premade rankings that do this!]
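To make the shape of that project concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the ranking providers, the universities, and the scores are stand-ins, and the pooling rule (a plain average across providers) is just one defensible choice among many.

```python
# Hypothetical departmental scores (0-100) from two ranking providers.
# In a real version these tables would be loaded from published
# league-table data rather than hard-coded.
rankings_a = {("Uni X", "Computer Science"): 91, ("Uni Y", "Computer Science"): 78}
rankings_b = {("Uni X", "Computer Science"): 88, ("Uni Y", "Computer Science"): 83}

def pooled_score(university: str, subject: str) -> float:
    """Average a department's score across whichever providers rank it."""
    scores = [
        table[(university, subject)]
        for table in (rankings_a, rankings_b)
        if (university, subject) in table
    ]
    if not scores:
        raise KeyError(f"no departmental ranking for {university} / {subject}")
    return sum(scores) / len(scores)

# Prioritise by the candidate's department, not the university's overall rank.
print(pooled_score("Uni X", "Computer Science"))  # 89.5
```

The point of the sketch is just that the lookup is per department: a candidate is scored by the department they actually studied in, rather than inheriting their university’s overall position.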
This is only one trivial suggestion; I suspect there are many more sources of public data that could be taken into account to make a fairer and (which IMO is equivalent) more accurate prioritisation system. Since, as the OP points out, selecting for the top 100 universities is a form of strong de facto prejudice against people from countries that don’t host one, it might also be worth adding some multiplier for people at the top departments in their country, and so on. There might also be quantifiable considerations that have nothing to do with university choice.
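As a toy illustration of that multiplier idea (the boost factor and the cutoff below are made up; choosing them well is precisely the kind of weighting decision that ought to be made transparently):

```python
def adjusted_score(dept_score: float, national_rank: int,
                   boost: float = 1.2, cutoff: int = 3) -> float:
    """Boost candidates from a top-`cutoff` department within their own
    country, so countries without a global top-100 university aren't
    shut out entirely. Parameter values are illustrative, not
    recommendations."""
    return dept_score * boost if national_rank <= cutoff else dept_score

print(adjusted_score(70.0, national_rank=1))  # 84.0
print(adjusted_score(70.0, national_rank=8))  # 70.0
```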
Having said that, if CEA or any other org does do something like this, I hope they’ll
a) have the courage to make unpopular weighting decisions when the data clearly justifies them, and
b) do it publicly, open-sourcing their weighted model, so that anyone interested can see that the data does clearly justify it, hopefully avoiding another PELTIVgate.
I’m curious in which direction the disagree voters are disagreeing: do they think quantifying people like this is bad full stop, or do they dispute that, if you’re going to do it, this is a more effective way?
Perhaps that it doesn’t seem to relate clearly to the context of Jessica’s comment, which I understand to be about prioritizing support for EA student groups at “top” universities. That decision seemingly has to be made at the university level, and, unless the university is particularly strong in a priority area, overall rating is probably the best measure.
Whether field-specific rating is a better measure in other contexts, and whether it is reasonably practical to use it in those contexts, is likely a case-by-case determination. I’d also note that in US undergraduate programs, admission is generally to the university as a whole, allowing the student to select any major. I suspect your position is stronger where admission is to a specific program or department, so that the specific program’s reputation is relevant to the applicant characteristics needed to get in.