I was going to write this post, so I definitely agree :)
In general, EA recommendations produce suboptimal herding behavior. This is because individuals can’t choose a whole distribution over career paths, only a single career path. Let’s say our best guess at the best areas for people to work in is that there’s a 30% chance it’s AI, a 20% chance it’s biosecurity, a 20% chance it’s animal welfare, a 20% chance it’s global development, and a 10% chance it’s something else. Then that would also be the ideal distribution of careers (ignoring personal fit concerns for the moment). But even if every single person had this estimate, all of them would be optimizing by choosing to work in AI, which is not the optimal distribution. Every person optimizing their social impact actually leads to a suboptimal outcome!
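To make the herding dynamic concrete, here is a minimal sketch (the probability numbers are the ones from the example above; everything else is illustrative). Each person shares the same estimates and individually picks the single highest-probability area, so the realized allocation collapses onto AI instead of mirroring the estimates:

```python
import collections

# Shared best-guess probabilities that each area is the best one to work in
# (the illustrative numbers from the example above).
estimates = {
    "AI": 0.30,
    "biosecurity": 0.20,
    "animal welfare": 0.20,
    "global development": 0.20,
    "other": 0.10,
}

n_people = 1000

# Each person individually optimizes: pick the single most-likely-best area.
choices = [max(estimates, key=estimates.get) for _ in range(n_people)]
actual = collections.Counter(choices)

# The ideal allocation would instead mirror the probability estimates.
ideal = {area: p * n_people for area, p in estimates.items()}

print(dict(actual))  # everyone piles into AI
print(ideal)         # the uncertainty-adjusted optimum: 300/200/200/200/100
```

Individually rational argmax choices produce a degenerate distribution; the optimal population-level allocation is the mixed one.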
The main countervailing force is personal fit. People do not just optimize for the expected impact of a career path; they select into the career paths where they think they personally would be most impactful. To the extent that people's aptitudes are spread evenly across areas, this evens out the distribution of career paths that people choose and brings it closer to the uncertainty-adjusted optimal distribution.
But this is not a guaranteed outcome. It depends on what kind of people EA attracts. If EA attracts primarily people with CS/software aptitudes, then we would see disproportionate selection into AI relative to other areas. So I think another source of irrationality in EA prioritization is the disproportionate attraction of people with some aptitudes rather than others.
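Both effects can be seen in one toy simulation (entirely hypothetical: the fit multipliers, their range, and the skew parameter are made-up modeling choices, not empirical claims). Each person draws a random personal-fit multiplier per area and picks the area maximizing estimated impact times fit; an `ai_skew` parameter above 1 models a community that disproportionately attracts people with AI-relevant aptitudes:

```python
import random

areas = ["AI", "biosecurity", "animal welfare", "global development", "other"]
# Shared best-guess probabilities that each area is the best one to work in.
p_best = {"AI": 0.30, "biosecurity": 0.20, "animal welfare": 0.20,
          "global development": 0.20, "other": 0.10}

def simulate(n_people, ai_skew=1.0):
    """Each person draws a random 'fit' multiplier per area and chooses the
    area maximizing estimated impact x personal fit. ai_skew > 1 models a
    community that disproportionately attracts AI-relevant aptitudes."""
    counts = {a: 0 for a in areas}
    for _ in range(n_people):
        fit = {a: random.uniform(0.5, 1.5) for a in areas}
        fit["AI"] *= ai_skew
        choice = max(areas, key=lambda a: p_best[a] * fit[a])
        counts[choice] += 1
    return counts

random.seed(0)
print(simulate(10_000))              # fit spreads people across areas somewhat
print(simulate(10_000, ai_skew=1.5)) # skewed aptitudes re-concentrate people in AI
```

With evenly distributed fit, the allocation moves toward the mixed optimum; with aptitudes skewed toward AI, the herding reappears even though everyone is still optimizing.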