Thanks for the nuanced response. FWIW, this seems reasonable to me as well:
I agree that it’s important to separate out all of these factors, but I think it’s totally reasonable for your assessment of some of these factors to update your assessment of others.
Separately, I think that people are sometimes overconfident in their assessment of some of these factors (e.g. intelligence), because they over-update on signals that seem particularly legible to them (e.g. math accolades), and that this can cause cascading issues with this line of reasoning. But that’s a distinct concern from the one I quoted from the post.
In my experience, smart people have a pretty high rate of failing to do useful research (by researching in an IMO useless direction, or being unproductive), so I’d never be that confident in someone’s research direction just based on them seeming really smart, even if they were famously smart.
I’ve personally observed this as well; I’m glad to hear that other people have also come to this conclusion.
I think the key distinction here is between necessity and sufficiency. Intelligence is (at least above a certain threshold) necessary for doing good technical research, but it isn’t sufficient. Impressive quantitative achievements, like competing in the International Mathematical Olympiad (IMO), are sufficient to demonstrate intelligence (again, above a certain threshold), but not necessary (most smart people never compete in the IMO and, outside of certain prestigious academic institutions, haven’t even heard of it). Mixing this up can lead to poor conclusions, like one I heard the other night: “Doing better technical research is easy; we just have to recruit the IMO winners!”
To strengthen your point, speaking as an IMO medalist: IMO participation certainly signals some kind of intelligence, and maybe even the ability to do research in math (though a professor during my math degree, himself an IMO medalist, disagreed), but I’m not convinced much of it transfers to any other kind of research.
Yeah, IMO medals definitely don’t suffice for me to think it’s extremely likely that someone will be good at doing research.