I strongly agree with this particular statement from the post, but have refrained from stating it publicly before out of concern that it would reduce my access to EA funding and spaces.
EAs should consciously separate:
An individual’s suitability for a particular project, job, or role
Their expertise and skill in the relevant area(s)
The degree to which they are perceived to be “highly intelligent”
Their perceived level of value-alignment with EA orthodoxy
Their seniority within the EA community
Their personal wealth and/or power
I’ve been surprised by how many researchers, grant-makers, and community organizers around me seem to treat these things as interchangeable. For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to a group “I rank [Researcher X] as an A-Tier researcher. I don’t actually know what they work on, but they just seem really smart.” I found this very epistemically concerning, but other people didn’t seem to.
I’d like to understand this reasoning better. Is there anyone who disagrees with the statement (i.e., disagrees that these factors should be consciously separated) who could help me understand their position?
I agree that it’s important to separate out all of these factors, but I think it’s totally reasonable for your assessment of some of these factors to update your assessment of others.
For example:
People who are “highly intelligent” are generally more suitable for projects/jobs/roles.
People who agree with the foundational claims underlying a theory of change are more suitable for projects/jobs/roles that are based on that theory of change.
For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to a group “I rank [Researcher X] as an A-Tier researcher. I don’t actually know what they work on, but they just seem really smart.” I found this very epistemically concerning, but other people didn’t seem to.
I agree that this feels somewhat concerning; I’m not sure it’s an example of people failing to consciously separate these things though. Here’s how I feel about this kind of thing:
It’s totally reasonable to be more optimistic about someone’s research because they seem smart (even if you don’t know anything about the research).
In my experience, smart people have a pretty high rate of failing to do useful research (by researching in an IMO useless direction, or being unproductive), so I’d never be that confident in someone’s research direction just based on them seeming really smart, even if they were famously smart. (E.g. Scott Aaronson is famously brilliant and when I talk to him it’s obvious to me that he knows way more theoretical computer science than I do, but I definitely wouldn’t feel optimistic about his alignment research directions without knowing more about the situation.)
I think there is some risk of falling into echo chambers where lots of people say really positive things about someone’s research without knowing anything about it. To prevent this, I think that when people are optimistic about someone’s research because the person seems smart rather than because they’ve specifically evaluated the research, they should clearly say “I’m provisionally optimistic here because the person seems smart, but fwiw I have not actually looked at the research”.
Thanks for the nuanced response. FWIW, this seems reasonable to me as well:
I agree that it’s important to separate out all of these factors, but I think it’s totally reasonable for your assessment of some of these factors to update your assessment of others.
Separately, I think that people are sometimes overconfident in their assessment of some of these factors (e.g. intelligence), because they over-update on signals that seem particularly legible to them (e.g. math accolades), and that this can cause cascading issues with this line of reasoning. But that’s a distinct concern from the one I quoted from the post.
In my experience, smart people have a pretty high rate of failing to do useful research (by researching in an IMO useless direction, or being unproductive), so I’d never be that confident in someone’s research direction just based on them seeming really smart, even if they were famously smart.
I’ve personally observed this as well; I’m glad to hear that other people have also come to this conclusion.
I think the key distinction here is between necessity and sufficiency. Intelligence is (at least above a certain threshold) necessary to do good technical research, but it isn’t sufficient. Impressive quantitative achievements, like competing in the International Mathematical Olympiad (IMO), are sufficient to demonstrate intelligence (again, above a certain threshold), but not necessary (most smart people never compete in the IMO and, outside of a few prestigious academic institutions, haven’t even heard of it). But mixing this up can lead to poor conclusions, like one I heard the other night: “Doing better technical research is easy; we just have to recruit the IMO winners!”
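To spell out why that last inference fails, here is a rough sketch in logical notation (the symbols are shorthand I’m introducing just for this comment: I = “highly intelligent, above the relevant threshold”, M = “has an IMO-level achievement”, R = “does good technical research”):

\[
\underbrace{(R \implies I)}_{\text{intelligence is necessary for research}}
\;\land\;
\underbrace{(M \implies I)}_{\text{an IMO medal suffices to show intelligence}}
\;\;\not\vdash\;\;
(M \implies R)
\]

Both premises only point toward I, so nothing about R follows; treating an IMO medal as a guarantee of research ability implicitly assumes the converse of the first premise (I ⟹ R), which is exactly the sufficiency claim rejected above.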
To strengthen your point, speaking as an IMO medalist: IMO participation certainly signals some kind of intelligence, and maybe even the ability to do research in math (although a professor from my math degree, also an IMO medalist, disagreed), but I’m not convinced much of it transfers to any other kind of research.
Yeah, IMO medals definitely don’t suffice for me to think it’s extremely likely that someone will be good at doing research.