Thanks for the link to your FAQ, I’m excited to read it further now!
Re: the rest of your comment, I think you’re reading more into my comment than I said or meant. I do not think researchers should generally be deferential; I think they should have strong beliefs that may in fact go against expert consensus. I just don’t think this is the right attitude while you are junior.
To be clear, I think Geoffrey Hinton’s advice was targeted at very junior people. In context, the interview was conducted for Andrew Ng’s online deep learning course, which for many people would be their first exposure to deep learning. I also got the impression that he would stand by this advice for early PhDs (though I could definitely have misunderstood him), and by “future Geoffrey Hintons and Eliezer Yudkowskys” I was thinking about pretty junior people rather than established researchers.
I’m considering three types of advice:
1. “Always defer to experts”
2. “Defer to experts for ~3 years, then trust your intuitions”
3. “Always trust your intuitions”
When you said
But to steelman (steel-alien?) his view a little, I worry that EA is overinvested in outside-view/forecasting types (like myself?), rather than in people with strong and true convictions / extremely high-quality initial research taste, who (quality-weighted) may account for the majority of revolutionary progress.
And if we tell the future Geoffrey Hintons (and Eliezer Yudkowskys) of the world to be more deferential and trust their intuitions less relative to elite consensus or the literature, we’re doing the world/our movement a disservice, even if the advice is likely to be individually useful/good for most researchers in terms of expected correctness of beliefs or career advancement.
I thought you were claiming “maybe 3 > 1”, so my response was “don’t do 1 or 3, do 2”.
If you’re instead claiming “maybe 3 > 2”, I don’t really get the argument. It doesn’t seem like advice #2 is obviously worse than advice #3 even for junior Eliezers and Geoffreys. (It’s hard to say for those two in particular: in Eliezer’s case there were no experts to defer to at the time, and I don’t know enough about Geoffrey to evaluate which advice would have been good for him.)
I think Geoffrey Hinton’s advice was targeted at very junior people.
Oh, I agree that’s probably true. I think he’s wrong to give that advice. I’m generally pretty okay with ignoring expert advice to amateurs if you have reason to believe it’s bad; experts usually don’t remember what it was like to be an amateur and so it’s not that surprising that their advice on what amateurs should do is not great. (EDIT: Here’s a new post that goes into more detail on this.)
I would guess the ‘typical young researcher fallacy’ also applies to Hinton: my impression is that he is basically advising his past self, similarly to Toby. As a consequence, the advice is likely sensible for people much like past Hinton, but not good general advice for everyone.
In ~3 years most people are able to re-train their intuitions a lot (which is part of the point!). This seems particularly dangerous in cases where expertise in the thing you are actually interested in does not exist, but expertise in something somewhat close does: instead of following your curiosity, you ‘substitute the question’ with a different question, for which a PhD program exists, or senior researchers exist, or established directions exist. If your initial taste/questions were better than the experts’, you run the risk of overwriting your taste with something less interesting/impactful.
Anecdotal illustrative story:
Arguably, a large part of what are now the foundations of quantum information theory / quantum computing could have been discovered much sooner, together with taking interpretations of quantum mechanics more sensible than the Copenhagen interpretation seriously. My guess at what was happening over multiple decades (!): many early-career researchers were curious about what’s going on, dissatisfied with the answers, and interested in thinking about the topic more… but they were given advice along the lines of ‘this is not a good topic for PhDs or even undergrads; don’t trust your intuition; problems here are a distasteful mix of physics and philosophy; shut up and calculate, that’s how real progress happens’… and they followed it. They acquired a taste telling them that solving difficult scattering-amplitude integrals using advanced calculus techniques is tasty, and that thinking about deep things formulated with the tools of high-school algebra is for fools. (Also, if you had run a survey in year 4 of their PhDs, a large fraction of quantum physicists would probably have endorsed the learned update away from their young, foolish questions about QM interpretations and toward the serious and publishable thinking they had learned.)
I agree substituting the question would be bad, and sometimes there aren’t any relevant experts, in which case you shouldn’t defer to people. (Though even then I’d consider doing research in an unrelated area for a couple of years, and then coming back to work on the question of interest.)
I admit I don’t really understand how people manage to have a “driving question” overwritten—I can’t really imagine that happening to me and I am confused about how it happens to other people.
(I think sometimes it is justified, e.g. you realize that your question was confused and the other work you’ve done has deconfused it, but often it seems like people just pick up the surrounding culture and forget about the question they cared about in the first place.)
So I guess this seems like a possible risk. I’d still bet pretty strongly against any particular junior researcher’s intuition being better, so I still think this advice is good on net.
(I’m mostly not engaging with the quantum example because it sounds like a very just-so story to me and I don’t know enough about the area to evaluate the just-so story.)