As one of the people you mentioned (I’m flattered!), I’ve also been curious about this.
As for my own anecdata, I basically haven’t applied yet. Technically I did apply and get declined last round, but a) it was a fairly low-effort application since I didn’t really need the money at the time, b) I said as much on the application, c) I didn’t have any public posts until 2 months ago, so I wasn’t in your demographic, and d) I didn’t have any references because I don’t really know many people in the research community.
I’m about to submit a serious application for this round, and of those only (d) is still true. At least, I haven’t interacted with any high-status researchers extensively enough for it to make sense to ask anyone for references. And I think maybe there’s a correlation there that explains part of your question: I post/comment online when I’m up to it because it’s one of the best ways for me to get good feedback (this being a great example), even though I’m a slow writer and it’s a laborious process for me to get from “this seems like a coherent, nontrivial idea probably worth writing up” to feeling like I’ve covered all the inferential gaps, noted all the caveats, taken relevant prior writings into account, and thought through possible objections enough to feel ready to hit the submit button. But anyway, I would guess that people who post online skew slightly toward being isolated (otherwise they’d get feedback or spread their ideas by just talking to e.g. coworkers), and hence toward not having references. But I don’t think this is a large effect (and I defer to Habryka’s comment). Of the people you mentioned, I believe Evan is currently working with Christiano at OpenAI and has been “clued-in” for a while, and I have no idea about the first 3.
Also, I often wonder how much Alignment research is going on that I’m just not clued into from “merely” reading the Alignment Forum, the Alignment Newsletter, papers by OpenAI/DeepMind/CHAI, etc. I know that MIRI is nondisclosed-by-default now, and I get that. But they laid out their reasons for that in detail, and that’s on top of the trust they’ve earned from me as an institution through their past research. When I hear about people who are doing their own research but not posting anything, I get pretty skeptical unless they’ve produced good Alignment research in the past (producing other technical research counts for something, but my own intuition is that the pre-paradigmatic nature of Alignment research is different enough that the tails come apart), and my system 1 says (especially if they’re getting funded):
Oh come on! I would love to sit around and do my own private “research” uninterrupted without the hard work of writing things up, but that’s what you have to do if you want to be part of a research community collectively working toward solving a problem. If everyone just lounged around in their own thoughts and notes without distilling that information for others to build on, there just wouldn’t be any intellectual progress. That’s the whole point of academic publication, and forum posting is actually a step down from that norm, and even that’s only possible because the community of < 100 is small, young, and non-specialized enough that medium-effort ways of distilling ideas still work (fewer inferential gaps to cross, etc.).
(My system 2 would obviously use a different tone than that, but it largely agrees with the substance.)
Also, to echo points made by Jan, LW is not the best place to get a broad impression of current research; the Alignment Forum is strictly better. But even the latter is somewhat skewed towards MIRI-esque things over CHAI, OpenAI, and DeepMind’s stuff; here’s another decent comment thread discussing that.