Ideally, someone senior tells you what to work on. But this is time-expensive for them, and they don’t want to give away their best ideas to somebody who might execute them badly. So more realistically…
This seems very surprising to me. Unless by “best ideas” you mean “literally somebody’s top idea” or by “someone senior” you mean Nick Bostrom?
My impression from talking to friends working in ML is that usually faculty have ideas that they’d be excited for their senior grad students to work on, senior grad students have research ideas that they’d love for junior grad students to implement, and so forth.
Math and theoretical CS likewise have lists of open problems.
Similarly, in (non-academic EA) research I have way too many ideas that I can’t work on myself, and I’ve frequently seen buffets of potential research topics/ideas that more senior researchers propose.
My general impression is that this is the norm in EA research? When people choose not to work on other people’s ideas, it’s usually due to a combination of personal fit and arrogance in believing your own ideas are more important (or depending on the relevant incentives, other desiderata like “publishable”, “appealing to funders”, or “tractable”), not because of a lack of ideas!
My impression from talking to friends working in ML is that usually faculty have ideas that they’d be excited for their senior grad students to work on, senior grad students have research ideas that they’d love for junior grad students to implement, and so forth.
I think this is true if the senior person can supervise the junior person doing the implementation (which is time-expensive). I have lots of project ideas that I expect I could supervise. I have ~no project ideas where I expect I could spend an hour talking to someone, have them go off for a few months and implement it, and then I’d be interested in their results. Something will come up along the way that requires replanning, and if I’m not around to tell them how to replan, they’re going to do it in a way that makes me much less excited about the results.
Thank you for the post, I found it interesting! [Minor point in response to Linch’s comment.]
I generally agree with Linch’s surprise, but:
When people choose not to work on other people’s ideas, it’s usually due to a combination of personal fit and arrogance in believing your own ideas are more important (or depending on the relevant incentives, other desiderata like “publishable”, “appealing to funders”, or “tractable”), not because of a lack of ideas!
I (weakly) think that another factor here is that people are trained (e.g. in their undergraduate years) to come up with original ideas and work on those, whether or not they are actually useful. This gets people into the habit of over-valuing a form of topic originality. (I.e. it’s not just personal fit, arrogance, and external incentives, although those all seem like important factors.)
This is definitely the case in many of the humanities, but probably less true for those who participate in things like scientific research projects, where there are clearly useful lab roles for undergraduates to fill. In my personal experience, all my math work was assigned to me (inside and outside of class), while on the humanities side, I basically never wrote a serious essay whose topic I did not create. (This sometimes led to less-than-sensible papers, especially in areas where I felt that I lacked background and so had to find somewhat bizarre topics that I was confident were “original.”)
My guess is that changing this would be valuable, but might be very hard. Projects like Effective Thesis come to mind.
Speaking from my experience in AI governance: There are some opportunities to work on projects that more experienced people have suggested. At GovAI we have recently made a list of ideas people should work on. People on the GovAI fellowship program have been given suggestions.
Overall, yes, I do think there are fewer such opportunities than it sounds like there are in technical areas. That makes sense to me, because for AI governance research projects, the vast majority of junior people don’t yet have the skills necessary to execute the project to a high standard.
Another potential difference is that you don’t get do-overs: the more senior person can’t later write a paper that follows exactly the same idea but that’s written to a much higher standard, because there’s more of a requirement that each paper brings original ideas. (Perhaps in technical subjects you can say e.g. “previous authors have tried to get this method to work but the results weren’t great, and we show that it actually works really well”.)
Therefore, I don’t think the problem is that we have bad norms. The deeper issue is that we need to find ways of accelerating the very slow process of junior researchers learning how to execute research projects to a high standard.
Another potential difference is that you don’t get do-overs: the more senior person can’t later write a paper that follows exactly the same idea but that’s written to a much higher standard, because there’s more of a requirement that each paper brings original ideas.
Hmm, taking a step back, I wonder if the crux here is that you believe(?) that the natural output for research is paper-shaped^, whereas I would guess that this is the exception rather than the norm, especially for a field that does not have many very strong non-EA institutions/people (which I naively would guess to be true of EA-style TAI governance).
This might be a naive question, but why is it relevant/important to get papers published if you’re trying to do impactful research? From the outside, it seems unlikely that all or most good research is in paper form, especially in a field like (EA) AI governance where (if I understand it correctly) the most important path to impact (other than career/skills development) is likely through improving decision quality for <10(?) actors.
If you are instead trying to play the academia/prestige game, wouldn’t it make more sense to optimize for that over direct impact? So instead of focusing on high-quality research on important topics, write the highest-quality (by academic standards) paper you can on a hot/publishable/citable topic and direction.
^ This is a relevant distinction because originality is much more important in journal articles than in other publication formats: you absolutely can write a blog post that covers the same general idea as somebody else but better, and AFAIK there’s nothing stopping a think tank from “revising” a white paper to cover the same general point with much better arguments.
One reason to publish papers (specifically) about AI governance (specifically) is if you want to build an academic field working on AI governance. This is good both for getting more brainpower on the problem and for getting more people (who otherwise wouldn’t read EA research) to take the research seriously in the long term. Cf. the last section here: https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact