Thanks for writing this post, it’s always useful to hear people’s experiences! For others considering a PhD, I just wanted to chime in and say that my experience in a PhD program has been quite different (4th year PhD in ML at UC Berkeley). I don’t know how much this is the field, program or just my personality. But I’d encourage everyone to seek a range of perspectives: PhDs are far from uniform.
I hear the point about academic incentives being bad a lot, but I don’t really resonate with it. A summary of my view is that incentives are misaligned everywhere, not just academia. Rather than seeking a place with good (in general) incentives, first figure out what you want to do, and then find a place where the incentives happen to be compatible with that (even if for the “wrong” reasons).
I’ve worked in quant finance, industry AI labs, and academic AI research. There were serious problems with incentives in all three. I found this particularly unforgivable in quantitative finance, where the goal is pretty clear: make money. You can even measure day to day whether you’re making money! But getting the details right is hard. At one place I’m aware of, people were paid based on their group’s profitability, divided by how risky their strategies were. This seems reasonable: profit good, risk bad. The problem was, it measured the risk of each strategy in isolation—not how it affected the whole firm’s risk levels. So different groups colluded to swap strategies, which made each of them seem less risky in isolation (so they could get paid more), without changing the firm’s overall strategy at all!
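To make the failure mode concrete, here’s a toy sketch (hypothetical numbers; two perfectly anticorrelated strategies chosen so the effect is stark) of how swapping half of each group’s book collapses each group’s measured risk while leaving the firm’s total exposure unchanged:

```python
import random
import statistics

random.seed(0)

# Hypothetical daily P&L for two trading groups (toy data, not real numbers).
# The two strategies are perfectly anticorrelated: when A wins, B loses.
shocks = [random.gauss(0.0, 1.0) for _ in range(10_000)]
strategy_a = [1.0 + s for s in shocks]   # group A's book
strategy_b = [1.0 - s for s in shocks]   # group B's book

def measured_risk(pnl):
    """Risk as the comp scheme measures it: volatility of the group's own P&L."""
    return statistics.pstdev(pnl)

# Before the swap, each group's isolated risk is high.
risk_before = (measured_risk(strategy_a), measured_risk(strategy_b))

# The groups collude: each swaps half its book for half of the other's.
swapped_a = [0.5 * a + 0.5 * b for a, b in zip(strategy_a, strategy_b)]
swapped_b = [0.5 * b + 0.5 * a for a, b in zip(strategy_a, strategy_b)]
risk_after = (measured_risk(swapped_a), measured_risk(swapped_b))

# Each group's measured risk collapses, so their pay goes up...
print(risk_before, "->", risk_after)

# ...but the firm's total book is exactly what it was before the swap.
firm_before = [a + b for a, b in zip(strategy_a, strategy_b)]
firm_after = [a + b for a, b in zip(swapped_a, swapped_b)]
print(all(abs(x - y) < 1e-12 for x, y in zip(firm_before, firm_after)))
```

The anticorrelation makes the collapse total, but any imperfect correlation between the books gives the same qualitative result: each group’s isolated risk metric falls after the swap, while the firm-level position is identical.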
Incentivizing research is an unusually hard problem. Agendas can take years to pay off. The best agendas are often really high variance, so someone might fail several times but still be doing great (in expectation) work. Given this backdrop, a PhD actually seems pretty reasonable.
It’s pretty hard to get fired doing a PhD, and some (by no means all) advisors will let you work on pretty much whatever you want. So, you have a 3-5 year runway to just work on whatever topics you think are best. At the end of those 3-5 years, you have to convince a panel of experts (who you get to hand-pick!) that you did something that’s “worth” a PhD.
As incentive structures go, this is incredibly flexible, as evidenced by the large number of people who goof off during their PhD. (This is the pitfall of weak incentives.) It also seems like a pretty reasonable incentive. If after 5 years of work you can’t convince people what you did was good, it might be that it’s incredibly ahead of its time, but more likely you either need to communicate it better or the work just wasn’t that great by the standards of the field.
The “by the standards of the field” is the key issue here. Some high impact work just doesn’t fit well into the taste of a particular field. Perhaps it falls between disciplinary boundaries. Or it’s more about distilling existing research, so isn’t novel enough. That sucks, and academic research is probably the wrong venue to be pursuing this in—but it doesn’t make academic incentives bad per se. Just bad for that kind of research.
I think the bigger issue is the tacit social pressure to publish and make a name for yourself. This matters a fair bit for the job market, so it’s a real pressure. But I think analogous or equal pressures exist outside of academia. If you work at an industry lab, there might be pressure to deliver flashy results or products. If you work as an independent researcher, funders will want to see publications or other signs of progress.
I’d love to see better incentives, but I think it’s important to acknowledge that mechanism design for research is a hard problem, not just that academia is screwing it up uniquely badly.
This is an excellent comment, thanks Adam.
A couple impressions:
Totally agree there are bad incentives lots of places
I think figuring out what existing institutions have incentives that best serve your goals, and building a strategy around those incentives, is a key operation. My intent with this article was to illustrate some of that type of thinking within planning for gradschool. If I was writing a comparison between working in academia and other possible ways to do research I would definitely have flagged the many ways academic incentives are better than the alternatives! I appreciate you doing that, because it’s clearly true and important.
In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier? Like, knowing you work at a for-profit company makes it really transparently clear that your manager’s (or manager’s manager’s) incentives are different from yours, if you want to do directly impactful research. Whereas I’ve observed folks, in my academic niche of biological engineering, behave as if they believe a research project to be directly good when I (and others) can’t see the impact proposition, and the behavior feels best explained by publishing incentives. In more extreme cases, people will say that project A is less important to prioritize than project B because B is more impactful, but will invest way more in A (which just happens to be very publishable). I’m sure I’m also very guilty of this, but it’s easier to recognize in other people :P

I’m primarily reporting on biology/bioengineering/bioinformatics academia here, though I consume a lot of deep learning academia’s output. FWIW, my sense is there is actually a difference in the strength and type of incentives between ML and biology, at least. From talking with friends in DL academic labs, it seems like there is still pressure to publish in conferences, but there are also lots of other ways to get prestige currency, like putting out a well-read arXiv paper or being a primary contributor to an open-source library like PyTorch. In biology, from what I’ve seen, it just really really really matters that you publish in a high-impact-factor journal, ideally with “Science” or “Nature” on the cover.
It also matters a whole lot who your advisor is, as you mention. Having an advisor who is super bought in to the impact proposition of your research is a totally different game. I have the sense that most people are not this lucky by default, and so would want to optimize for the type of buy-in or, alternatively, laissez-faire management which I pattern match to the type of research freedom you’re describing.
All of this said, I think my biggest reaction is something like “there are ways of finding really good incentives for doing research”! Instead of working in existing institutions—academia, for-profit research labs, for-profit companies—come up with a good idea for what to research and how, and just do it. More precisely: ask an altruistic funder for money, find other people to work with, and make an organization if it seems good. There are small and large versions of this. On the small scale, you can apply for EA grants or another org which makes grants to individuals, and if you’re really on to something you ask for org-scale funding. I’m not claiming that this is always a better idea: you will be missing lots of resources you might otherwise have in e.g. academia.
But compared to working with a funder who, like you, wants to solve the problem and make the world good, any of the other institutions mentioned, including academia, look extremely misaligned. And IMO it’s worth making it clear that relative to this, almost any lab’s/institute’s academic incentives suck. Once this DIY option is on the table, I think it is possible to make better choices about whether you like the compromise of working at another institution, or whether you will use this option to get specific resources that will make the “forge your own way” option more tractable. E.g.: don’t have any good ideas for a research agenda? Great, focus on figuring this out in your PhD. Don’t know any good people you might recruit for your project? Great, focus on building a good network in your PhD. Etc etc
I’m curious whether you still feel like incentives are misaligned in this world, whether it feels too impractical to be included in your list, or whether you disagree with me elsewhere?
Thanks again :)
Sorry for the (very) delayed reply here. I’ll start with the most important point first.
I think overall the incentives set up by EA funders are somewhat better than run-of-the-mill academic incentives, but I think the difference is smaller than you seem to believe, and I think we’re a long way from cracking it. I think this is something we can get better at, but it’s something that I expect will take significant infrastructure and iteration: e.g. new methods for peer review, experimenting with different granter-grantee relationships, etc.
Concretely, I think EA funders are really good (way better than most of academia or mainstream funders) at picking important problems like AI safety or biosecurity. I also think they’re better at reasoning about possible theories of change (if this project succeeds, would it actually help?) and considering a variety of paths to impact (e.g. maybe a blog post can have more impact than a paper in this case, or maybe we’d even prefer to distribute some results privately).
However, I think most EA funders are actually worse at evaluating whether the research agenda is being executed well than the traditional academic structure. I help the LTFF evaluate grants, many of which are for independent research, and while I try to understand people’s research agenda and how successful they’ve been, I think it’s fair to say I spend at least an order of magnitude less time on this per applicant than someone’s academic advisor.
Even worse, I have basically zero visibility into the process—I only see the final write-up, and maybe have an interview with the person. If I see a negative result, it’s really hard for me to tell if the person executed on the agenda well but the idea just didn’t pan out, or if they bungled the process. Whereas I find it quite easy to form an opinion on projects I advise, as I can see the project evolve over time, and how the person responds to setbacks. Of course, we can (and do) ask for references, but if they’re executing independently they may not have any, and there’s always some conflict of interest in advisors providing a reference.
Of course, when it comes to evaluating larger research orgs, funders can do a deeper dive and the stochasticity of research matters less (as it’s averaged over a longer period of time). But this is just punting the problem to those who are running the org. In general I still think evaluating research output is a really hard problem.
I do think one huge benefit EA has is that people are mostly trying to “play fair”, whereas in academia there is sadly more adversarial behavior (on the light side, people structuring their papers to dodge reviewer criticism; on the dark side, actual collusion in peer review or academic fraud). However, this isn’t scalable, and I wouldn’t want to build systems that rely on it.
This is a fair point. I do think people kid themselves a bit about how much “academic freedom” they really have, and this can lead to people in effect internalizing the incentives more.
Believing something is “directly good” when others disagree seems like a classic case of wishful thinking. There are lots of reasons why someone might be motivated to work on a project (despite it not, in fact, being “directly good”). Publication incentives are certainly a big one, and might well be the best explanation for the cases you saw. But in general I think it could also be that they just find that topic intellectually interesting, have been working on it for a while and are suffering from sunk cost fallacy, etc.
I like this writeup a lot, but I would say anyone who’s actually reading this should ignore the advice to not go into academia.
If you’re reading this, you’re probably selected (!) to be someone who is atypical and has a decent shot at succeeding in academia. (See also: SSC on ‘reversing all advice you hear’.) i.e.: if you’re someone who’s taking the time out of your day to read this, you’re probably (probably!) similar to “Anita” here.
Ugh. Shrug. That isn’t supposed to be the point of this post. All my comments on this are to alert the reader that I happen to believe this and haven’t tried to stop it from seeping into my writing. It felt disingenuous not to.
But since you raised it, I feel like making it clear, if it isn’t already, that I do not recommend reversing this advice. At least if you are considering cause areas/academic domains that I might know about (see my preamble). I have no idea how applicable this is outside of longtermist technical-leaning work.
If you think you might be an exception to this, feel free to DM me. Exceptions do exist, I just highly doubt you (the reader) are one. THIS DOES NOT MEAN I AM NOT EXCITED ABOUT YOUR IMPACT!! I think there are much better opportunities than becoming a professor out there :)
As I said a lot of smart people disagree with me on this, but here is some of my thinking:
Most people overestimate their chances for the obvious reasons
I’ve advised at least 10 smart, excellent EAs interested in pursuing PhDs, and none of them are in “Anita’s” reference class. A first-author Nature paper in undergrad is extremely rare. The only exceptions here are people who are already in early-track faculty positions at good schools, and even then I worry about the counterfactual value. (These are not the people reading this, I imagine.)
Having a “good story” for becoming faculty is in huge part luck. I’ve been interacting with grad students and postdocs from top labs at Harvard and MIT since maybe 2015, and for every faculty position people get, there are maybe 5 people who are equally or more talented, whose research was equally or more compelling in principle; the difference is whether certain parts of their high-risk research panned out in a certain compelling way, and whether they were good at “selling it”.
You approximately can’t get directly useful things done until you have tenure. I think this should be obvious, but some people seem to believe a fairy tale where they are both winning the rat race and doing lots of direct good.
Given the above, academia is a 10-15 year crapshoot. (PhD, postdoc or multiple, 5-ish years as a junior faculty)
It’s not clear to me what you get even after all of this. I think it’s hard to argue that academia is clearly better than working in a private research org if you want to do direct technology development. This leaves some kind of pulpit/spokesperson effect. Is this really worth it? Most people who could actually get a tenured faculty position could also write 3 excellent books in the time it takes to do a PhD and postdoc. Are we sure this alternative, as one example among many possible, isn’t a faster way of establishing spokesperson credibility?
Unless you have worked in top labs with EA-minded people, I don’t think it is possible to really understand how bad academic incentives are. You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse. People who are much better than you will also do this. This just gets worse with time, and needs to be accounted for as a reduction in expected impact when considering an opportunity that only pays off 12 years after steeping in the corrupting juices.
Obviously, academia looks a whole lot worse if you believe lots of things need to happen right now, as opposed to 15 years from now. For my part, I would happily trade work hours 15 years from now for more time now, at a roughly 2:1 premium.
Another risk you are taking, related to the above, is that the field of research you picked may have no relevance 15 years from now. Obviously you can change as you go, but switching your “story” around carries a big penalty in the academic job market, from what I’ve heard.
If we think we need more professors as a movement, it could be the case that it’s way more efficient to just reach out to people who already have faculty positions (or are just one step away, in a highly enriched pool). For example, I know of instances where students have influenced their PIs on research directions and goals, in a direction more aligned with longtermist objectives. It might be that targeted outreach and coalition building among academics is just way higher bang for buck. It’s also not clear that we need the most aligned people in faculty positions, rather than people who are allies. Have we ruled this out? Seems like any person considering mortgaging 15 years of their impact might want to spend 1 year testing this hypothesis first.
Putting these random points together, it just feels like a really uphill battle to make academia look good from an impact perspective. I think you need to believe some combination of 1) problems are not urgent 2) academic incentives are actually good (?)/ there is some other side benefit of working toward a faculty position that is really worth having 3) there aren’t many other opportunities for people who could be faculty in a technical domain or 4) we are specifically constrained on something professors have, maybe credible spokespeople, AND there are no more efficient ways to get those resources.
OR you might believe that academia is exciting from a personal fit perspective. I think a lot of people are very motivated by the types of status incentives in academia, which is good I guess if you have trouble finding motivation elsewhere. I’d just want to separate this from the impact story.
My spicy take is that advice to go into academia has arisen through some combination of A) EA being a movement grown out of academia in many ways, B) a lack of better career ideas, C) too much distance from the urgency and concreteness of problems on the ground and D) the same mind destroying publishing and status incentives I have mentioned a number of times here, which lead to a certain kind of self-justification.
So where all this cashes out for me is finding it plausible that it is worth preserving some optionality for academia, but being very strategic (as I tried to demonstrate in this post). This includes knowing what you actually are optimizing for, and being willing to leave academic optionality if push comes to shove and there is something better. This is why I wrote the Anita case study this way.
I’m very happy to be shown where I’m wrong.
I’m not convinced that academia is generally a bad place to do useful technical work. In the simplest case, you have the choice between working in academia, industry or a non-profit research org. All three have specific incentives and constraints (academia—fit to mainstream academic research taste; industry—commercial viability; non-profit research—funder fit, funding stability and hiring). Among these, academia seems uniquely well-suited to work on big problems with a long (10-20 year) time horizon, while having access to extensive expertise and collaborators (from colleagues in related fields), EA and non-EA funding, and EA and non-EA hires.
For my field of interest (longtermist biorisk), it appears that many of the key past innovations that help e.g. with COVID now come from academic research (e.g. next-generation sequencing, nanopore sequencing, PCR and rapid tests, mRNA vaccines and other platform vaccine tech). My personal tentative guess is that our split should be something like 4 : 4 : 1 between academia, industry and non-profit research (academia to drive long-term fundamental advances, industry/entrepreneurship to translate past basic science advances into defensive products, and non-profit research to do work that can’t be done elsewhere).
Crux 1 is indeed the time horizon—if you think the problem you want to work on will be solved in 20 years/it will be too late, then dropping ‘long-term fundamental advances’ in the portfolio would seem reasonable.
Crux 2 is how much academia constrains the type of work you can do (the ‘bad academic incentives’). I resonate with Adam’s comment here. I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g. again sequencing innovations, vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science).
Thanks Seb. I don’t think I have energy to fully respond here, possibly I’ll make a separate post to give this argument its full due.
One quick point relevant to Crux 2: “I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g. again sequencing innovations, vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science).”
I think there are many-fold differences in impact/dollar between the tech you build if you are trying to actually solve the problem and the type of probably-good-on-net examples you give here.
Other ways of saying parallels of this point:
Things which are publishable in Nature or Science are almost by definition less neglected, because you are competing against everyone who wants a C/N/S publication
The design space of possible interventions is a superset of, and many times larger than, the design space of interventions which can also be published in high-impact journals
We find power-laws in cost effectiveness lots of other places, and AFAIK have no counter-evidence here. Given this, even a small orthogonal component between what is incentivized by academia and what is actually good will lead to a large difference in expected impact.
At least in CS, the vast majority of professors at top universities in tenure-track positions do get tenure. The hardest part is getting in. Of course all the junior professors I know work extremely hard, but I wouldn’t characterize it as a publication rat race. This may not be true in other fields and outside the top universities.
The primary impediment to getting things done that I see is that professors are also working as administrators and teachers, and that remains a problem post-tenure.
This is interesting and also aligns with my experience depending on exactly what you mean!
If you mean that it seems less difficult to get tenure in CS (thinking especially about deep learning) than the vibe I gave (which is again speaking about the field I know, bioeng), I buy this strongly. My suspicion is that this is because, relative to bioengineering, there is a bunch of competition for top research talent from industrial AI labs. It seems like even the profs who stay in academia also have joint appointments in companies, for the most part. There isn’t an analogous thing in bio? Pharma doesn’t seem very exciting and, to my knowledge, doesn’t have a bunch of PI-driven basic research roles open. Maybe bigtech-does-bio labs like Calico will change this in the future? IMO this doesn’t change my core point, because you will still need to change your agenda some, just less than in biology.
If you mean that once you are on the junior faculty track in CS, you don’t really need to worry about well-received publications, this is interesting and doesn’t line up with my models. Can you think of any examples which might help illustrate this? I’d be looking for, e.g., a recently appointed CS faculty member at a good school pursuing a research agenda which gets a quite poor reception (crickets), but who is still given tenure. Possibly there are some examples in AI safety before it was cool? Folks that come to mind mostly had established careers. Another signal would be less of the notorious “tenure switch”, where people suddenly change their research direction. I have not verified this, but there is a story told about a Harvard econ professor who did a bunch of centrist/slightly conservative mathematical econ and switched to left-leaning labor economics after tenure.
To clarify, I don’t think tenure is guaranteed, more that there’s significant margin of error. I can’t find much good data on this, but this post surveys statistics gathered from a variety of different universities, and finds anywhere between 65% of candidates get tenure (Harvard) to 90% (Cal State, UBC). Informally, my impression is that top schools in CS are the higher end of this: I’d have guessed 80%. Given this, the median person in the role could divert some of their research agenda to less well-received topics and still get tenure. But I don’t think they could work on something that no one in the department or elsewhere cared about.
I’ve not noticed much tenure switch in CS but have never actually studied this, would love to see hard data here. I do think there’s a significant difference in research agendas between junior and senior professors, but it’s more a question of what was in vogue when they were in grad school and shaped their research agenda, than tenured vs non-tenured per se. I do think pre-tenure professors tend to put their students under more publication pressure though.
I don’t see how this is a counterargument. Do you mean to say that, once you are on track to tenure, you can already start doing the high-impact research?
It seems to me that, if this research is too diverged from the academic incentives, then our hypothetical subject may become one of these rare cases of CS tenure-track faculty that does not get tenure.
Could you be a bit more specific about this point? This sounds very field-dependent.
I bet it is! The example categories I think I had in mind at time of writing would be 1) people in ML academia who want to be doing safety instead doing work that almost entirely accelerates capabilities and 2) people who want to work on reducing biological risk instead publish on tech which is highly dual use or broadly accelerates biotechnology without deferentially accelerating safety technology.
I know this happens because I’ve done it. My most successful publication to date (https://www.nature.com/articles/s41592-019-0598-1) is pretty much entirely capabilities accelerating. I’m still not sure if it was the right call to do this project, but if it is, it will have been a narrow edge revolving on me using the cred I got from this to do something really good later on.
I think even among such a selected crowd, Anita would stand out like a bright star. The average top-university PhD student doesn’t end up holding a top faculty job. (This may seem elitist, but it is important: becoming a trainer of mediocre PhD students is likely not more effective than non-profit work.) A first-author Nature paper in undergrad (!) is quite rare too.
One important factor of a PhD that I don’t see explicitly called out in this post is what I’d describe as “research taste”: how to pick what problems to work on. I think this is one of, if not the, most important parts of a PhD. You can only get so much faster at executing routine tasks or editing papers. But the difference between the most important and the median research problem can be huge.
Andrej Karpathy has a nice discussion of this:
Clearly we might care about some of these criteria (like grants) less than others, but I think the same idea holds. I’d also recommend Chris Olah’s exercises on developing research taste.
You can get research taste by doing research at all, it doesn’t have to be a PhD. You may argue that PIs have very good research taste that you can learn from. But their taste is geared towards satisfying academic incentives! It might not be good taste for what you care about. As Chris Olah points out, “Your taste is likely very influenced by your research cluster”.
Strong +1 to this. I think I have observed people who have really good academic research taste but really bad EA research taste
Taste is huge! I was trying to roll this under my “Process” category, where taste manifests in choosing the right project, choosing the right approach, choosing how to sequence experiments, etc etc. Alas, not a lossless factorization
These exercises look quite neat, thanks for sharing!
Just to clarify, it sounds like you are:
Encouraging PhD students to be more strategic about how they pursue it
Discouraging longtermist EA PhD-holders from going on to pursue a faculty position in a university, thus implying that they should pursue some other sector (perhaps industry, government, or nonprofits)
I also wanted to encourage you to add more specific observations and personal experiences that motivate this advice. What type of grad program are you in now (PhD or master’s), and how long have you been in it? Were you as strategic in your approach to your current program as you’re recommending to others? What are some specific actions you took that you think others neglect? Why do you think that other sectors outside academia offer a superior incentive structure for longtermist EAs?
I am doing (1). (2) is incidental from the perspective of this post, but is indeed something I believe (see my response to bhalperin). I think my attempt to properly flag my background beliefs may have led to the wrong impression here. Or alternatively, my post doesn’t cover very much on pursuing academia, when the expected post would have been almost entirely focused on this, thereby seeming like it was conveying a strong message?
In general I don’t think about pursuing “sectors” but instead about trying to solve problems. Sometimes this involves trying to get a particular government gig to influence a policy, or needing to write a paper with a particular type of credibility that you might get from an academic affiliation or a research non-profit, or needing to build and deploy a technical system in the world, which maybe requires starting an organization.
I’d encourage folks to work backwards from problems, to possible solutions, to what would need to happen on an object level to realize those solutions, to what you do with your PhD and other career moves. “Academia” isn’t the most useful unit of analysis in this project, which is partly why I wasn’t primarily trying to comment on it.
Regarding specific observations and personal experiences: I agree this post could be better with more things like this. Unfortunately, I don’t feel like including them. Open invite to DM me if you are thinking about a PhD or already in one and want to talk more, including about my strategy.
That makes sense. I like your approach of self-diagnosing what sort of resources you lack, then tailoring your PhD to optimize for them.
One challenge with the “work backwards” approach is that it takes quite a bit of time to figure out what problems to solve and how to solve them. As I attempted this planning for my own imminent journey into grad school, my views gained a lot of sophistication, and I expect they’ll continue to shift as I learn more. So I view grad school partly as a way to pursue the ideas I think are important/good fits, but also as a way to refine those ideas and gain the experience/network/credentials to stay in the game.
The “work backwards” approach is equally applicable to resource-gathering as finding concrete solutions to specific world problems.
I think it’s important for career builders to develop gears-level models of how a PhD or tenured academic career gives them resources + freedom to work on the world problems they care about; and also how it compares to other options.
Often, people really don’t seem to do that. They go by association: scientists solve important problems, and most of them seem to have PhDs and academic careers, so I guess I should do that too.
But it may be very difficult to put the resources you get from these positions to use in order to solve important problems, without a gears-level model of how those scientists use those resources to do so.
“Working backwards” type thinking is indeed a skill! I find it plausible a PhD is a good place to develop it. I also think there might be other good ways to practice it, for example, seeking out the people who seem best at this and trying to work with them.
+1 on this same type of thinking being applicable to gathering resources. I don’t see any structural differences between these domains.
I love how this is laid out and I’d love to see articles like this for other areas, if appropriate!
Thank you for the write-up. I wish I had this advice, and (more crucially) kept reminding myself of it, during my PhD. As you say, academic incentives did poison my brain, and I forgot about my original reasons for entering the programme. I only realised one month ago that it had been happening slowly; my brain is likely still poisoned, but I’m working on it.
I’m curious about your theory of change, if you have time to briefly write about it. You wrote that
and that you don’t think gunning for a faculty position is a good thing. What kind of job is the right one to “make scientific progress”, then? I thought that the best way to do that is to run a lab, managing a bunch of smart PhD students and postdocs, and steering them towards useful research directions.
My impression is that PIs manage as many or more people than someone at the equivalent seniority in industry, at least in machine learning, but that they have the freedom to set research priorities instead of having to follow a boss. (On the flip side, they have to pander to grant givers, but even so that seems to leave more freedom over research direction.)
In summary, what do you think is the kind of job where you can make the most scientific progress?
Appreciate your comment! I probably won’t be able to give my whole theory of change in a comment :P But a silly version of it might look like: “Just do the thing.”
So, what are the constituent parts of making scientific progress? Off the cuff, maybe something like:
1. You need to know what questions are worth asking / problems are worth solving
2. You need to know how to decompose these questions into sub-questions, iteratively, until a subset are answerable from the state of current knowledge
3. You need good research project management skills, to figure out what order to tackle these sub-questions in so you most quickly make progress toward the goal, which is where all the impact is
4. You need people with smart ideas to guess the answers to sub-questions and generate hypotheses
5. You need people to do or build things, like run experiments, write code, or fab physical objects
6. You need operations and logistics to turn money into materials and people, and to coordinate the materials and people
7. You need managers to foster productive environments and maintain healthy relationships
8. You need advisors to hold you accountable to the actual goal
9. You often need feedback loops with the actual goal, in case you’ve decomposed the problem incorrectly or something else in the system has gone awry
10. You need money
I’m making this up, but do you see what I mean?
Then my advice would be to figure out which subset of these are so constraining that you can’t start the business of doing the thing, and to solve those constraints, e.g. by cultivating instrumental resources like research ability. Otherwise, set yourself up with the subset of 1-10 that maximizes your likelihood of succeeding at the thing, and start doing the thing. Figure the rest out as you go.
It’s totally conceivable that an academic lab is the best place available to you. But I would want you to come to that conclusion after having thought hard about it, working backward from the actual goal.
Assuming the aspects of 1-10 which are research skills are covered, my object-level sense is that academia goes wrong on 1, 3, 5, 6, 7, 8, and 9.
All told my algorithm might be something like:
1. What other existing entities/groups look good on these inputs to the scientific progress machine? These might be existing companies, labs, random people on the internet, non-profits, whatever. This would also include looking for academic opportunities that look better on the above. Don’t think in made-up categories like “non-profit” when doing this; just figure out what it would look like to work at/with each entity to accomplish the goal.
2. What levers do I have to tweak things such that my list of existing places looks even better?
3. What would it look like for me to start my own enterprise to directly do the thing? What resources am I missing?
4. What opportunities do I have to pursue instrumental goods/resources that don’t look like doing the thing?
With a bias toward doing the thing, see which option looks like it will lead to the best outcomes: working with existing collections of people, pushing existing collections of people to be different in some way, starting your own thing, or gathering the instrumental resources you are missing.
Do that thing. Periodically reevaluate.
This probably isn’t very helpful, but I don’t know of any tricks! I could say more stuff about “industry” vs. “academia” but for the most part I think those conversations are missing the point unless you can drill way more into the specifics of a situation.
Good luck :) Remember that lots of other people are trying to figure the same kind of thing out. In my experience, they are the best people to learn from.
This is so well written, so thoughtful and so well structured.
This theme or motif has come up a few times. It seems important but maybe this particular point is not 100% clear to the new PhD audience you are aiming for.
For clarity, do you mean:
On an operational or gears level, avoid activity driven by (possibly distorted) publication incentives? E.g. do not pursue trends, fads, or undue authority, or perform busy work just to produce publications, perhaps because these produce bad habits, infantilization, and distractions.
or
Do not pursue publications because doing so tends to put you on an R1 research track in some undue way, perhaps by following the path of least resistance.
Also, note that “publications” can be so different between disciplines.
A top publication in economics during a PhD is rare, but would be worth roughly $1M in net present value over a career. It’s probably totally optimal to land such a publication, even in business, because of the signaling value.
Note that my academic school is way below you in academic prestige/rank/productivity. It would be interesting to know more about your experiences at MIT and what it offers.
Thanks Charles! Of your two options, I most closely mean (1). As evidence that I don’t mean (2), I wrote: “Optimize almost exclusively for compelling publications; for some specific goals these will need to be high-impact publications.”
My attempt to restate my position would be something like: “Academic incentives are very strong, and it’s not obvious from the inside when they are influencing your actions. If you’re not careful, they will make you do dumb things. To combat this, you should be very deliberate and proactive in defining what you want and how you plan to get it. In some cases this might involve pushing against publication incentives; in other cases it might involve optimizing really hard for following them. What you want to avoid is telling yourself the reason for doing something is A while the real reason is B, where B is usually something related to academic incentives. Publishing good papers is not the problem; deluding yourself is.”
Big +1 to this. Doing things you don’t see as a priority but which other people are excited about is fine. You can view it as kind of a trade: you work on something the research community cares about, and the research community is more likely to listen on (and work on) things you care about in the future.
But to make a difference you do eventually need to work on things you find impactful, so you don’t want to pollute your own research taste by unquestioningly absorbing incentives or others’ opinions.