(I’m a colleague of david_reinstein, one of the authors of this post. All opinions are my own.)
I/we would love to get input on this mapping
Intro
I’ve incidentally started thinking about metascience-related questions in the last 3 months, as the topic has independently come up in two different projects I was involved in. I think the paradigm I was operating out of is somewhat different from the explicit and implicit mapping here, so I’m sharing it here in the hopes that there can be some useful cross-fertilization of ideas. Note that I’ve spent very little time thinking about this (likely <10 hours total), and even less time reading papers from others in this field.
Perspective/Toy Model/Paradigm
The perspective I currently have* is to view “research/science” from 10,000 steps up and consider research as:
an industry that converts $s and highly talented people into (eventually) actionable insights
And then an important follow-up question here is:
How can we make the industry of research more efficient?
Paradigm Scope/Limitations
Notably, my definition is a broader tent (in the context of metascience) than prioritization of science/metascience entirely from a purely impartial EA perspective. From an EA perspective, we’d also care about:
Across-cause prioritization: Whether the marginal $ spent on research is better spent elsewhere
Prioritization in the context of differential technological progress: Whether we’re correctly differentially progressing research
that’s generically good for the long-term future over stuff that’s neutral or bad
that’s contingently good for the future given the technologies currently available (in other words, developing technologies in the right order).
I’m deliberately using a lower bar (“research as an industry that converts $s and highly talented people into (eventually) actionable insights”) than EA because I think it better captures the claimed ethos of research and researchers.
However, even with this lower bar, I think having this precise conceptualization (we have an industry that converts resources into actionable insights, how can we make the industry more efficient?) helps us prioritize a little within metascience.
Potentially Valuable: Operations Research of Research Outputs?
Things I would currently guess are quite valuable + understudied, from an inside view:
At a high level, anything that draws a causal diagram between the inputs of research (e.g. $s, highly talented people) and the outputs of research
for example, studies on how to produce more research.
Hiring assessment literature for what makes for top {graduate students, postdocs, junior academics}
More qualitative/quantitative understanding of how truly excellent research teams work
How researchers should be organized
mostly thinking about conceptual organization, but it’s plausible that optimal physical space/layout stuff has high returns as well.
Management practices for researchers.
In the context of EA nonprofit research orgs and other think tanks, actual research managers.
In the academic context, advisor/advisee relationships.
(note that I have not read enough of the literature to be confident that specific claims about neglectedness are true)
I’m not exactly sure what the umbrella of concepts I’ve gestured to above should be called, but roughly what I’m interested in is
meta-science specifically of research productivity,
alternatively,
operations research/industrial organization of research outputs.
What I mean by this is that I think it’s plausible that there are immense (tens of billions to trillions of) dollar bills lying on the floor in figuring out the optimal allocation of the above. I think a lot of these decisions are, in practice, based on lore, political incentives, and intuition. I believe (though I could definitely be wrong) that there’s very little careful theorizing and even less empirical data.
Other Potentially Valuable Things
Things I also think are very valuable + understudied, but that feel more speculative and that I have an even less firm inside view on:
Disruptive Open Science stuff that tries to solve the real problem (rather than dance around it)
e.g. Aaron Swartz
someone trying to replicate Sci-Hub now that Sci-Hub isn’t accepting new submissions.
note that this is of questionable legality in most jurisdictions
Figuring out models of science outside of academia
e.g. figuring out alternatives to journals, like Distill (I think some of the authors of this post are working on this?)
Brainstorming incentive mechanisms + accountability structures for science funders
By default I expect the incentives for science funding to incorporate some combination of complacency/CYA/risk minimization
Note that I have not looked into this at all
Maybe research on good ways to incentivize science via funding prizes etc.
Though I can totally buy that this is already addressed in the mechanism design literature
But to the extent it’s addressed but not implemented for reasons other than “human nature” or “political impossibility”, we can do empirical research on adoption/implementation
Maybe something about impact certificates is related
Research on science communication in a way that’s output-focused (rather than just a “nice thing to do”)
Comparatively Less Interesting/Useful Things Within Metascience
I don’t think anything within “research on research” is obviously oversubscribed (in the sense that we as a society should clearly devote fewer resources to the marginal metascience project in that domain, compared to marginal resources on random science projects).
Nonetheless, here are things I would guess are less marginally valuable than the things I’m personally interested in within metascience*:
More papers on DEI or demographics of scientists in a way that isn’t trying to track outputs
Stuff that tries to define/redefine science
Replication crisis stuff (I think this is a serious problem that is definitely worth people fixing, but relatively less neglected as of 2021).
Additional EA Work
In addition to the points I’ve identified above, I’d also be excited to see more work on:
More paradigm/conceptualization stuff like what’s done in this post and my comment.
In particular, I don’t think my ontology is quite ready for “prime time” yet, and I’d anti-recommend readers doing a bunch of active work based on my framework without thinking through their own frameworks first.
Scoping out cross-cause comparison between meta-science and other EA objectives
To answer the high-level question of “whether the marginal $ spent on research is better spent elsewhere”, we may benefit from some clarity on what a unit of metascience/science output looks like, and how much we value a unit of that over other goals (e.g. x-risk probability reduction or lives saved).
Scoping out/mapping differential technological progress in more detail.
Both your post and especially my comment presuppose that technological/scientific progress is unambiguously good (if sometimes inefficient or too expensive). But I think my all-things-considered view is deeply confused here, so more clarity would be helpful.
I would be very happy to see careful mappings of specific futures and projections of what technological advances we’d like to see, in what order.
I believe existing work on differential progress (at least in EA) is quite simple/high-level, making it often hard to assess whether specific metascience interventions are even good for the long-term future.
*which I’m not satisfied with, but which I’m much happier with than my previous, even more confused thoughts
Meta: I strung together a bunch of Slack (etc.) comments in a hopefully coherent/readable way, and then realized my comment was too long, so I added approximate section headings in case they’re helpful.
Requesting feedback:
1. Is the above comment in fact coherent and readable?
2. Are the headings useful for you? Or are they just kind of annoying/jarring and didn’t actually add much useful structure?
FWIW:
I found the above comment coherent, readable, and useful as a complementary framework to the original post (which I also liked)
Two things I think this comment added, and that I’d ideally have liked to see the original post more explicitly note, are that neither the comment nor the post discussed the important matters of:
“Across-cause prioritization: Whether the marginal $ spent on research is better spent elsewhere
Prioritization in the context of differential technological progress: Whether we’re correctly differentially progressing research
that’s generically good for the long-term future over stuff that’s neutral or bad
that’s contingently good for the future given the technologies currently available (in other words, developing technologies in the right order).”
(I think the post and comment already covered a lot of important ground, and it’s ok that they didn’t address these things, but these things are crucial considerations here and so their omission should be very clearly noted.)
I found it useful that the section headings broke the comment up into chunks
I think the actual words of the section headings didn’t matter / weren’t helpful (though nor were they harmful)
It would’ve been equally fine from my perspective to use other words, just break things up with “—”, or organise the comment as bullet points and let a non-bulleted line or minimally indented line signify the start of a new “chunk”
(I work at the same org as Linch and David Reinstein, but all opinions here are my own, of course, and I’d be happy to disagree with them publicly if indeed I did disagree.)
Notably, my definition is a broader tent (in the context of metascience) than prioritization of science/metascience entirely from a purely impartial EA perspective.
I hadn’t formulated it so clearly for myself, but at this stage I would say I’m using the same perspective as you. I think one would have to have a much clearer view of the field/problems/potential to be able to do across-cause prioritization and prioritization in the context of differential technological progress in a meaningful way.
What I mean by this is that I think it’s plausible that there are immense (tens of billions to trillions of) dollar bills lying on the floor in figuring out the optimal allocation of the above. I think a lot of these decisions are, in practice, based on lore, political incentives, and intuition. I believe (though I could definitely be wrong) that there’s very little careful theorizing and even less empirical data.
I think this seems like a really exciting opportunity!
On your listing of things that would be valuable vs less valuable, I have a roughly similar view at this stage though I think I might be thinking a bit more about institutional/global incentives and a bit less about improving specific teams (e.g. improving publishing standards vs improving the productivity of a promising research group). But at this stage, I have very little basis for any kind of ranking of how pressing different issues are. I agree with your view that replication crisis stuff seems important but relatively less neglected.
I think it would be very interesting/valuable to investigate what impactful careers in meta-research or improving research could be, and specifically to identify gaps where there are problems that are not currently being addressed in a useful way.
I think one would have to have a much clearer view of the field/problems/potential to be able to do across-cause prioritization and prioritization in the context of differential technological progress in a meaningful way.
Hmm, I’m not sure I agree.
Or at least, I think I’d somewhat confidently disagree that the ideal project aimed at doing “across-cause prioritisation” and “prioritisation in the context of differential (technological) progress” would look like more of the same sort of work done in this post.
I’m not saying you’re necessarily claiming that, but your comment could be read as either making that claim or as side-stepping that question.
To be clear, this is not to say I think this post was useless or doesn’t help at all with those objectives!
I think the post is quite useful for within-cause prioritisation (which is another probably-useful goal), and somewhat useful for across-cause prioritisation
Though maybe it’s not useful for prioritization in the context of differential progress
I also really liked the post’s structure and clarity, and would be likely to at least skim further work you produce on this topic.
But I think for basically any cause area that hasn’t yet received much “across-cause prioritisation” research, I’d be at least somewhat and maybe much more excited about more of that than more within-cause prioritisation research.
I explain my reasoning for a similar view in Should marginal longtermist donations support fundamental or intervention research?
And this cause area seems unusually prone to within-cause successes being majorly accidentally harmful (by causing harmful types of progress, technological or otherwise), so this is perhaps especially true here.
And I think the ideal project to do that for metascience would incorporate some components that are like what’s done in this post, but also other components more explicitly focused on across-cause prioritisation, possible accidental harms, and differential progress.
(This may sound harsher than my actual views—I do think this post was a useful contribution.)