Notably, my definition is a broader tent (in the context of metascience) than prioritizing science/metascience purely from an impartial EA perspective.
I hadn’t formulated it so clearly for myself, but at this stage I would say I’m using the same perspective as you. I think one would need a much clearer view of the field, its problems, and its potential to be able to do across-cause prioritization, and prioritization in the context of differential technological progress, in a meaningful way.
What I mean by this is that I think it’s plausible that there are immense dollar bills (tens of billions to trillions) lying on the floor in figuring out the optimal allocation of the above. In practice, I think a lot of these decisions are based on lore, political incentives, and intuition. I believe (though I could definitely be wrong) that there’s very little careful theorizing and even less empirical data.
I think this seems like a really exciting opportunity!
On your list of things that would be more vs. less valuable, I have a roughly similar view at this stage, though I might be thinking a bit more about institutional/global incentives and a bit less about improving specific teams (e.g. improving publishing standards vs. improving the productivity of a promising research group). But at this stage I have very little basis for ranking how pressing different issues are. I agree with your view that replication crisis work seems important but relatively less neglected.
I think it would be very interesting/valuable to investigate what impactful careers in meta-research or improving research could look like, and specifically to identify gaps where problems are not currently being addressed in a useful way.
I think one would need a much clearer view of the field, its problems, and its potential to be able to do across-cause prioritization, and prioritization in the context of differential technological progress, in a meaningful way.
Hmm, I’m not sure I agree.
Or at least, I think I’d somewhat confidently disagree that the ideal project aimed at doing “across-cause prioritisation” and “prioritisation in the context of differential (technological) progress” would look like more of the same sort of work done in this post.
I’m not saying you’re necessarily claiming that, but your comment could be read as either making that claim or as side-stepping that question.
To be clear, this is not to say I think this post was useless or doesn’t help at all with those objectives!
I think the post is quite useful for within-cause prioritisation (which is another probably-useful goal), and somewhat useful for across-cause prioritisation.
Though maybe it’s not useful for prioritisation in the context of differential progress.
I also really liked the post’s structure and clarity, and would be likely to at least skim further work you produce on this topic.
But I think for basically any cause area that hasn’t yet received much “across-cause prioritisation” research, I’d be at least somewhat and maybe much more excited about more of that than more within-cause prioritisation research. I explain my reasoning for a similar view in “Should marginal longtermist donations support fundamental or intervention research?”
And this cause area seems unusually prone to within-cause successes accidentally causing major harm (by advancing harmful types of progress, technological or otherwise), so this is perhaps especially true here.
And I think the ideal project to do that for metascience would incorporate some components like those in this post, but also components more explicitly focused on across-cause prioritisation, possible accidental harms, and differential progress.
(This may sound harsher than my actual views—I do think this post was a useful contribution.)