What would better science look like?
In this post I discuss what the aim of improving science should be, and divide strategies for improvement into those that address effectiveness and those that address efficiency. I have recently received funding from the EA Infrastructure Fund to spend 10 months part-time on career and project exploration in this field. This post is a follow-up to my problem mapping for scientific research, and is also crossposted on my personal blog.
What is the purpose of scientific research?
During the coming year I will be exploring different career and project opportunities that could contribute to improving the scientific research system. As a first step, I attempt here to formulate more clearly what "better science" could look like. On an abstract level, there seem to be two competing worldviews: one is consequentialist and welfarist, while the other focuses on the intrinsic value of knowledge.
The first view can be phrased as follows:
“Better science is science that more effectively improves the lives of sentient beings.”
This is a view of science in the service of society. It goes especially well with applied research, and also seems to correspond with many research funding organizations' approaches. The justification for dedicating substantial resources (approximately 2% of global GDP) to scientific research comes from the expected social benefits in terms of health, economy, quality of life, and so on.
The second framing is more focused on the intrinsic value of knowledge:
“Better science is science that more effectively improves our understanding of the universe.”
This might appear closer to the approach of fundamental research, where the practical usefulness of eventual results is sometimes unclear. This second framing, with science in service of knowledge rather than of society, also fits better with ideals of scientific freedom and independence from political influence.
Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained. Meanwhile, completely ignoring societal values could also have absurd consequences as there would be no way to distinguish between meaningless, valuable, or even harmful knowledge.
I’m leaning towards some sort of pragmatic compromise here where the ultimate aim is the production of value in the welfarist sense, while “understanding the universe” remains an important instrumental goal. Even if we cannot predict the usefulness of new knowledge before we obtain it, it seems reasonable that advancing our understanding of the universe has a lot of potential to improve lives. How much we should prioritize fundamental research where there is not yet any apparent societal usefulness of the results is then an issue of balancing exploration and exploitation.
Given that we expect scientific research to improve the lives of sentient beings (over a long time horizon, not excluding exploratory work in new fields), in which ways could scientific research be improved? We could use strategies focusing on effectiveness, improving the direction in which science develops, and we could also work on improving the efficiency, or speed, of development.
Improved effectiveness
Particularly in applied research, it would be relevant to try to improve how effective research is at improving welfare. Improving effectiveness could be done by improving prioritization between different research areas (e.g. “What is the best resource allocation between health research and clean energy research?”) as well as prioritizing between strategies in a specific subarea (e.g. “What is the best resource allocation between research into perovskite solar cells versus plasmonic solar cells?”). In practice, this is often a concern of research funders aiming to direct funds to the projects that show most promise.
Effective (adj.) – Adequate to accomplish a purpose; producing the intended or expected result.
Another way of improving the effectiveness of specific projects would be by designing the research questions in a way that makes results more useful. This could for example be approached by increasing and improving collaboration between researchers and the intended end users. Close contact with end users both ensures that the research question is framed in a relevant way, and facilitates the communication and exploitation of results when the research has concluded.
Advances in scientific research could, however, also be a threat to welfare. Discussions regarding how gain-of-function research could cause pandemics, or the risks associated with the development of advanced artificial intelligence, illustrate this; another aspect of improving the effectiveness of research could therefore be to make scientific progress safer. One strategy would be to simply steer clear of any field that seems too risky, but this is complicated by the fact that the same piece of knowledge or information can often be applied both in useful and in dangerous ways. An interesting historical case illustrates this complexity: in the 1970s, a moratorium was placed on certain kinds of recombinant DNA research that appeared to be high-risk activities at the time. A more contemporary example would be the work of the Future of Humanity Institute on the safe development of artificial intelligence.
Improved efficiency
Most of the metascience movement appears to me to be focused on improving the efficiency of research. If improving effectiveness is about making sure research is exploring knowledge in the right direction, improving efficiency is about increasing the speed and decreasing the cost of exploration.
Efficient (adj.) – Performing or functioning in the best possible manner with the least waste of time and effort.
Improving the efficiency of scientific research could be done by, for example, improving methods and reporting to ensure better reproducibility of results, by improving peer review to increase quality standards for published results or by cultural improvements that increase productivity in a meaningful way. Initiatives of these kinds are often framed as improving the quality of research rather than the efficiency, but I think it’s reasonable to think that the main long-term effect of more high quality research would be more efficient progress. Improved quality could also contribute to increased effectiveness, but I think that effect would be less significant than the contribution to higher efficiency.
Intuitively, increasing efficiency can seem uncontroversial, but there is criticism that some of these initiatives may be counterproductive. The long-term impact on welfare caused by more efficient research is also difficult to forecast. It is easy to come up with examples where speedy progress could bring significant gains: diseases could be cured or prevented, climate change could be slowed, societies could be adapted to become more resilient. On the other hand, speedier progress could also increase existential risks and threaten welfare. Particularly in fundamental research, where the implications of new results are unpredictable, it is difficult to know if and how much increased efficiency would contribute to welfare.
I’ve heard arguments that even if we wouldn’t want to speed up scientific progress, higher efficiency would decrease waste and free up resources that could be used for something other than research (that would increase welfare). As much as I hate seeing resources go to waste, I think such a scenario seems like an unlikely development in the general case. I don’t see how resources would be allocated (presumably on a political level) away from research as a natural consequence of improvements to research efficiency. Perhaps I could imagine this happening as a result of a specific targeted effort where gains in efficiency were somehow linked to reallocation of resources, but I’m unsure what that would look like.
Differential development of different fields
Scientific research is a very diverse activity, and different fields of research differ from each other in terms of potential benefits, potential risks, and the bottlenecks that limit progress. New practices are often picked up in a patchy way; for example, the practice of publishing preprints was introduced in physics in 1991, but it is only recently that it has become more common in chemistry research.
This tendency of differential development of different fields suggests that an attempt to improve the scientific research system could benefit from targeting a specific area of research, rather than scientific research in general. This seems especially true for initiatives targeting efficiency or risk mitigation. For initiatives to improve effectiveness by influencing funding prioritizations a more “global” approach could perhaps be feasible in some cases, though it might still be practically more difficult than a targeted effort.
Thanks for this post, I found it interesting.
One thing I want to push back on is the way you framed applied vs fundamental research and how to decide how much to prioritise fundamental research. I think the following claims from the post seem somewhat correct but also somewhat misleading:
I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that “understanding the universe” can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much “understanding the universe” helps and whether it instead harms depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist).
So I wouldn’t frame it primarily as exploration vs exploitation, but as trying to predict how useful/harmful a given area of fundamental research, or fundamental research by a given actor, will be. And, crucially, that prediction need not be solely based on detailed, explicit ideas about what insights and applications might occur and how—it can also incorporate things like reference class forecasting. Quick, under-justified examples:
Fundamental research in some areas of physics and biology has seemed harmful in the past, so further work in those areas may be more concerning than work in an average area.
Meanwhile, fundamental-ish research by Nick Bostrom has seemed to produce many useful insights in the past, which we’ve gradually developed better ideas about how to actually apply, so further work of that sort by him may be quite useful, even if it’s not immediately obvious what insights will result or how we’ll apply them.
(See also Should marginal longtermist donations support fundamental or intervention research?)
And “Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained” reminds me of critiques that utilitarianism would actually be counterproductive on its own terms, because constantly thinking like a utilitarian would be crippling (or whatever). But if that’s true, then utilitarianism just wouldn’t recommend constantly thinking like a utilitarian.
Likewise, if it is the case that “Steering science too much [based on explicitly predictable-in-advance paths to] societal gains might be counterproductive”, then a sophisticated approach to achieving societal gains just wouldn’t actually recommend doing that.
(On those points, I recommend Act utilitarianism: criterion of rightness vs. decision procedure and Naive vs. sophisticated consequentialism.)
My thought is that the exploration vs exploitation issue remains, even if we also attempt to favour the areas where progress would be most beneficial. I am not really convinced that it’s possible to make very good predictions about the consequences of new discoveries in fundamental research. I don’t have a strong position/belief regarding this but I’m somewhat skeptical that it’s possible.
Thanks for the reading suggestions, I will be sure to check them out – if you think of any other reading recommendations supporting the feasibility of forecasting consequences of research, I would be very grateful!
This is more or less my conclusion in the post, even if I don’t use the same wording. The reason why I think it’s worth mentioning potential issues with a (naïve) welfarist focus is that if I’d work with science reform and only mention the utilitarian/welfarist framing, I think this could come across as naïve or perhaps as opposed to fundamental research and that would make discussions unnecessarily difficult. I think this is less of a problem on the EA Forum than elsewhere 😊
Addition: When reading your post Should marginal longtermist donations support fundamental or intervention research? I realize that we maybe draw the line a bit differently between applied and fundamental research—examples you give of fundamental research there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research I imagine more things like research on elementary particles or black holes. This difference could explain why we might think differently about whether it’s feasible to predict the consequences of fundamental research.