There are three ways to contribute to scientific progress. The direct way is to conduct a good scientific study and publish the results. The indirect way is to help others make a direct contribution. Journal editors, university administrators and philanthropists who fund research contribute to scientific progress in this second way. A third approach is to marry the first two and make a scientific advance that itself expedites scientific advances. The full significance of this third way is commonly overlooked.
It is, of course, widely appreciated that certain academic contributions lay the theoretical or empirical foundations for further work. One reason why a great scientist such as Einstein is celebrated is that his discoveries have enabled thousands of other scientists to tackle problems that they could not have solved without relativity theory.
Yet even this deep and beautiful theory is, in one sense, very narrow. While relativity is of great help in cosmology and some other parts of physics, it is of little use to a geneticist, a paleontologist, or a neuroscientist. General relativity theory is therefore a significant but not a vast contribution to the scientific enterprise as a whole.
Some findings have wider applicability. The scientific method itself — the idea of creating hypotheses and subjecting them to stringent empirical tests — is one such. Many of the basic results in statistics also have very wide applicability. And some scientific instruments, such as the thermometer, the microscope, and the computer, have proved enormously useful over a wide range of domains. Institutional innovations — such as the peer‐reviewed journal — should also be counted.
Those who seek the advancement of human knowledge should focus more on these kinds of indirect contribution. A “superficial” contribution that facilitates work across a wide range of domains can be worth much more than a relatively “profound” contribution limited to one narrow field, just as a lake can contain a lot more water than a well, even if the well is deeper.
No contribution would be more generally applicable than one that improves the performance of the human brain. Much more effort ought to be devoted to the development of techniques for cognitive enhancement, be they drugs to improve concentration, mental energy, and memory, or nutritional enrichments of infant formula to optimize brain development. Society invests vast resources in education in an attempt to improve students’ cognitive abilities. Why does it spend so little on studying the biology of maximizing the performance of the human nervous system?
Imagine a researcher invented an inexpensive drug that was completely safe and improved all-round cognitive performance by just 1%. The gain would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited from the drug, the inventor would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists. Each year the invention would amount to an indirect contribution equal to 100,000 times what the average scientist contributes. Even an Einstein or a Darwin at the peak of their powers could not make such a great impact. Meanwhile, others could benefit from being able to think better too, including engineers, schoolchildren, accountants, and politicians.
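The back-of-the-envelope arithmetic behind this claim can be made explicit. This is a minimal sketch using the essay's own illustrative figures (10 million scientists, a 1% boost), not empirical estimates:

```python
# Sketch of the argument: a small uniform boost applied to many
# scientists is equivalent in total output to adding many full-output
# "average" scientists.
num_scientists = 10_000_000  # essay's rough count of scientists worldwide
boost = 0.01                 # 1% all-round cognitive improvement

# Total extra output, measured in average-scientist equivalents.
equivalent_scientists = num_scientists * boost
print(int(equivalent_scientists))  # 100000
```

The same multiplication shows why breadth dominates depth here: the equivalent-scientist figure scales linearly with the number of people reached, so a tiny boost across a huge population can outweigh a large boost for a few.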
This example illustrates the enormous potential of improving human cognition by even a tiny amount. Those who are serious about seeking the advancement of human knowledge and understanding need to crunch the numbers. Better academic institutions, methodologies, instrumentation, and especially cognitive enhancement are the fast tracks to scientific progress.
I reposted this because I expect it to be a useful reference for people discussing institutional improvements and meta-science. (And because it only seems to exist on one other website.)
Epistemic status: Brainstorming out loud. My conclusion is still that meta-science seems promising based on scale alone, despite the quibbles below.
One Devil’s Advocate response to Bostrom is that producing meta-scientific tools is difficult.
If there were a pill you could take to boost cognitive performance by 1% with no drawback, perhaps most scientists would find a way to get it (if it were legal or otherwise easily obtained in their countries). But I’ve more often seen this idea used in the context of software production, and in that case, we have real-world examples to work from:
- What are the ten best software projects ever, in terms of their effect on scientific productivity? (Actually, let’s say “most efficient”, so that there’s no question of whether e.g. Word counts.)
- How much more “productive” has the average scientist become as a result of those ten projects? (Consider that some of them might have been made obsolete by other projects.)
- How well does this “productivity” translate into impact? If a paper got written slightly faster, how did the scientist use that time? If they could run twice as many statistical analyses, did any of them tell us something useful? If an extra paper was produced, did anyone benefit?
- How much time has gone into developing software tools for scientists that saw little to no use?
You could apply these same points to any other project meant to improve scientific productivity.
When the stakes run as high as “one person having the impact of thousands of scientists”, all of these concerns (and many others I haven’t listed) may still leave meta-science as an extremely strong cause area. And not all meta-scientific interventions are about boosting productivity; there’s also e.g. Registered Reports and other projects meant to improve scientific quality. (Though these might have other issues; replication projects can fall victim to the generalizability crisis.)
As a non-scientist, I might also be underestimating the best scientific tools; maybe we wouldn’t have a COVID vaccine yet if it weren’t for a couple of well-placed Python packages (or something).