I reposted this because I expect it to be a useful reference for people discussing institutional improvements and meta-science. (And because it only seems to exist on one other website.)
Epistemic status: Brainstorming out loud, and my conclusion is still that meta-science seems promising based on scale alone (despite these quibbles)
One Devil's Advocate response to Bostrom is that producing meta-scientific tools is difficult.
If there were a pill you could take to boost cognitive performance by 1% with no drawback, perhaps most scientists would find a way to get it (if it were legal or otherwise easily obtained in their countries). But I've more often seen this idea used in the context of software production, and in that case, we have real-world examples to work from:
What are the ten best software projects ever, in terms of their effect on scientific productivity? (Actually, let's say "most efficient", so that there's no question of whether e.g. Word counts.)
How much more "productive" has the average scientist become as a result of those ten projects? (Consider that some of them might have been made obsolete by other projects.)
How well does this "productivity" translate into impact? If a paper got written slightly faster, how did the scientist use that time? If they could run twice as many statistical analyses, did any of them tell us something useful? If an extra paper was produced, did anyone benefit?
How much time has gone into developing software tools for scientists that saw little to no use?
You could apply these same points to any other project meant to improve scientific productivity.
When the stakes run as high as "one person having the impact of thousands of scientists", all of these concerns (and many others I haven't listed) may still leave meta-science as an extremely strong cause area. And not all meta-scientific interventions are about boosting productivity; there's also e.g. Registered Reports and other projects meant to improve scientific quality. (Though these might have other issues; replication projects can fall victim to the generalizability crisis.)
As a non-scientist, I might also be underestimating the best scientific tools; maybe we wouldn't have a COVID vaccine yet if it weren't for a couple of well-placed Python packages (or something).