Provides useful language and imagery for the incentive structures and informational asymmetries that make some institutions resistant to honest assessment of their progress. The takeaway: often you can’t take a field’s self-assessment at face value, and should assume you’re operating in an epistemically adversarial environment. This has important implications for our collective appraisal of the rate of scientific progress.
We might call the academic-bureaucratic biomedical research apparatus “the blob”. Think of it as a distributed super-organism, like an amoeboid colony or a swarm of eusocial insects. The blob is massive and inscrutable, impenetrable past a certain depth if you’re an outsider.
Yet the blob isn’t static; it’s slippery and protean, constantly growing. Just as you feel you’ve found something to hold onto—a fragment of jargon or a promising research paper—the blob evolves and slips out of your grasp, and you’re left holding a mere artifact. Every year it extends its thousands of probing pseudopods further, enveloping more data, digesting them, and adding to the maze-like accretion that is Science.
But this constant motion, this incessant evolution, makes the blob a shimmering bright spot of progress in an otherwise mostly dark landscape of technological stagnation. Inscrutable though it may be, the blob is generating results.
Yes, on the margins there might be issues—the publish-or-perish mindset, the replication crisis, misaligned funding mechanisms, and slightly more acute issues like Eroom’s law—but these will eventually be solved.
And when they are, and the biomedical research community is no longer reined in by myopic grantors or encumbered by bureaucratic bloat, there will be a blossoming of research papers and a bearing of much biomedical fruit—the potential for which currently lies latent in the minds of brilliant scientists, yearning to be nourished. “Feed me, Seymour!”
The above description might seem, well… florid.
Suppose you didn’t want to take the blob at its word. As glossy as the covers of Nature and Science may be, you’re not convinced by the prima facie evidence for recent advances in biomedicine and have some innocuous questions that you’d like answered. For instance:
What do biomedical research scientists do day-to-day? What will the $50B requested by the NIH for 2023 be used for? Is the answer so multifarious and evolving as to be beyond lay understanding?
On what dimensions is biomedical research progressing? Are there quantifiable indicators that we could track over time—not just particular instances that people point to?
If the blob is progressing systematically in some direction, then toward what end? What does this accretion of knowledge amount to? What is the sausage that’s purportedly being made—and when will it be ready?
When trying to answer these questions, we find ourselves in an epistemically adversarial environment, which we can call “hyperbolic science”. It is defined by two mutually reinforcing facets:
Hyperbolic information asymmetries
The scientific research frontier is like an ever-expanding tree embedded in hyperbolic geometry. (This is evident in the etymological root of “science”: the Latin scientia derives from the Proto-Indo-European skei-, which means to “cut” or “split” and is the root of “schism” and “schizoid”.) No one person can be versed in all the frontiers of science, and therefore no one can grasp the entire body of knowledge; locally the blob may be navigable to insiders, but globally it is just as blob-like to them as it is to outsiders, an archipelago of frontiers that lie beyond their research horizon. This problem becomes fractally more acute with time, the surface area of the frontier increasing as it expands and as it becomes more detailed.
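To put a rough number on the geometric metaphor (an illustration added here, with $r$ standing in for the frontier’s “radius” and $v$ for a nominal expansion rate, neither drawn from the essay): in the hyperbolic plane, the boundary of a disk grows exponentially with its radius, whereas in the Euclidean plane it grows only linearly.

In a hyperbolic plane of curvature $K = -1$,
$$C_{\text{hyp}}(r) = 2\pi \sinh r \;\approx\; \pi e^{r} \quad (r \gg 1), \qquad \text{versus} \qquad C_{\text{euc}}(r) = 2\pi r.$$
So even if the frontier’s radius advances at a constant rate, $r(t) = vt$, the boundary to be surveyed grows like $e^{vt}$: exponentially more frontier per unit of advance.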
Hyperbolic incentives
Because the research frontier expands hyperbolically, researchers are the ones best positioned to assess the state of their own research areas. Yet you can’t simply ask them about progress along their particular research frontier and aggregate these evaluations, because all are conflicted by a principal-agent problem. If the fruit in their branch of the tree is rotten or the branch isn’t blossoming at all, they won’t tell you. At worst, they actively exaggerate the rate and quality of growth in their field, both exoterically (to lay outsiders) and esoterically (to grantors, colleagues, and prospective PhD students). At best, they are well-intentioned and simply don’t pay attention to these sorts of questions, instead choosing to “focus on the science”, a kind of bureaucratic blind spot (ego-preserving self-deception is often highly adaptive).
These incentives apply to their particular research area—there is intra-academic, zero-sum competition for a slice of the NIH budget, which one must win to fund one’s research and improve one’s odds of tenure—and to science as a whole: you’re unlikely to meet a researcher who supports shrinking the NIH budget, thinks less biomedical research would make the world a better place, or wants to pare back dead research branches, especially when the dead branch is their own field. (Threatening the livelihoods of those in your field is a good way to get kicked out of it.)
These growth narratives are reinforced by other parts of the extended biomedical blob: academic journals, the NIH, biopharmaceutical companies, biotech venture capitalists, and so on, whose existences are all predicated on continuing biomedical progress. Everyone is talking their book, and the books on which those books depend. Yet due to hyperbolic information asymmetries, no one person can independently verify all the claims their own work rests on, let alone someone else’s—invariably one must outsource some verification to trusted third parties. (Perhaps it’s networks of mutually reinforcing status deferral and mood affiliation all the way down.)
Thus, information asymmetries and structural incentives conspire to make the blob emergently adversarial toward critical assessment. Few can accurately assess the state of their own field, let alone the entire research enterprise, and those who can are almost always incentivized to exaggerate its health—and these problems worsen as the research frontier expands.
These problems aren’t new. Derek de Solla Price identified them over sixty years ago in one of the first books on academic bibliometrics, Science Since Babylon:
It seems also that we can no longer take the word of the scientists on the job. Their evaluation of the importance of their own research must also be unreliable, for they must support their own needs; even in the most ideal situation they can look only at neighboring parts of the research front, for it is not their own business to see the whole picture… The trouble seems to be that it is no man’s business to understand the general patterns and reactions of science as the economist understands the business world.
Given the current mood of biomedical optimism, we ought to reconsider de Solla Price’s warning.
Hyperbolic science could be symptomatic of a world in which biomedical progress is simply too great and complex for laypeople to keep up with—highly diffuse progress naturally leads to a feeling of diffuse optimism—and in which we will soon see the fruits of that progress.
But hyperbolic science also raises the possibility of something far more dire: the blob has created massive, systemic distortions in our collective appraisal of the rate of biomedical progress. What we took to be an expansive, shimmering bright spot of progress might actually be a mirage of flitting particularities, shrouding an atrophying tree of knowledge that is no longer bearing fruit. And as we pick the few remaining low-hanging fruit from other trees and look for new orchards to satisfy our hunger for growth, these distortions will have increasingly existential ramifications.
Thanks for this article. I found the following interesting:
What this says to me is that while we can count on scientists to do science, typically in an impressive manner, for very understandable human reasons we can’t count on them to reflect objectively on our relationship with science.
If true, this seems a real problem because of the great cultural authority which scientists have earned from their many technical accomplishments.
To illustrate, it’s natural for the public to look to, say, genetic engineering experts for commentary on society’s relationship with genetic engineering. We in the public understandably reason: the degreed experts have spent years studying this subject at a high level, so they must be the best source of information on this topic. And so long as the focus is purely technical, this is true.
The problem here is that the most important questions are not purely technical. If we wish to know how to do genetic engineering, the experts are the place to look for leadership. But if we wish to know whether we should do genetic engineering, the technical experts cannot be detached and objective, because they have an enormous personal investment in the question being answered in the affirmative.
To return to biomedical research, the following has always interested me. This highly rational scientific enterprise spends billions to trillions of dollars trying to keep us alive, based on exactly no proof that life is better than death. I’m not objecting, I just find this relationship between faith and reason eternally fascinating.
The one thing we know for sure is that we’re all going to die. From that fact we then proceed to label death as very bad, based on nothing other than completely uninformed wild speculation regarding the alternative to life. And then we stamp the entire faith-based operation with an “APPROVED BY THE AUTHORITY OF SCIENCE” label.
Not complaining. Just interested, that’s all.