Translational Science and the “Valley of Death”
Link post
Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.
As we’ve looked for potential gaps in the world of scientific research funding—focusing for now on life sciences—we’ve come across many suggestions to look at the “valley of death” that sits between traditional academic research and industry research. Speaking very broadly, the basic idea is that:
The world of life sciences research has become increasingly complex, with a widening gulf between traditional academic research—which aims at uncovering fundamental insights—and industry work, which is focused on developing drugs and other marketable products.
There is a lot of work that could be done on figuring out how to apply insights from academia and therefore close this gap. However, this “translational science” tends not to fit well within the traditional academic ecosystem—perhaps because it focuses on useful applications rather than on intellectual novelty—and so may be under-supported.
As a result, the world is becoming increasingly inefficient at translating basic research into concrete applications, and this explains why drug development has seemingly been slowing despite increasing expenditures on biomedical research (though recent data suggests that this trend may be changing).
For examples of this basic argument, see Translational Research: Crossing the Valley of Death (Nature 2008) and Helping New Drugs Out of Research’s ‘Valley of Death’ (New York Times 2011). In particular, the Nature article contains a pair of charts giving a rough illustration of two basic trends that may represent the causes and consequences of the growing “valley of death”: (a) rising government expenditures on research, increasingly supporting pure academics as opposed to medical practitioners, and (b) declining output of new drugs despite rising pharmaceutical R&D expenditures. (As noted above, more recent data may indicate that these trends are changing.)
We find this theory extremely challenging to assess for several reasons. One is that there doesn’t appear to be any one clear definition of “translational science” or of the “valley of death,” and some “translational science” seems quite well-suited to industry—to the point where it’s not entirely clear why we should think of it as a candidate for philanthropic or government funding at all. Another is that there has been growing interest in the issue over the last decade, including the 2011 debut of NCATS, a new institute at NIH dedicated to translational science; it’s hard to say whether translational science still represents one of the main “gaps” in the existing system.
Finally, there are other strong explanations for the observed decline in pharmaceutical output. The most comprehensive article I’ve seen on the subject names multiple possible explanations for the decline, many having to do with regulatory issues as well as the inherent challenges of improving on already-available drugs. The “valley of death,” as outlined above, doesn’t figure prominently in its account.
I am skeptical of some of the arguments people have made for the importance of translational science. These arguments often do not distinguish between different possible definitions of “translational science,” and often do not make a strong case that nonprofit funding (as opposed to industry funding) is what’s needed. In addition, it seems quite possible to me that the goals of promoting “translational science” might be better served by policy change (on regulatory and intellectual property law, for example) than by scientific research. With that said, I think the idea of translational science is worth keeping in mind, and that certain kinds of research in this category could be under-invested in because they do not fit cleanly into an academic or for-profit framework.
The rest of this post will:
List several different definitions of “translational science” that I’ve come across, noting that in some cases it isn’t clear why a proposed sort of research is a fit for the nonprofit as opposed to for-profit world. More
Briefly discuss the recent creation of the U.S. government’s National Center for Advancing Translational Sciences (NCATS). More
List some other potential reasons for the decline in pharmaceutical output, which may point to solutions outside of “translational science.” More
Five different definitions of “translational science”
The Nature article on translational science states, “Ask ten people what translational research means and you’re likely to get ten different answers.” Here I give five definitions I’ve come across that seem quite distinct from each other—particularly in terms of what they imply about the appropriateness of nonprofit funding.
1. Not-for-profit preclinical research. “Preclinical research” here refers to categories D-E (mostly E) from my previous post on different phases of scientific research. A possible new medical treatment is often first tested “in vitro”—in a simplified environment, where researchers can isolate how it works. (For example, seeing whether a chemical can kill isolated parasites in a dish.) But ultimately, a treatment’s value depends on how it interacts with the complex biology of the human body, and whether its benefits outweigh its side effects. Since testing with human subjects is extremely expensive and time-consuming, it can be valuable to first test and refine possible treatments in other ways, including animal testing.
The idea of carrying out this kind of work outside of industry—both in vitro screening to identify potential new medical technologies, and other tests to improve estimates of their promise—appears to be one of the most common definitions of translational research.
The Nature article states “For basic researchers clutching a new prospective drug, it might involve medicinal chemistry along with the animal tests and reams of paperwork required to enter a first clinical trial … In some sense much translational research is just rebranding — clinical R&D by a different name.”
The NYT article states, “For a discovery to reach the threshold where a pharmaceutical company will move it forward what’s needed is called ‘translational’ research — research that validates targets and reduces the risk. This involves things like replicating and standardizing studies, testing chemicals (potentially millions) against targets, and if something produces a desired reaction, modifying compounds or varying concentration levels to balance efficacy and safety (usually in rats). It is repetitive, time consuming work — often described as ‘grunt work.’ It’s vital for developing cures, but it’s not the kind of research that will advance the career of a young scientist in a university setting.”
The examples of translational research listed by the Science Translational Medicine journal seem to fit this basic framework, as does much of the activity described in the most recent annual report for NCATS (the recently created government institute focused on translational science).
I don’t feel that there’s a clear case for supporting this kind of work with nonprofit (government or philanthropic) funds. Unlike much basic research, this sort of work seems generally to have a very specific medical application in mind, and I believe that companies are often able to monetize the value created by new technologies they develop (especially drugs). Therefore, when looking at this kind of “translational science,” I think it is fair to ask: “If this research is generating more expected value than it costs, why isn’t industry investing in it? Why the need for nonprofit funds?”
There are a few possible answers. One is that this kind of research may have positive expected value, but it is too risky for any one investor to take on—even the large companies that consider investing in it. This may be true, but I’ve rarely seen it spelled out by comparing the level of risk in particular kinds of research to the level of risk that various industry players are likely to bear. In addition, if risk is the key issue, this doesn’t necessarily call for a nonprofit solution. An economics-and-finance-focused group at MIT has proposed that a large enough for-profit fund—perhaps made possible via financial engineering—could result in much more investment in this type of research. This group appears to be working on a collaboration with NCATS. I am unsure about whether (and if so, for what diseases) financial engineering could ever turn a set of biomedical research investments (which I believe will generally have fairly correlated odds of success) into a high-grade-bond-quality investment, but I think it is an interesting approach.
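To illustrate why correlated odds of success matter here, the following toy Monte Carlo sketch compares the risk of a pooled portfolio of research projects with and without a shared risk factor. All parameters (success probability, payoff, pairwise correlation) are hypothetical illustrations of my own, not figures from the MIT group's proposal, and `portfolio_sd` is a helper written for this sketch:

```python
import random
from statistics import NormalDist, pstdev

def portfolio_sd(n_projects, rho, p_success=0.05, payoff=25.0, cost=1.0,
                 n_sims=4000, seed=0):
    """Std. dev. of per-project net return for a pooled research portfolio.

    Each project costs `cost` and pays `payoff` with probability `p_success`
    (expected net return per project: 0.05 * 25 - 1 = 0.25, i.e. positive).
    Successes are correlated through a single shared factor (a Gaussian
    copula with latent pairwise correlation `rho`). Hypothetical numbers.
    """
    rng = random.Random(seed)
    cutoff = NormalDist().inv_cdf(p_success)   # latent threshold for success
    a, b = rho ** 0.5, (1.0 - rho) ** 0.5
    returns = []
    for _ in range(n_sims):
        shared = rng.gauss(0.0, 1.0)           # e.g. a common scientific bet
        hits = sum(a * shared + b * rng.gauss(0.0, 1.0) < cutoff
                   for _ in range(n_projects))
        returns.append((hits * payoff - n_projects * cost) / n_projects)
    return pstdev(returns)

sd_independent = portfolio_sd(n_projects=150, rho=0.0)   # risk pools away
sd_correlated = portfolio_sd(n_projects=150, rho=0.3)    # much risk remains
```

With independent projects, pooling 150 of them shrinks per-project risk roughly like 1/sqrt(n); with even moderately correlated scientific risk, the portfolio's volatility plateaus well above zero no matter how many projects are added, which is one reason turning such a pool into a bond-grade security may be difficult for some disease areas.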
There are other possible answers to the question. Perhaps industry can’t fully monetize the benefits its products bring, for reasons including the fact that (a) there may be many beneficiaries who can’t afford to pay (and don’t have insurance for paying) full price; (b) patents on medical products eventually expire. Taking existing health care and intellectual property law as a given, this could serve as some defense of investing nonprofit funds in “industry-style” research. I haven’t explicitly seen this argument made anywhere, except in cases where a disease has a clearly disproportionate impact on very low-income people.
In my limited readings on translational science, I’ve felt that this basic issue—the question of why we ought to support research with nonprofit funds when it appears to be a fairly good fit for industry—is rarely addressed.
2. Research on public goods—such as new tools and techniques—for preclinical research. The 2012-2013 annual report from NCATS cites several projects aimed at developing generally useful tools and insights that might be taken up by industry for a broad variety of purposes. For example, improving general methods for predicting how toxic a drug will end up being (page 7). In cases where such research aims to release public insights that others can build on, the case for a nonprofit model seems stronger than with the above category (targeted preclinical work with more specific aims).
3. Improving communication between clinical and academic professionals, via multidisciplinary groups as well as multidisciplinary career tracks. The idea here is that academics might do more useful research if they had more observations about how medical care works in practice—not only in terms of understanding the greatest needs, but also in terms of potentially drawing scientific inspiration from observing the effects of treatments on patients. It could be argued that there were more medical breakthroughs in the past, before academic biology and clinical medicine became as separated as they are today. A related idea is that it might be productive to provide academics with more support in understanding market demand for the kinds of technologies they’re working toward, via market research, competition analysis, etc.
The Nature article states,
Back in the 1950s and 60s, basic and clinical research were fairly tightly linked in agencies such as the NIH. Medical research was largely done by physician–scientists who also treated patients. That changed with the explosion of molecular biology in the 1970s. Clinical and basic research started to separate, and biomedical research emerged as a discipline in its own right, with its own training … Science and innovation have become too complex for any nostalgic return to the physician–scientist on their own as the motor of health research. Reinventing that culture is therefore the focus of the CTSCs [CTSCs are centers supported by NCATS] in the form of larger, multidisciplinary groups, including both basic scientists and clinicians, but also bioinformaticians, statisticians, engineers and industry experts. Zerhouni says he expects them to be breeding grounds for a new corps of researchers who will effectively stand on the bridge and help others across.
This issue was a major focus of a 2000 roundtable on clinical research as well.
4. Conducting academic research in the “style” of industry research. The NYT article highlights research-focused nonprofits that are “intensely goal-directed and collaborative; they see the creation of new cures as a process that needs to be managed; and they bring a sense of urgency to the task.” The Nature article mentions that CTSCs (the same NCATS-supported centers discussed above) will evaluate scientists “with business techniques, such as milestones and the ability to work in multidisciplinary groups, rather than by their publications alone.” The focus on collaboration and setting specific goals seems conceptually distinct from a focus on the preclinical phases of research, though I’ve generally seen the two side by side in discussions of translational science.
5. Supporting and improving clinical trials. Clinical trials (category F from my previous post on different phases of scientific research) are generally the most expensive part of developing new medical technologies, and they are traditionally paid for mostly by industry. NCATS reports (page 10) working to improve their cost-effectiveness and usefulness in a variety of ways, including improving data sharing and recruitment of participants: “investigators work together on data sharing, multisite trial regulatory hurdles, patient recruitment, communication and other functional areas of research to enhance the efficiency and quality of clinical and translational research … the University of California Research eXchange (UC ReX) Data Explorer is a secure, online system that enables cross-institution queries of clinical aggregate data from 12 million de-identified patient records derived from patient care activities.”
The recent creation of NCATS
The National Center for Advancing Translational Sciences (NCATS) was established in December 2011, making it “the newest of 27 Institutes and Centers (ICs) at the National Institutes of Health (NIH).” Its annual budget is in the range of $600 million (page 4). Going over its 2012-2013 annual report, I note quite a broad variety of activities, seemingly including all five of the categories described above (note that it spends over $400 million per year (page 5) on clinical research centers, which I believe are the same as the centers referred to under #3 and #4 from the previous section). NCATS also appears to work on policy improvement (e.g., regulation and intellectual property law—see page 22). It appears to pay special attention to rare diseases (pages 13-16), though the reasons for this are not obvious to me.
It appears to me that the creation of NCATS was met with some negative reaction from the scientific community, as evidenced by three posts (1, 2, 3) by chemist Derek Lowe. This reaction appears to stem partly from a perceived vagueness of mission and partly from fears of diverting funding from other science.
Most discussion I’ve seen of the “valley of death” and need for translational science pre-dates the creation of NCATS. It is unclear to what extent the creation of NCATS has addressed the relevant gaps.
I should also note that there are longer-running NIH mechanisms for supporting translational science, such as SBIR and STTR grants for “domestic small businesses [that] engage in R&D that has a strong potential for technology commercialization.”
Why has pharmaceutical productivity been declining in recent years?
Advocates of translational science often point to the seeming paradox of declining pharmaceutical productivity despite an ever-growing world of academic research (example). It appears that the decline in productivity has been real, and concerning (though there is also preliminary data that the situation may be changing). However, the decline has multiple possible explanations. The most useful-seeming paper I’ve seen on this topic is Scannell et al. 2012, and I highly recommend it to those interested in the subject. A brief summary:
Over the past 60 years, “R&D efficiency, measured simply in terms of the number of new drugs brought to market by the global biotechnology and pharmaceutical industries per billion US dollars of R&D spending, has declined fairly steadily.” The authors call this “Eroom’s law” (Moore’s Law reversed).
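To get a feel for the scale of the trend, Scannell et al. report that inflation-adjusted drugs approved per billion US dollars of R&D spending roughly halved every nine years. The sketch below (illustrative only; the function name and the exact halving period are my own framing of the paper's figure, not the authors' data) shows what steady halving implies over 60 years:

```python
# Illustrative sketch of "Eroom's law": Scannell et al. report that new drugs
# approved per billion (inflation-adjusted) US dollars of R&D roughly halved
# every nine years. This is not the paper's dataset, just the implied arithmetic.

HALVING_PERIOD_YEARS = 9  # approximate figure reported by Scannell et al.

def relative_efficiency(years_elapsed: float) -> float:
    """Drugs per R&D dollar relative to the starting year, under steady halving."""
    return 0.5 ** (years_elapsed / HALVING_PERIOD_YEARS)

# After one halving period, efficiency is half its starting value.
print(f"{relative_efficiency(9):.3f}")   # 0.500

# Over the full 60-year span the paper covers, efficiency falls to
# roughly 1% of its starting value — about a hundredfold decline.
print(f"{relative_efficiency(60):.3f}")  # 0.010
```

The point of the arithmetic is simply that a modest-sounding halving period, compounded over decades, produces the dramatic roughly hundredfold decline the authors describe.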
The decline has occurred despite major improvements in efficiency on many fronts, from better understanding of biology to more efficient methods for screening large numbers of potential drugs. The authors are skeptical that there is any easy fix, noting that many potential fixes have been explored. They believe the magnitude and consistency of the decline in productivity “indicates that powerful forces have outweighed scientific, technical and managerial improvements over the past 60 years, and/or that some of the improvements have been less ‘improving’ than commonly thought.”
One of the major explanations the authors offer is the “better-than-the-Beatles problem”: each potential new drug has to compete with the best drugs developed to date in order to justify its development. It has to compete in clinical trials (making the trials challenging and expensive), and it has to compete for patients (making it hard to recoup development costs). The authors list some classes of drugs that “could have been blockbusters” 15 years ago, but today are not worth the costs and risks of development because there are existing drugs that are probably nearly as good.
The authors also hypothesize that drug development has transitioned to a fundamentally different approach, and that this approach—while superficially seeming clearly superior—may actually be inferior. In the past, drug development consisted largely of testing a relatively small number of potential drugs in animals (and humans), and observing results via trial and error. Today, there are more attempts to logically segment the process: for example, it is common to first identify a biological “target” via academic research, then look for compounds that do an outstanding job of binding to the target in a lab environment, and only then to move on to animal/human trials. The authors believe that the old process may in fact have been more efficient (their arguments are somewhat complex and I do not summarize them here). It’s worth noting that if true, this hypothesis calls for a different approach to drug development, but does not necessarily call for “translational science” as defined above.
Many of the other explanations offered by the authors have to do with increasingly cautious regulation, which is likely responsible for longer, more expensive, more challenging clinical trials. From my limited readings on the history of biomedical research, it seems to me that getting drugs tested and approved used to be much easier than it is today, and that many key experiments were highly speculative and dangerous; such experiments would have been much more difficult to carry out with today’s regulation and social norms.
If the authors were right, it wouldn’t necessarily mean translational science isn’t valuable. It does seem true that academic biology has gotten far more complex, and translational science may be crucial in taking advantage of improved basic science and thereby improving pharmaceutical productivity. But I believe it is far from clear that translational challenges are the source of the productivity decline we’ve seen.