Innovation fuels the in vitro diagnostics (IVD)6 industry. The introduction of new technologies has greatly changed the face of the clinical laboratory, and new technologies will continue to shape the industry and the laboratory in the years ahead. Some advances have been evolutionary, building on established core technologies; others have been game changers. For diagnostics industry research, business-development, and product-development professionals, the question of which technologies will become the products of the future, and which will not, is a critical one.
The search for the answer starts with a clear understanding of the customer and the clinical need, but the real question is more often not the needs of today but the true unmet needs of the clinical laboratory in the years ahead. What problems do laboratories work through today because no obvious technology solution exists? What new diagnostic information would improve the accuracy or speed of diagnosis? What new biomarker will provide a look to the future and facilitate clinical intervention and disease prevention?
Let's take a trip back in time before the PCR, or even further back, before automated immunoassay systems. Innovation was planting the seeds for these important advances. Scientists and technologists were developing technologies to reach lower levels of clinical detection. Computer scientists and engineers were exploring how to tame the power of the microchip, fluidics, miniaturization, solid-state substrates, and informatics to build integrated clinical systems. While these changes were occurring, molecular biologists were exploring how to use the replication capability of DNA to create a new diagnostic field that has produced major advances in infectious disease testing, cancer diagnostics, and the detection and characterization of anomalies in our genetic code that are associated with disease.
If you are an industry strategist, business-development manager, or investor, what technologies and breakthroughs occurring today would you acquire or develop? That is not an easy question to answer. There are the obvious questions, but there are many more. How do these technologies align with current business models? Are global regulations a hurdle to commercialization? Is the technology ready for prime time?
There are a multitude of exciting technologies. A select number become valuable products, but for many reasons others do not. We have asked a panel of experts, each with a different viewpoint, to provide their insight on the process of selecting opportunities for investment and development.
How do you establish the potential value of a new diagnostic or life science technology?
Avi Kulkarni: For innovative diagnostics, the starting point in the value discussion is an “unmet medical need” addressed by a diagnostic that provides “actionable information.” For follow-on diagnostics and for new technologies, the points of value become more varied. Value for these is established by addressing one or all of the following criteria:
Lower price (i.e., no change in performance, just a lower cost to generate the same information);
Shorter total assay turnaround time, better form factor, simpler usage protocols, and easier integration into existing medical practice;
Transportability, portability, and reliability of instruments and the results generated (i.e., minimizing the need for expert intervention).
There are lesser factors, such as paper printouts, integration with electronic medical records, and the complexity of sourcing reagents, but the factors above are generally accepted as the critical ones.
Once value is generated, establishing it in a transparent, unbiased manner for examination by the external community requires independent analysis. It is best if the results of this economic (and sometimes health-economic) analysis are published in a peer-reviewed journal, since publication provides compelling evidence that a community of experts agrees that the diagnostic test or new technology has clearly established value.
Aaron Geist: When we try to determine the potential value of a new technology, whether it is targeted toward the clinical diagnostics market or toward the research market, we try to characterize the total market opportunity over a specific period of time. Because the pace of technological change in the life sciences is often dramatically different from that in the clinical diagnostics market, developing an understanding of the specific technology fit, as well as how and where it will be used, is critical. In addition to a detailed analysis of the costs and benefits of the new technology over what is currently in use, other factors that often affect the ultimate value of a technology include the strength of any associated intellectual property and the likelihood that competing technologies will be introduced.
From a practical perspective, we begin by initially mapping the total addressable market, which involves characterizing technologies that are in use today. We look at current adoption on a geographic basis, as well as forecast future growth and potential adoption of the new marker. For the clinical diagnostics market, we look first and foremost at the clinical utility of the new technology. Assuming the clinical value of the technology is sound, we evaluate the barriers the technology will experience once launched. Capital requirements, cost of implementation, the labor required, turnaround time, and potential manufacturing challenges are important considerations that will ultimately impact adoption and, in turn, value creation.
Extending the life of an older technology is often an additional way to create incremental value for a new or existing biomarker. We often look at repurposing existing technologies in new markets. Many of the diagnostics markers that are currently in use in developed markets have been fully adopted and are no longer generating positive growth. However, the introduction of these markers in emerging markets generates robust growth as new countries begin to implement new or expanded testing.
On the life sciences side, when deciding whether to commercialize a new technology, we look at other factors that usually affect value creation. New tools and technologies require tremendous investments in marketing to introduce the product to our customers. These costs, in addition to both manufacturing and capital costs, often render new technologies financially unviable.
Quintin Lai: We think of tests as valued and valuable. Being of value needs to be viewed through the eyes of stakeholders: clinicians, physicians, patients, and payers. Valued tests are appropriately sensitive, specific, reliable, reproducible, and cost-effective for clinicians. They provide insightful and actionable results for physicians. They provide better outcomes and clarity for patients. And they do all of this at a cost that payers are willing to reimburse.
“Valuable” comes from a manufacturer point of view. Valuable tests generate revenues, have potential for growth (market size opportunity), and are profitable to make and sell. The test needs to be able to command a price that will generate adoption and continued use while covering the costs of goods and operating expenses. Here is where barriers to entry, such as intellectual property, competition, and alternative tests, affect value.
What filters or selection criteria do you use to screen new technologies?
Emery Stephans: Two criteria. First is the intrinsic clinical value of the technology. Second is the likelihood of adequate reimbursement. The first criterion is relatively straightforward. I agree that the second one is tricky and depends greatly on the choice of the target national health market and precedent technologies.
Avi Kulkarni: New technologies face multiple screens. The hardest is the first, which Emery Stephans refers to as “intrinsic clinical value” and which I think of more as unmet medical value and actionable information. The second screen is “relative value” compared with existing technologies and includes the factors I mentioned earlier. The last 2 screens are “reimbursement and payer dynamics” and “socio-political environment.” The last one is often overlooked, but it is important to remember that several products that would not necessarily have passed the first 3 screens made it to market because of the relative importance placed on them at that point in time. These include prion diagnostics in the mid-90s and, more recently, rapid assays for severe acute respiratory syndrome and the anthrax bacillus.
Aaron Geist: We place a heavy emphasis on intellectual property. We ask ourselves questions such as: How proprietary is the product, and how defensible is its position? How differentiated is the new technology? Are the products currently on the market sufficient for marketplace needs, and will the new technology offer only a modest improvement? How challenging is the path to market, and what will be the development, clinical, and regulatory hurdles? Is the new product likely to create synergies with our existing products?
We are usually willing to accept a higher level of risk if the new technology is highly synergistic with our existing products or fills a yet unmet need.
Quintin Lai: A new technology needs to provide value to all stakeholders (patient, physician, clinician, and payer) and be valuable to the manufacturer (revenue and profit potential). In the case of a new diagnostic, we look for buy-in from key opinion leaders, as they often will be the early adopters. For new technologies trying to supplant older test modalities, we look for a dimension (performance, cost, ease of use) that would create enough value to offset the cost and time to switch.
How do you estimate the potential value of a new biomarker?
Emery Stephans: We like to estimate the intrinsic value of a new biomarker. Intrinsic value is, of course, defined in the context of a healthcare environment. As an illustration of the concept, consider an actual case. A few years ago, a diagnostics company was offered a license on a new cardiac marker. The company asked us to determine the value of the biomarker in the clinical setting, in this case the emergency department (ED), before it made a decision. We conducted a retrospective clinical-outcome study with historical patient data from a respected integrated healthcare network (IHN). We retained the head of cardiology as the principal investigator for the study and worked with this individual to define the flow of care that a patient presenting at the ED with a chest pain complaint would encounter. The flow chart contained several dozen “boxes,” each representing a diagnostic or therapeutic point of care. Each box contained the time the patient spent at that station, the cost incurred, and the probability of each outcome (confirmed diagnosis, true or false positives, true or false negatives, decision to conduct further tests). Once the flow chart was defined, we ran several thousand retrospective cases through it to obtain statistically meaningful results. That gave us a mathematical model. We then “installed” the hypothetical cardiac marker at several likely points of intervention and measured its impact at the bottom end of the model. This gave us cost outcomes and medical outcomes in which the impact of the new biomarker could be seen. As the last step, we varied the diagnostic sensitivity and specificity of the biomarker in 5% increments from 100% down to 60% to find the inflection point at which its impact on the ED process rapidly diminished. From these data the intrinsic value of the biomarker, in economic and medical terms, became visible. Also visible was the lowest tolerable level of performance at which the biomarker still made sense.
As it turned out, the company took these performance levels back to the licensor and offered to take a license if the test could meet those minimums. The test's performance fell below them, and the company declined the license.
The intrinsic value we derived was not a price. It did, however, represent a measure of the cardiac biomarker's value to that integrated healthcare network and, by generalization, a rough measure of its value to the US healthcare system. If the intrinsic value were low, there would be no point in investing in the biomarker's development and validation. If it were high, the case would appear to have merit.
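The sensitivity/specificity sweep at the heart of this kind of retrospective model can be sketched as a small simulation. The sketch below is purely illustrative, not the actual ED study: the prevalence, cost figures, and single decision point are invented assumptions, whereas the real flow chart contained several dozen diagnostic and therapeutic stations.

```python
import random

# Toy retrospective-outcome model: push simulated chest-pain presentations
# through a one-box version of the ED flow and total the cost per patient.
# All constants below are illustrative assumptions, not study data.

PREVALENCE = 0.15   # fraction of presenting patients with the condition
COST_WORKUP = 400   # cost of the downstream workup triggered by a positive, $
COST_MISSED = 5000  # cost attributed to a false negative (missed diagnosis), $
COST_TEST = 50      # cost of the hypothetical cardiac marker test, $

def run_cohort(sensitivity, specificity, n=10_000, seed=1):
    """Run n retrospective cases through the flow; return mean cost per patient."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        diseased = rng.random() < PREVALENCE
        positive = rng.random() < (sensitivity if diseased else 1 - specificity)
        total += COST_TEST
        if positive:
            total += COST_WORKUP   # true and false positives both go to workup
        elif diseased:
            total += COST_MISSED   # false negative: the costly missed diagnosis
    return total / n

# Vary performance in 5% increments from 100% down to 60%, as in the study,
# to see where the marker's economic impact rapidly diminishes.
for perf in [1.00, 0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60]:
    print(f"sens = spec = {perf:.2f}: mean cost/patient = ${run_cohort(perf, perf):.2f}")
```

In a fuller version of the model, each box in the flow chart would carry its own time, cost, and outcome probabilities, and the "inflection point" would be read off the resulting cost curve.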
Aaron Geist: Generally speaking, we avoid investing in unproven or early-stage biomarkers of our own, opting instead to sponsor academic researchers who have discovered promising markers.
In evaluating the potential value of a biomarker, we look at its clinical indication, the tests or markers that are currently in use, and the potential improvement in outcomes and healthcare savings. These data points are used to risk-adjust our financial model and thereby determine the potential value of the biomarker. The majority of biomarkers identified over the past decade have yet to be successfully implemented into routine clinical care; until we are better able to ascertain which biomarkers will succeed, we can often place only modest value on a new and unproven one. As with all of our acquisitions, licenses, and internal development opportunities, we develop a financial model that outlines the life cycle of the project, market, or opportunity. This model projects a time horizon for approval, robust implementation, and subsequent maturation. The model is then risk adjusted and discounted to ascertain the present value of the opportunity. Although we use a multitude of valuation techniques as a guide, the most significant influence on value is our risk assessment.
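A risk-adjusted, discounted life-cycle valuation of this general kind can be sketched in a few lines. All inputs below (cash flows, success probability, discount rate) are invented for illustration, and applying a single success probability uniformly to every year is a deliberate simplification; a real model would stage the risk across development, approval, and adoption milestones.

```python
# Minimal sketch of a risk-adjusted discounted-cash-flow (NPV) valuation.
# Inputs are illustrative assumptions, not an actual biomarker model.

def risk_adjusted_npv(cash_flows, p_success, discount_rate):
    """Present value of a projected life cycle, with every year's cash flow
    weighted by a single overall probability of success (a simplification)."""
    return sum(
        p_success * cf / (1 + discount_rate) ** year
        for year, cf in enumerate(cash_flows, start=1)
    )

# Projected annual cash flows ($M): development outlays, then approval,
# robust implementation, and maturation of the test.
life_cycle = [-10, -15, -5, 8, 20, 30, 30, 25]

base = risk_adjusted_npv(life_cycle, p_success=1.0, discount_rate=0.10)
adj = risk_adjusted_npv(life_cycle, p_success=0.3, discount_rate=0.10)
print(f"unadjusted NPV: ${base:.1f}M, risk-adjusted NPV: ${adj:.1f}M")
```

The gap between the two figures makes Geist's point concrete: the risk assessment, far more than the choice of valuation technique, drives the value placed on an unproven biomarker.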
Quintin Lai: We start from a cautious, conservative estimate when evaluating a new biomarker. It seems that every month brings several published articles describing a new biomarker with fantastic potential. However, like athletes moving from high school to the pros, only a few of these markers become commercial blockbusters.
When focusing on commercial viability, we look for a body of peer-reviewed articles evaluating the biomarker. We seek out opinions from key opinion leaders, who tend to also become the early adopters. We then try to determine if the biomarker is likely to provide decision-making insight to healthcare providers in managing medical conditions.
Most biomarkers take an average of 10+ years after market launch to become viable from a financial standpoint. What can be done to shorten this? How do you estimate adoption rates?
Avi Kulkarni: Understanding the reasons for the slow ramp in biomarker adoption may provide some insight into improving the adoption rate. First, a large number of biomarkers, the majority in fact, reach us through the public domain many years after they were first discovered. The scientific process grinds on after initial discovery, and it can take as much as a decade for a consensus view to emerge about a biomarker's likely value. And this is just the beginning of the road.
Assume that a sponsor organization seizes the moment and builds a commercially viable assay at the point when the biomarker has jumped from being one among thousands to one that may have clinical relevance. Even then, proving out the clinical-decision process in which the biomarker can be used is challenging. Historically, organizations have developed IVD tests or laboratory-developed tests (LDTs) around biomarkers without extensive clinical studies of their benefits. Prospective clinical studies are even rarer, so scientific organizations, key opinion leaders, and practicing clinicians lack specific guidelines into which the biomarker can be incorporated. Instead, they have an often confusing classification system with “recommendations for use,” which means clinicians must build their own comfort with the biomarker: what is the usefulness of the information, and what ameliorative action should be taken? The upshot is a system in which biomarkers end up being adopted into guidelines and clinical practice several years after they reach the market.
How to fix this? One method might be to change the system to grant market exclusivity for a period of time to the sponsor who developed the assay, along the lines of orphan drug market exclusivity. Having a period of guaranteed economic return would incentivize firms to absorb the costs of the prospective clinical studies, often in the $100 million range, to build the most compelling case for the biomarker.
Another solution would be for the scientific bodies to be less risk averse and more direct in their recommendations. In the current classification scheme, only a class IA recommendation appears to give physicians sufficient incentive to adopt biomarkers into their practice.
Completely independent of the above analysis, the net present value (NPV) on assay development for new biomarkers would be improved significantly if the Centers for Medicare and Medicaid Services (CMS) and private payers had a faster process for granting new reimbursement codes based on health-economic value. The current process forces new assays to either be cross-walked to lower-priced codes or be on hold for as much as 2–3 years while the CMS is being petitioned for a new CPT (Current Procedural Terminology) code. Most commercial laboratories are unwilling to aggressively promote the test while there is uncertainty about the reimbursed price, contributing to additional delays in overall adoption.
Aaron Geist: Biomarkers take years to develop because historically they have lacked critical financial, commercial, and regulatory support. Until recently, most biomarkers were discovered at academic and not-for-profit research institutions. These centers would offer the markers for license to anyone interested in continuing the enormous task of validation. Because large diagnostics companies typically need supporting clinical data, they would hesitate to embark on the process. New startup companies, funded by angel or early-stage venture investors, were created to attempt to obtain the critical validation data. Unfortunately, they would often run out of money early in the process. The biomarker would languish until another company or investor group picked it up and pushed it forward. Once the biomarker made it through the “valley of death,” it would be picked up by a diagnostics company and would receive the focus and attention necessary to ensure its success.
I believe over the past few years much has changed. There has been an explosion in the number of new biomarkers. Pharmaceutical and biotechnology companies that once avoided pairing a therapeutic and a diagnostic for fear it could limit market opportunity now embrace companion diagnostics. Not only are companion diagnostics and personalized medicine often seen as a path for more successful therapeutic clinical trials, they can potentially aid in regulatory approval, adoption, and ongoing postapproval surveillance. With researchers and pharmaceutical, biotechnology, diagnostics, and regulatory organizations all working together in partnership, we can be optimistic that the time frame for biomarkers from discovery to use will be reduced in the future.
Quintin Lai: I am curious about when the stopwatch started for the 10-year comment. If it is from first discovery, then it is understandable that a body of peer-reviewed evidence would be required before adoption occurred.
I think the amount of education required by stakeholders before adoption is inversely correlated with the clarity in action provided by the test. For infectious disease, a test that returns a yes-or-no answer provides a clear action, so the amount of education required is low. In contrast, a test that provides a more complicated result, such as a risk score for recurrence, needs a high level of education before getting a buy-in from stakeholders.
Biomarkers are difficult and expensive to find, develop, and validate. Is there any way to reduce cost?
Emery Stephans: Large numbers of biomarkers are identified in the drug discovery and development process. Only a few ultimately play a role in the commercialization phase of the drug. In my favored scenario, the pharmaceutical companies would put the noncritical biomarkers into the public domain by handing the data and knowledge to the US Food and Drug Administration. Diagnostics companies could pick from this collection of more-or-less completed biomarkers and make them commercially available. Time and expense would be saved.
Avi Kulkarni: Biomarkers are not difficult to find. They are difficult to develop and validate. The technology explosion that came with the genomics revolution of the 1990s has in fact produced too many biomarkers. The questions are how to take the best ones forward, how to develop assays for them quickly, how to establish clinical relevance for them, and how to get them validated in isolation or as parts of signature-based algorithms.
The solutions lie in pooling data so that discovery-level algorithms can identify the best candidates among the many biomarkers of interest; in NIH-funded biobanks for establishing clinical relevance; and in regulatory acceptance of observational studies, even those with uncertain preanalytical treatment of banked samples, for granting clinical claims, with all open questions folded into open-label registry studies.
Aaron Geist: I would argue that biomarkers are not difficult and expensive to find, but rather are extraordinarily difficult to validate. With the explosion in knowledge about normal biological function and development of disease, studies designed to validate biomarkers should be more successful. The introduction of many new technologies over the past few years has enabled researchers to obtain an unprecedented cache of valuable biological, physiological, and metabolic information. As new tools are introduced to turn this information into knowledge, in silico studies become possible. Through harnessing our computational capabilities with our biological knowledge, the time frame for biomarker validation can hopefully be compressed. Through reducing the time required for validation and being better able to predict which biomarkers will succeed, researchers and investors alike will be able to reduce the development costs associated with bringing a biomarker to market.
What technology development are you watching?
Emery Stephans: Nanoscale sequencing. Although I also watch technologies that might deliver a $1000 genome, the promise of the lowest-cost personal sequencing is, in my view, held by the nanoscale method. Under-$1000 genomes may become important quickly; technically, they would eliminate the need for the $3000–$5000 CLIA laboratory tests currently on the market (e.g., from Genomic Health and Myriad Genetics).
Avi Kulkarni: There are 3 technologies that I am following because I consider them game changing. Moving from the nearest to the farthest away, they are:
Restriction map analysis. This technology lowers the cost of genome-wide analysis and can be used to prepare risk profiles across multiple implicated genes that can be used to segment diseases at a genomic level.
Digital antibodies. This technology allows whole-proteome mapping at a fraction of the cost, and specific disease-state algorithms can be overlaid onto the data to inform on the multiple diseases present in a single patient.
Genomic and proteomic lab-on-a-chip. Admittedly, this technology is still a very long time away, but in the end this is going to be an engineering problem and not a biology or chemistry issue. We are proving every day that we understand the biochemistry, and it is only a matter of time before all the known biomarkers can be assembled on a microarray for a 1-stop shop for tests.
Aaron Geist: I have been watching and continue to focus on the impact of new next-generation sequencing technologies. Sequencing technologies have already had a profound impact on life sciences and biomedical research and will likely continue to revolutionize the industry for many years to come. Characterizing individuals within clinical trials to the level of single–base pair mutations is now commonplace. Increasingly, insurance providers are requiring substantial individual genetic information to qualify patients for newer, more expensive treatments. Although predicting which game-changing technologies will take hold is difficult, the next revolution will potentially come to those who are able to turn the ever-growing mountain of genetic information into valuable knowledge. The ability to interpret, manage, and mine the information is the bottleneck. One could argue that genetic information is merely a series of base pairs and that knowledge is created through understanding the myriad genetic changes. As a well-known aphorism goes, knowledge is power. Those who are able to rapidly convert the current deluge of genomic information into knowledge will surely lead advances in medical knowledge and treatment.
Quintin Lai: Sequencing technology is making tremendous strides, and we are watching the cost and turnaround times drop almost every few months. While clinical utility for whole-genome sequencing is still off in the future, the technological advances are accelerating biomarker discovery.
Manufacturers are developing faster, more cost-efficient high-throughput sequencing machines and are beginning to introduce desktop low-cost sequencing machines. We are also watching the rise of service companies that specialize in whole human genome sequencing to facilitate large-scale population studies and, potentially, enable extension into clinical applications.
Sample-to-result genetic tests continue to make strides in opening new market opportunities. Recently, tests for multidrug-resistant tuberculosis have begun penetrating developing countries, as technological advances have expanded molecular diagnostics from pristine clean rooms to mobile, remote laboratories.
What will be the impact of the comparative-effectiveness paradigm on new clinical diagnostics technologies?
Emery Stephans: First, I believe there will be no harm. The UK has established the National Institute for Health and Clinical Excellence (NICE) to offer technology guidance on clinical effectiveness and cost-effectiveness for “new treatments with potential significant impact on National Health Service (NHS), or policy priorities (cancer, heart disease, and stroke)” (NICE Evaluation of Diagnostic Technologies, September 2010). NICE seems to have succeeded in striking a balance between clinical effectiveness and cost-effectiveness. NICE's work has not damaged the ability of the NHS to attract and introduce technologies. If the US “comparative effectiveness” program accurately copies NICE's methodology, I expect no harm to come to the prospects for new US technologies either.
Second, there might be a reimbursement benefit. If comparative effectiveness has been demonstrated for a new diagnostic technology, presumably there will be less of a struggle to secure reimbursement.
Avi Kulkarni: As users of clinical diagnostics, surely we will all benefit. Legislated comparative-effectiveness programs are after all forcing the question that all of us have at the back of our minds: How does this compare to other tests? Is it better than comparables, and if so, how?
But as a supplier, I think it depends on where you sit. If you are a clinical laboratory developing LDTs, comparative effectiveness is a scary thing, as it raises the stakes for your test. Most clinical laboratories develop assays for $3–5 million, and the idea of having to add on comparative-effectiveness programs is a deal killer. For IVD manufacturers, the costs of comparative effectiveness in terms of real costs as well as the potential delays in registration, adoption, and reimbursement can be an issue, but if the other side of the ledger has an explicit higher price, then it's worth it. Among IVD manufacturers, there will also be groups of less innovative players who will wonder how their existing strategies will work in a world of comparative effectiveness and a group of more innovative players who will worry that the cost of their innovation will be further increased because they will have to show comparative effectiveness. Lastly, there is the scale issue. Each layer of complexity that is added to the medical system increases the minimum efficient scale, making this more of a game for larger companies.
Aaron Geist: The introduction of clinical-effectiveness programs, first in the UK and now in the US, will likely have many effects. Most importantly, these programs will improve the quality and accuracy of clinical diagnostics tests. On the micro level, individual laboratories will need to demonstrate accuracy and reproducibility. On the macro level, the impact will likely be to accelerate the adoption and utilization of clinical diagnostics technologies.
Time and again, studies have shown that the utilization of clinical diagnostics offers dramatic improvements in the quality of healthcare while reducing its total cost. Although the regulatory requirement to demonstrate clinical utility might increase up-front development costs, it would also increase the speed with which these tests are implemented. Developing markets around the world are implementing advanced clinical diagnostics at breakneck speed. Sectors of China's clinical diagnostics market have continued to grow at annual rates of 30%–40% as local governments have come to realize that clinical diagnostics may offer the greatest return on healthcare investments.
6 Nonstandard abbreviations: IVD, in vitro diagnostics; ED, emergency department; LDT, laboratory-developed test; NPV, net present value; CMS, Centers for Medicare and Medicaid Services; CPT, Current Procedural Terminology; NICE, National Institute for Health and Clinical Excellence; NHS, National Health Service.
Author Contributions: All authors confirmed they have contributed to the intellectual content of this paper and have met the following 3 requirements: (a) significant contributions to the conception and design, acquisition of data, or analysis and interpretation of data; (b) drafting or revising the article for intellectual content; and (c) final approval of the published article.
Authors' Disclosures or Potential Conflicts of Interest: Upon manuscript submission, all authors completed the Disclosures of Potential Conflict of Interest form. Potential conflicts of interest:
Employment or Leadership: S. Evans, Beckman Coulter; E. Stephans, Enterprise Analysis Corporation; A. Kulkarni, Booz & Company.
Consultant or Advisory Role: E. Stephans, most major IVD companies and many pharmaceutical companies; A. Kulkarni, Sevident, Inc.
Stock Ownership: S. Evans, Beckman Coulter; A. Kulkarni, XDx, Inc.; A. Geist, PerkinElmer.
Honoraria: None declared.
Research Funding: None declared.
Expert Testimony: None declared.
- Received for publication February 2, 2011.
- Accepted for publication February 16, 2011.
- © 2011 The American Association for Clinical Chemistry