In 2004, the International Committee of Medical Journal Editors (ICMJE)10 issued guidelines requiring registration of all clinical trials that initiated recruitment after July 1, 2005. Clinical trials were to be registered in one of the primary registries in the WHO Registry Network (16 registries as of April 2014) or in ClinicalTrials.gov. Key information about each study, including eligibility criteria and primary outcome measures, is recorded in the registry before the trial starts recruiting participants.
The impetus for this action was to improve the ethical standards in the conduct and reporting of research and to assist the biomedical publishing community in the production and distribution of accurate and unbiased articles. Although the ICMJE has targeted only clinical trials, publication bias and selective reporting are recognized problems throughout the entire biomedical literature. Positive findings are more likely to be reported than negative ones. Some authors of studies with negative findings perform multiple comparisons and numerous subgroup analyses not defined in the study protocol to generate positive results, while failing to report the negative observations. Reported outcomes and primary hypotheses sometimes differ from those defined in the study protocol.
Authors and sponsors, rather than journal editors, appear to be primarily responsible for this reporting bias, a practice that distorts evidence-based medicine and diminishes the value of systematic reviews and metaanalyses. Selective and incomplete reporting and the failure to report negative findings lead to unnecessary duplication of effort. Such practices corrupt the validity of literature-based estimates of the risk–benefit ratio of interventions and of the performance of medical tests and, ultimately, can erode public trust in the biomedical research community.
On the basis of these considerations, a movement to expand registration beyond clinical trials to other studies involving human participants has started and is gaining momentum. In this Q&A, the issue of registering diagnostic and prognostic studies is debated among representatives of the major stakeholders: a researcher who examines the clinical value of biomarkers, a scientist from the in vitro diagnostics industry, a scientist from a major federal funding agency, a journal editor, and a director of a trial registry.
Why should diagnostic and prognostic studies be registered before they start? Is registration of such studies currently a legal requirement?
John P.A. Ioannidis: Registration would be meaningful if it precedes the onset of a study. Otherwise, there is the risk that one may run thousands of studies and analyses, select a few on the basis of their results, and register them post hoc. This would give a false sense of prespecification while fostering selection bias at its worst. A major challenge for many diagnostic/prognostic investigations is to define what constitutes a “study.” It is relatively straightforward when new data collection, a prospective design, and a newly approved protocol are involved. But with retrospective analyses of already collected specimens and datasets, “studies” can be built around what seems interesting during data peeking. Registration of diagnostic/prognostic studies is lagging behind registration of clinical trials in terms of legal mandates. It also remains unclear who the enforcers of registration should be for diagnostic/prognostic studies; potential enforcers could include journals, funders, regulators, or other stakeholders.
Kurtis R. Bray: This question assumes that diagnostic and prognostic studies should be registered before they start, which in my opinion is true only for certain types of studies, specifically interventional studies.
Lisa M. McShane: I am not aware of blanket legal requirements to register diagnostic and prognostic studies. The main type of legal requirement applies to studies that are conducted prospectively and in which use of a diagnostic or prognostic test is judged to be investigational. The US Food and Drug Administration (FDA) may require an investigational device exemption (a diagnostic test may be considered a “device”) or a similar oversight mechanism to be in place for a study in which use of a test poses more than a nonsignificant risk to the study participants.
For retrospective studies that utilize existing data or use stored specimens to generate new data such as biomarker values, there is often no formal study protocol that outlines specific study aims, primary endpoints, statistical design considerations, or prespecified analysis plans. Such studies are prone to generation of false-positive findings resulting from excessive analysis of the data and are prone to publication bias. Study registration in which main aims, hypotheses, and prespecified analysis plans are clearly identified before study initiation would help to identify which published study results might have been generated by excessive data analyses. Furthermore, registration would help to identify the existence of unpublished studies with negative results. Such studies, if they are appropriately designed, conducted, and analyzed, still provide important information about the worth, or lack of worth, of a diagnostic or prognostic test.
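As a minimal illustration of why unconstrained reanalysis of stored specimens invites false positives, the following Python sketch (an editorial example with hypothetical numbers, not part of the original discussion) tests 100 candidate markers that have no true association with outcome and counts how many nonetheless cross p < 0.05 by chance.

```python
# Hedged sketch: simulate a purely null retrospective biomarker analysis.
# All numbers (200 patients, 100 candidate markers) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_markers = 200, 100

markers = rng.normal(size=(n_patients, n_markers))  # stored-specimen marker values
outcome = rng.integers(0, 2, size=n_patients)       # case/control labels, unrelated to markers

# Compare every marker between cases and controls, as an unconstrained analysis might.
p_values = [
    stats.ttest_ind(markers[outcome == 1, j], markers[outcome == 0, j]).pvalue
    for j in range(n_markers)
]

n_hits = sum(p < 0.05 for p in p_values)
print(f"'Significant' markers at p < 0.05: {n_hits} of {n_markers}")
# Roughly 5 of 100 null markers cross p < 0.05 by chance alone; reporting only those,
# without a registered, prespecified analysis plan, inflates the published literature.
```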
Robert M. Golub: Strong arguments can be made in favor of prospective registration of observational studies, arguments that parallel those for randomized clinical trials. [Observational studies include diagnostic test (cross-sectional) designs, as well as cohort and case-control studies, among others.] These include providing a database of studies in progress, documenting at least the key aspects of study design at inception, and recording the scope of the question that will be investigated. This type of database can inform other researchers of ongoing work. It can also serve as a benchmark to help assess whether there might be completed studies that remain unpublished; this in turn may be of service to those performing systematic reviews of the literature, but may also discourage suppression of findings. Having documentation of planned study design and research scope can help editors, reviewers, and readers understand whether a study and its reporting maintain fidelity to the original plan and identify discrepancies that need to be resolved. I am not aware of legal requirements for registration.
Lotty Hooft: Registration in ClinicalTrials.gov is required for trials of devices with health outcomes subject to FDA regulation, which can include new screening or diagnostic techniques. Although registration of the majority of diagnostic and prognostic studies is currently not a legal requirement, many registries already hold records for such studies. Registration of diagnostic and prognostic studies facilitates transparency, and the need for complete reporting is no different from that for clinical trials. In both research fields it seems that the published evidence does not offer a fair representation of the existing evidence. Korevaar and colleagues recently demonstrated in Clinical Chemistry (2014;60:651–659) that failure to publish and selective reporting are also prevalent in a cohort of diagnostic studies registered in ClinicalTrials.gov. Preferably, depositing information in a registry should occur early, before enrollment of the first patient or the start of prospective data collection, because it is essential that investigators cannot later revise what they stated they intended to do. However, that timing requirement might be difficult to enforce for diagnostic and prognostic studies, as these fields often make use of preexisting datasets.
Are there examples that show that an overly optimistic assessment of laboratory test performance is costly to the healthcare system or harms patients?
John P.A. Ioannidis: There are many diagnostic/prognostic tests and practices that have been proposed for screening or for monitoring and that have undergone “medical reversal” with newer evidence or reappraisal of the totality of the evidence. Inflated diagnostic expectations, overdiagnosis, false positives, and unnecessary additional diagnostic procedures or therapeutic interventions are common problems. Some examples include screening with prostate-specific antigen for prostate cancer, CA-125 and several other tumor markers for ovarian cancer, C-reactive protein and many other inflammatory markers for prediction of coronary artery disease, screening with urinalysis or urine biomarkers for bladder cancer, mass screening of infants or children for neuroblastoma, monitoring with pulmonary artery catheters in high-risk surgery and acute lung injury, computer-aided detection methods for screening mammography, and routine screening with mammography for low-risk women. Most routinely used laboratory tests never undergo large-scale evaluation of their performance, let alone randomized trials of their clinical utility. I suspect that many, perhaps the majority, of currently used laboratory tests would fail randomized trials of utility.
Lisa M. McShane: There have been some highly publicized examples in recent years of diagnostic tests that were widely promoted for their ability to detect cancer or to optimally choose cancer therapy. These tests were later determined to be worthless after it was uncovered that flawed data or inappropriate statistical analysis methods were used as the basis for the claims of test performance. Costs to the healthcare system and harms to patients can be difficult to quantify. Negative consequences might include unnecessary medical procedures, choice of an ineffective therapy, a missed opportunity for a better therapy, delay in care resulting in a worse clinical outcome, or even emotional stress. These harms can lead directly or indirectly to higher cost to the healthcare system. We accept that virtually every medical test has a nonzero probability of delivering an incorrect result and we weigh that risk against the potential benefits of using the test. We should not accept an environment in which useless or harmful tests easily work their way into clinical use on the basis of flawed evidence supporting their performance.
Robert M. Golub: There are certainly many examples in medicine of laboratory tests that in early studies seemed to have excellent test characteristics but subsequently were determined to be far less accurate in the population in whom they would ultimately be used. This is commonly a result of initial testing of sensitivity in the “sickest of the sick” and specificity in the “wellest of the well,” which is a reasonable approach to determine whether a new test has any potential. In practical use, however, tests are applied to patients with earlier or less evident forms of disease, and to those with other less serious conditions that may also cause positive results, so that sensitivity and specificity may decrease to unacceptable levels. One example of this was the suggestion that carcinoembryonic antigen might be useful to screen for early-stage colon cancer. But probably more common is technology creep, the tendency for screening tests or forms of treatment to enter common practice with inadequate evidence of benefit, or without clarity about the types of patients in whom they may be of benefit. Current examples of controversy for screening include mammography in younger age groups, prostate-specific antigen, and computed tomography for coronary calcium. The use of MRI as a diagnostic test for most common types of acute low back pain is unlikely to be of value. The proper assessment of a new test, if it occurs at all, often tends to occur after the test has entered common use, making it harder to recruit patients for a study or to change physician behavior and patient expectations.
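A small worked example (an editorial sketch with hypothetical performance figures, not taken from the discussion) shows how predictive value collapses when a test evaluated in the “sickest of the sick” and the “wellest of the well” is applied to a low-prevalence screening population:

```python
# Hedged sketch: illustrative sensitivity/specificity/prevalence values only.
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(disease | positive test), computed via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Early evaluation: advanced cases vs. healthy controls, enriched to 50% prevalence.
print(f"Early study: PPV = {positive_predictive_value(0.95, 0.95, 0.50):.2f}")  # ~0.95

# Screening use: earlier-stage disease (lower sensitivity), benign conditions that
# also test positive (lower specificity), and 1% disease prevalence.
print(f"Screening:   PPV = {positive_predictive_value(0.80, 0.90, 0.01):.2f}")  # ~0.07
```

Under these assumed figures, more than 9 of every 10 positive screening results would be false positives.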
Lotty Hooft: In general, evidence skewed in the direction of “positive” study results raises the concern that clinical decisions are based on inflated estimates of laboratory test performance. Inadequate decisions may subject patients to adverse side effects from administration of treatments that they do not need. The wasted resources are worrisome at a time of continuously increasing healthcare costs.
What types of clinical studies should be registered? All studies that collect data or specimens in humans?
John P.A. Ioannidis: Besides human studies, there is also interest in the registration of preclinical research in animals, since it may determine which technologies or tests move to human experimentation. At the same time, I am trying to be realistic and acknowledge that it may be impossible to register all studies that collect data or specimens, for various reasons. If so, it is important to know what universe each study belongs to regarding its registration status. Unregistered studies (what I call level 0 registration) may still offer some interesting hints, but they should be reproduced independently with registered studies before any serious claim is made. For registered studies, there are also different levels, depending on whether just the presence of the study, full protocols, data sets, raw data, analysis plans and codes, and/or open live streaming of the ongoing research are registered.
Kurtis R. Bray: I agree with the current practice of the ClinicalTrials.gov registry that requires registration primarily for interventional human clinical trials of drugs and medical devices. And I believe that other registries should follow similar rules.
In vitro diagnostic (IVD) tests are regulated as medical devices and, as such, in the US fall under the purview of the Food and Drug Administration Modernization Act of 1997 and the Food and Drug Administration Amendments Act of 2007, which first established and then expanded the requirements for registration of certain types of clinical trials.
Much of the discussion and debate around the passage of these laws centered on the benefit of making clinical trial information available to physicians and patients who together are battling serious diseases: armed with knowledge of ongoing trials, they might choose to seek out and participate in applicable trials and possibly receive the benefit of early access to the latest experimental therapeutic interventions.
Thus, the laws and their subsequent application have appropriately focused on the registration of interventional clinical trials. Under these applicable laws as interpreted by NIH and the FDA and as stated on the Clinicaltrials.gov website, a device clinical trial must be registered if it meets each of the following four criteria: “(1) it is prospective clinical study of health outcomes; (2) it compares an intervention [emphasis added] with a device against a control in human subjects; (3) the studied device is subject to section 510(k), 515, or 520(m) of the Federal Food, Drug, and Cosmetic Act (FDC Act); and (4) it is other than a small clinical trial to determine the feasibility of a device, or a clinical trial to test prototype devices where the primary outcome measure relates to feasibility and not to health outcomes” (http://prsinfo.clinicaltrials.gov/ElaborationsOnDefinitions.pdf).
The key concept with respect to registering IVD trials is “intervention.” By their intention and design most IVD clinical trials are observational rather than interventional in nature. Indeed, most IVD clinical trial protocols require that subjects receive the same medical care they would have received had they not participated in the trial and that no medical intervention may be made to subjects on the basis of the investigational test trial results. Moreover, as a normal part of the IVD trial blinding process, treating physicians are blinded to the experimental test results, and the laboratorians performing the tests are also blinded to the subjects' medical conditions and diagnoses. These blinding and ethics requirements render impossible any medical interventions based on the investigational test trial results.
Under current regulation observational trials may be voluntarily registered if the sponsor so chooses, and some sponsors do so. But since by definition patients cannot normally improve or otherwise alter their medical care by participation in an observational IVD trial, it is appropriate that registration of these trials remain voluntary in accordance with the rules.
Lisa M. McShane: I believe that any clinical study that will generate results directly leading to a test or therapy that will have an impact on patient care should be registered. This would certainly include any prospective study in which a diagnostic or prognostic test is being used in an investigational manner in a study to evaluate its performance. I would go further and extend the registration requirement to studies that retrospectively use clinical samples collected and stored from a previously completed prospective cohort study or clinical trial. When they are well designed, such studies, which some refer to as “prospective-retrospective” studies, can generate a high level of evidence; therefore, the transparency afforded by study registration would be highly desirable.
Robert M. Golub: Ideally all clinical studies would be prospectively registered, but there are pragmatic limitations. For example, many different studies use data generated from a large database, such as NHANES (National Health and Nutrition Examination Survey) or patient registries. It would not be practical to register studies at the time these databases are established, because it is not clear at that time what questions will be addressed. Pilot studies and exploratory analyses may generate more formal studies. Findings of one study may provide insights that lead to additional analyses. Data structure may require modification of anticipated statistical approaches. For all of these reasons, the appropriate timing of registration, and the degree of formality of research that would potentially require registration, may be variable. So the relationship between the ideal and the practical is likely to be complex. Nevertheless, all studies can establish a protocol and anticipated statistical analysis plan before initiation; this would not only serve as a guide to conducting the research but would also serve as a record against which the final study could be compared.
Lotty Hooft: To me, the rational and ethical obligations to register studies apply to all studies in humans, and perhaps even to any study with stored human samples, although it might be too idealistic to expect that all studies in humans will be registered. Perhaps we can introduce a standard policy that any study that requires Institutional Review Board approval needs to be registered.
Can the existing trial registries be used? Will the appropriate information be obtained?
John P.A. Ioannidis: ClinicalTrials.gov already has records on >30 000 observational studies. Perhaps more sophisticated and tailored versions could be developed to capture information on diagnostic and prognostic studies more systematically in ClinicalTrials.gov and/or other existing registries. However, we also need resources for registering full data sets and analysis codes.
Kurtis R. Bray: Yes, the ClinicalTrials.gov registry can certainly be used. As I have indicated earlier, it currently serves as an appropriate registry for interventional IVD clinical trials and as a voluntary registry for observational trials. The appropriate information can be obtained to inform patients, physicians, and researchers of the basics of these trials and of where they are being conducted so that they may be aware and perhaps participate in the trials if appropriate.
Lisa M. McShane: For some fairly simple tests I believe that existing trial registries could be used to catalogue relevant studies, but this may prove insufficient over time. The increasing complexity of tests included in more recent diagnostic and prognostic studies, for example multiplex genomic tests, could strain existing trial registries. Simply capturing enough information about the test to adequately describe it, while maintaining the ability to effectively query the registry, could prove challenging for existing trial registries.
Robert M. Golub: Some observational studies are already registered on trial registration sites, and a registry for metaanalyses has been established, so existing registries might be able to accommodate other study designs. The key would be to establish the minimal data fields necessary to accomplish the goals of registration for each study design.
Lotty Hooft: Although the 20-item format for trial characteristics (the WHO Trial Registration Data Set) is not ideal, there is no need to modify these items. Diagnostic and prognostic studies can be added to existing trial registries to make optimal use of the already fully developed technical infrastructure. In addition, prognostic and diagnostic evaluation studies embedded in clinical trials would not become disconnected from the parent trial, because only one unique trial identifier would be assigned. However, guidance is needed on what information must be provided to make registration of diagnostic and prognostic studies scientifically informative. Because there is no formal policy yet, the quality of registered information is low and there is no uniformity between registries.
Are there drawbacks for registering diagnostic studies? If yes, what are they?
John P.A. Ioannidis: Potential drawbacks include a false sense of security that selective reporting bias is averted simply because a study is registered, and a normative response in which investigators register studies that look nice on paper while in reality they are messy post hoc efforts.
Kurtis R. Bray: Yes, there are definite business drawbacks to forced clinical trial registration for IVD manufacturers. At a strategic level, involuntary trial registration forces companies to reveal to competitors much of their strategic business plans and product pipelines.
At a more tactical level, registration of clinical trials requires manufacturers, in effect, to reveal valuable trade secrets and intellectual property. This is especially an issue when the registration requirements also include revealing the protocol or other details of the study design, as has been promoted and proposed. Often the design of an IVD trial itself, particularly in the case of complex prospective efficacy trials, is a key company asset that is developed at considerable cost and effort. Thus, some companies that have made this investment seek to protect the details of their trial designs as trade secrets in much the same way as they protect details of reagent formulation and instrument development and design.
The FDA has long published a Decision Summary and a Summary of Safety and Effectiveness for each laboratory test once it is approved or cleared. But the details of the trial design revealed in these documents are usually limited and would often require a degree of uncertain “reverse engineering” to copy successfully. In addition, these documents are made public only well after the trial is over. By contrast, forced registration of beginning or ongoing trials, especially if details of the protocol must be disclosed, essentially hands a detailed trial-design roadmap to all of a company's competitors.
Lisa M. McShane: From the perspective of advancing science and developing tests that benefit patients, I don't see major drawbacks for registering studies. There may be some logistical challenges and resource issues in developing and maintaining a registry, and some might have concerns about revealing diagnostics industry activities and priorities.
Robert M. Golub: From the perspective of a researcher, the public announcement of initiating a new study might result in a loss of a research or commercial competitive advantage. Another possible drawback could be the need to periodically update the registration; additional information may become available during the course of the study that would require modification in the study protocol, such as with sample size or outcomes.
Lotty Hooft: Some people are concerned that registration will delay innovation if results cannot be published because the analyses were not prespecified in a registry.
Is registering a study enough, or should the actual data be deposited?
John P.A. Ioannidis: In principle, I am in favor of also making the raw data available, along with the detailed protocol, analysis plan, analytical codes, and documentation of all decisions made in the conduct and analysis of the study. However, this would require extra cost and resources, e.g., to achieve appropriate deidentification, and doing so for every single diagnostic and prognostic study is a challenge. Messy data explorations should be acknowledged as such rather than masqueraded as prospectively registered experiments. At a minimum, for any proposed diagnostic or prognostic test that either gets endorsed and/or licensed for any clinical use (with or without supporting randomized evidence) or is considered for assessment of utility in a randomized clinical trial, I think raw data should be widely available and the results reproduced by independent analysts. This also means that messy data explorations would not suffice for a test to reach this stage.
Kurtis R. Bray: When and if required, registration of the study is sufficient. The same drawback of revealing intellectual property mentioned above applies to depositing data. In addition, study data are already fully open to inspection by regulatory agencies such as the FDA, whose inspectors are skilled and experienced in detecting and investigating protocol deviations as well as auditing data for quality and completeness and interpreting them for scientific validity and rigor. In my opinion, this degree of oversight by skilled inspectors is sufficient.
Depositing data publicly, particularly full, raw data sets, invites misuse. It enables anyone and everyone, including competitors, individuals untrained in statistics or science, antiscience crusaders, and proponents of pet or quack theories to endlessly reanalyze clinical trial results to suit their own ends.
Lisa M. McShane: Deposition of actual data is highly desirable. Data availability can facilitate evaluation of the appropriateness of the data analysis methods used and the robustness of reported results. The data are then also available for use in metaanalyses, which can provide additional convincing evidence for the worth of a test. It is important for deposited data to be accurate, complete, and well annotated.
Many issues will need to be addressed to significantly expand and enforce data deposition requirements. Data curation can require substantial resources in terms of electronic storage space, development of search engines, and development of data structures to handle data sets that are continually increasing in size and complexity. Attention must be paid to ethical and legal requirements for privacy and confidentiality, consent, and protection of proprietary interests. It is not clear who should be responsible for constructing and maintaining such data repositories.
Robert M. Golub: This is an extraordinarily complex question that is currently being contemplated in relation to randomized clinical trials. Issues include the site and locus of responsibility for such data repositories, participant and patient privacy requirements, whether there should be oversight in determining legitimate access to the data for secondary research, and challenges in making the data format usable for others. These issues need to be resolved.
Lotty Hooft: Trial registries can provide a comprehensive overview of planned and ongoing diagnostic and prognostic studies, thereby identifying knowledge gaps, facilitating collaboration in these research fields, preventing data dredging, and avoiding duplication of effort, time, and resources. In addition, depositing the actual “raw data” will enable individual patient data (IPD) metaanalyses, which are potentially more reliable and clinically more informative than metaanalyses based on aggregated data. To encourage the availability of IPD, funders of diagnostic and prognostic research may require that research data be made accessible.
What can be learned from the experience of registering clinical trials?
John P.A. Ioannidis: Registration of clinical trials has been a highly successful initiative. We have learned that for registration to be successful it should have the support of influential stakeholders, in this case practically all major journals. We have also learned that while registration of studies is helpful for understanding the potential depth of publication bias, it is not sufficient on its own to fully understand the depth of selective analysis and outcome reporting.
Kurtis R. Bray: The best lesson from current experience is that the key going forward will be to continue to use registries to register interventional trials for the public benefit without encouraging their misuse or burdening sponsors and scientists with onerous, non–value-added reporting requirements.
Lisa M. McShane: Clinical trial registries have made it much easier for clinicians and their patients to identify trials for which the patients are eligible to enroll or that might have generated results relevant to their clinical situation. Clinical researchers and study funders can use registries to scan the landscape of studies being conducted in a particular clinical area to identify gaps and avoid redundancy. Registry information can aid in assessment of adherence to prespecified study plans and evaluation of the quality of research results, as well as assist in identification of studies that have not been published. Experience with the ClinicalTrials.gov registry demonstrated that investigators may not utilize registries just because they are available. Incentives are needed, including requirements for study registration enforced by research funders, or journal requirements for registration before study initiation as a condition of eligibility for eventual publication of study results.
Robert M. Golub: Although the ICMJE established a policy requiring prospective trial registration that went into effect in 2005, some researchers are still unaware of or misunderstand the policy. Establishing requirements for other study designs would require a major educational effort. To be successful, such a policy would likely require endorsement by an influential group such as the ICMJE. However, one of the most important lessons is that clinical trial registries provide insufficient information for editors to fully evaluate study design and execution; that requires access to the trial protocol and statistical analysis plan. While studies of diagnostic tests and prognosis are generally going to be less complex, it may still be important to have formal protocols and statistical analysis plans to be able to assess validity.
Lotty Hooft: The potential benefits of clinical trial registration are undermined by the poor quality of registration. Unawareness of the importance of trial registration and the lack of advanced registration systems may have affected the quality of the registered information. In addition, many journals that endorse the clinical trial registration policy still have not provided sufficient information in their instructions to authors or incorporated the policy into the editorial and peer review process. Unregistered or retrospectively registered clinical trials are often considered for publication, thereby ignoring the current registration requirement. Before registration of diagnostic and prognostic studies becomes a requirement, guidelines should be developed to clarify many specific issues, such as the timing of registration and what is considered essential information. Not only researchers but also journal editors and peer reviewers need guidance in order to profit from the potential benefits of registration.
Footnotes
10 Nonstandard abbreviations: ICMJE, International Committee of Medical Journal Editors; FDA, US Food and Drug Administration; IVD, in vitro diagnostic; IPD, individual patient data.
Author Contributions: All authors confirmed they have contributed to the intellectual content of this paper and have met the following 3 requirements: (a) significant contributions to the conception and design, acquisition of data, or analysis and interpretation of data; (b) drafting or revising the article for intellectual content; and (c) final approval of the published article.
Authors' Disclosures or Potential Conflicts of Interest: Upon manuscript submission, all authors completed the author disclosure form. Disclosures and/or potential conflicts of interest:
Employment or Leadership: N. Rifai, Clinical Chemistry, AACC; K.R. Bray, Beckman Coulter Inc.; R.M. Golub, JAMA.
Consultant or Advisory Role: None declared.
Stock Ownership: K.R. Bray, Danaher Corporation (parent company of Beckman Coulter).
Honoraria: None declared.
Research Funding: None declared.
Expert Testimony: None declared.
Patents: None declared.
Received for publication April 16, 2014.
Accepted for publication April 30, 2014.
© 2014 American Association for Clinical Chemistry