Raise standards for preclinical cancer research

29 MARCH 2012 | C. Glenn Begley and Lee M. Ellis
Many preclinical cancer research findings are not reproducible, in part because of inadequate cell-line and animal models. C. Glenn Begley and Lee M. Ellis argue that methods, publications and incentives must change to benefit patients.

Over the past decade, efforts to characterise the genetic changes in human cancers have greatly improved understanding of the molecular drivers of disease, but translating these insights into clinical success has proved difficult. Oncology has the highest clinical-trial failure rate of any therapeutic area. This low success rate is not sustainable, and investigators must reassess how they translate discovery research into clinical success.

Many factors contribute to the high failure rate, including the limitations of preclinical tools such as inadequate cancer-cell-line and mouse models. Issues of clinical-trial design also play a significant role: uncontrolled phase II studies, reliance on standard criteria for evaluating tumour response, and the difficulty of selecting patients prospectively. But the quality of published preclinical data is itself a major contributor to failure in oncology trials, because drug development relies heavily on the literature, especially for new targets and biology.

Clinical endpoints in cancer are defined mainly in terms of patient survival, rather than the intermediate endpoints used in other disciplines, so many years pass before the clinical applicability of an initial preclinical observation is known. The results of preclinical studies must therefore be exceptionally robust to withstand the rigours and challenges of clinical trials. Yet Amgen's findings show that only 11% of 'landmark' studies could be reproduced, highlighting the need for greater rigour in preclinical research. The scientific community tends to assume that the claims of a preclinical study can be taken at face value, but this is not always the case.
The inability of industry and of clinical trials to validate results from the majority of publications on potential therapeutic targets suggests a general, systemic problem. There are, of course, many examples of outstanding research that has been rapidly and reliably translated into clinical benefit; the broader pattern, however, points to a systemic issue.

To improve the reliability of preclinical cancer studies, there must be more opportunities to present negative data, and journal editors must play an active part in initiating this cultural change. Mechanisms are needed to report negative data in forms accessible through PubMed and other search engines. Preclinical investigators should be blinded to the control and treatment arms, and should use only rigorously validated reagents. All experiments should include, and show, appropriate positive and negative controls. Critical experiments should be repeated, preferably by different investigators in the same laboratory, and the entire data set must be represented in the final publication.

The responsibility for the design, analysis and presentation of data rests with the investigators, the laboratory and the host institution. All are accountable for poor experimental design, a lack of robust supportive data, or selective data presentation. The scientific process demands the highest standards of quality, ethics and rigour.