Evaluating non-randomised intervention studies

September 2003 | JJ Deeks, J Dinnes, R D'Amico, AJ Sowden, C Sakarovitch, F Song, M Petticrew, DG Altman
This report, authored by JJ Deeks, J Dinnes, R D'Amico, AJ Sowden, C Sakarovitch, F Song, M Petticrew, and DG Altman, evaluates methods and evidence for assessing bias in non-randomised intervention studies. The study was conducted in collaboration with the International Stroke Trial and the European Carotid Surgery Trial Collaborative Groups. The report is part of the Health Technology Assessment (HTA) Programme, which aims to produce high-quality research on the costs, effectiveness, and broader impact of health technologies.

The report comprises three systematic reviews and two empirical studies. The systematic reviews examine existing evidence of bias in non-randomised studies, the content and usability of quality assessment tools, and how study quality is assessed in systematic reviews. The empirical studies generate non-randomised studies from large, multicentre randomised controlled trials (RCTs) and assess both the impact of non-random allocation on study results and the effectiveness of case-mix adjustment methods in correcting selection bias.

Key findings include:

- Results from non-randomised studies may differ from those of RCTs, but do not always do so.
- Non-randomised studies can produce misleading results even when treated and control groups appear similar on prognostic factors.
- Standard case-mix adjustment methods do not guarantee the removal of bias.
- Residual confounding may be high even when good prognostic data are available.
- Many quality assessment tools for non-randomised studies omit key quality domains.
- Healthcare policies based on non-randomised studies may need re-evaluation if the uncertainty in the evidence base was not fully appreciated.

The report concludes that non-randomised studies should be undertaken only when RCTs are infeasible or unethical. It also recommends further research on the use of resampling methodology, the development of quality assessment tools, and the evaluation of case-mix adjustment methods.
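The finding that case-mix adjustment does not guarantee removal of bias can be illustrated with a small simulation. The sketch below is purely hypothetical (it is not the report's resampling method, and the variable names `age` and `frailty` are invented for illustration): allocation depends on two prognostic factors, but only one is observed, so regression adjustment shrinks the bias without eliminating it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Two prognostic factors: one observed (age), one unmeasured (frailty).
age = rng.normal(0.0, 1.0, n)
frailty = rng.normal(0.0, 1.0, n)

# Non-random allocation: the chance of treatment depends on BOTH
# prognostic factors, creating selection bias.
p_treat = 1.0 / (1.0 + np.exp(-(0.8 * age + 0.8 * frailty)))
treated = rng.random(n) < p_treat

# The true treatment effect is zero; outcome depends only on prognosis.
outcome = 1.0 * age + 1.0 * frailty + rng.normal(0.0, 1.0, n)

# Unadjusted comparison of group means: badly biased away from zero.
unadjusted = outcome[treated].mean() - outcome[~treated].mean()

# Case-mix adjustment by OLS, using only the observed covariate.
X = np.column_stack([np.ones(n), treated.astype(float), age])
beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
adjusted = beta[1]  # coefficient on the treatment indicator

print(f"unadjusted effect: {unadjusted:+.2f}")  # far from the true 0
print(f"adjusted effect:   {adjusted:+.2f}")    # smaller, but residual bias remains
```

Because `frailty` is never measured, the adjusted estimate is closer to the true null effect than the crude comparison, yet still biased, which mirrors the report's point about residual confounding persisting even with good prognostic data.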