Newcastle-Ottawa Scale: comparing reviewers' to authors' assessments

2014 | Carson Ka-Lok Lo, Dominik Mertz, Mark Loeb
This study compared Newcastle-Ottawa Scale (NOS) risk of bias assessments made by systematic reviewers with those made by the original study authors for 65 cohort studies included in a systematic review of influenza risk factors. Authors completed a survey covering all NOS items, and their responses were compared with the reviewers' assessments. The overall NOS score was significantly higher for reviewers (median = 6) than for authors (median = 5), and inter-rater reliability between the two groups was poor for the overall score and for most individual items, with at best slight agreement on a few items.

The discrepancy suggests that reviewers may assign lower scores because published articles report only limited information; the study therefore recommends that systematic reviewers contact study authors for unpublished details when applying the NOS. Because the NOS leaves room for subjective interpretation, which can undermine inter-rater reliability, the authors argue that training and more detailed guidance are needed. They also note that the NOS may be less reliable than other risk of bias tools, that revised or new instruments could improve its validity, and that authors' unfamiliarity with the tool may have contributed to the discrepancies. Overall, the findings indicate that the NOS may not be a reliable instrument for assessing risk of bias in observational studies, and that contacting authors for information not reported in the published study can improve the accuracy of risk of bias assessments.
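The agreement statistics discussed above are typically chance-corrected. As a minimal sketch of the idea (not the paper's exact analysis, which may have used weighted kappa), the following computes unweighted Cohen's kappa between two raters' item judgments; the example ratings are hypothetical:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement two independent raters with these
    marginal rating frequencies would reach by chance.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical star awards (1 = star given, 0 = not) on one NOS item
# for five studies, as judged by a reviewer and by the study's author.
reviewer = [1, 1, 0, 1, 0]
author   = [1, 0, 0, 1, 1]
print(round(cohens_kappa(reviewer, author), 3))
```

On the common Landis and Koch benchmarks, values at or below about 0.20 correspond to the "slight" agreement the study reports for some items.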