

Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

Authors
Ji Eun Park; Kyunghwa Han; Yu Sub Sung; Mi Sun Chung; Hyun Jung Koo; Hee Mang Yoon; Young Jun Choi; Seung Soo Lee; Kyung Won Kim; Youngbin Shin; Suah An; Hyo-Min Cho; Seong Ho Park
Citation
KOREAN JOURNAL OF RADIOLOGY, Vol. 18(6): 888-897, 2017
Journal Title
KOREAN JOURNAL OF RADIOLOGY
ISSN
1229-6929
Issue Date
2017
Keywords
Agreement ; Reliability ; Repeatability ; Repeatability coefficient ; Reproducibility ; Software program ; Statistical analysis ; Statistical method
Abstract
Objective: To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test.

Materials and methods: Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged against the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA) and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal; reliability studies were evaluated by reviewers experienced in reliability analysis.

Results: Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies.

Conclusion: Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary.
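
As an illustration of the reporting detail the abstract calls for (stating the weighting scheme for a weighted kappa and the model and definition of an intraclass correlation coefficient), a minimal sketch in Python is shown below. The data, rater labels, choice of quadratic weights, and use of the scikit-learn and pingouin libraries are assumptions for the example and are not taken from the study.

```python
# Illustrative sketch only: computes a weighted kappa and an ICC while making
# the methodological choices explicit, in the spirit of the paper's recommendations.
# The data and parameter choices (quadratic weights, ICC model selection) are
# hypothetical assumptions for this example.
import pandas as pd
from sklearn.metrics import cohen_kappa_score
import pingouin as pg

# Hypothetical ordinal scores (1-4) assigned by two readers to 8 subjects.
reader1 = [1, 2, 2, 3, 4, 3, 2, 1]
reader2 = [1, 2, 3, 3, 4, 2, 2, 1]

# Weighted kappa: the weighting scheme (linear vs. quadratic) should be reported.
kappa_quadratic = cohen_kappa_score(reader1, reader2, weights="quadratic")
print(f"Weighted kappa (quadratic weights): {kappa_quadratic:.2f}")

# Hypothetical continuous measurements in long format (one row per subject-rater pair).
long_df = pd.DataFrame({
    "subject": list(range(8)) * 2,
    "rater":   ["A"] * 8 + ["B"] * 8,
    "score":   [10.1, 12.3, 11.0, 9.8, 14.2, 13.1, 10.5, 11.7,
                10.4, 12.0, 11.3, 9.9, 14.0, 13.5, 10.2, 11.9],
})

# pingouin reports the single- and average-measure forms of ICC1-ICC3; the model
# and definition used (e.g., two-way random effects, absolute agreement, ICC(2,1))
# should be stated explicitly when reporting results.
icc_table = pg.intraclass_corr(data=long_df, targets="subject",
                               raters="rater", ratings="score")
print(icc_table[["Type", "Description", "ICC"]])
```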
Files in This Item:
T201705518.pdf
DOI
10.3348/kjr.2017.18.6.888
Appears in Collections:
1. College of Medicine (의과대학) > Research Institute (부설연구소) > 1. Journal Papers
Yonsei Authors
Han, Kyung Hwa(한경화)
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/179842


