
Machine learning model of facial expression outperforms models using analgesia nociception index and vital signs to predict postoperative pain intensity: a pilot study

Authors
Insun Park ; Jae Hyon Park ; Jongjin Yoon ; Hyo-Seok Na ; Ah-Young Oh ; Junghee Ryu ; Bon-Wook Koo
Citation
KOREAN JOURNAL OF ANESTHESIOLOGY, Vol. 77(2): 195-204, 2024-04
Journal Title
KOREAN JOURNAL OF ANESTHESIOLOGY
ISSN
2005-6419
Issue Date
2024-04
MeSH
Analgesia* ; Anesthesia, General ; Artificial Intelligence ; Facial Expression ; Humans ; Machine Learning ; Nociception* / physiology ; Pain Measurement ; Pain, Postoperative / diagnosis ; Pain, Postoperative / etiology ; Pilot Projects ; Vital Signs
Keywords
Artificial intelligence ; Facial expression ; Machine learning ; Pain measurement ; Postoperative pain ; Vital signs
Abstract
Background: Few studies have evaluated the use of automated artificial intelligence (AI)-based pain recognition in postoperative settings or the correlation with pain intensity. In this study, various machine learning (ML)-based models using facial expressions, the analgesia nociception index (ANI), and vital signs were developed to predict postoperative pain intensity, and their performances for predicting severe postoperative pain were compared.

Methods: In total, 155 facial expressions from patients who underwent gastrectomy were recorded postoperatively; one blinded anesthesiologist simultaneously recorded the ANI score, vital signs, and patient self-assessed pain intensity based on the 11-point numerical rating scale (NRS). The ML models' areas under the receiver operating characteristic curve (AUROCs) were calculated and compared using DeLong's test.
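As background for readers unfamiliar with the statistical comparison used here, the following is a minimal, self-contained sketch of DeLong's test for two correlated AUROCs computed on the same labels. It is an illustrative implementation of the standard method (Mann-Whitney AUC plus structural-component covariances), not the authors' actual analysis code; all function and variable names are hypothetical.

```python
import numpy as np
from scipy.stats import norm


def _auc_components(pos, neg):
    """AUC and DeLong structural components for one score vector."""
    # psi = 1 if positive outranks negative, 0.5 on ties, 0 otherwise
    psi = (pos[:, None] > neg[None, :]).astype(float) \
        + 0.5 * (pos[:, None] == neg[None, :])
    auc = psi.mean()
    v10 = psi.mean(axis=1)  # one component per positive case
    v01 = psi.mean(axis=0)  # one component per negative case
    return auc, v10, v01


def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test comparing two correlated AUROCs.

    Returns (auc_a, auc_b, p_value)."""
    y_true = np.asarray(y_true)
    sa, sb = np.asarray(scores_a, float), np.asarray(scores_b, float)
    pos_mask = y_true == 1
    auc_a, v10_a, v01_a = _auc_components(sa[pos_mask], sa[~pos_mask])
    auc_b, v10_b, v01_b = _auc_components(sb[pos_mask], sb[~pos_mask])
    m, n = pos_mask.sum(), (~pos_mask).sum()
    # 2x2 covariance matrices of the structural components
    s10 = np.cov(v10_a, v10_b)
    s01 = np.cov(v01_a, v01_b)
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m \
        + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n
    z = (auc_a - auc_b) / np.sqrt(var)
    p_value = 2 * norm.sf(abs(z))
    return auc_a, auc_b, p_value
```

With 155 observations split into severe (NRS ≥ 7) and non-severe groups, a call like `delong_test(labels, model1_probs, model2_probs)` would yield the paired AUROC comparison reported in the Results.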

Results: ML models were constructed using facial expressions, ANI, vital signs, and different combinations of the three datasets. In the test set, the ML model constructed using facial expressions best predicted an NRS ≥ 7 (AUROC 0.93), followed by the ML model combining facial expressions and vital signs (AUROC 0.84). ML models constructed using combined physiological signals (vital signs, ANI) performed better than models based on individual parameters for predicting NRS ≥ 7, although their AUROCs were inferior to that of the ML model based on facial expressions (all P < 0.050). Among these parameters, absolute and relative ANI had the worst AUROCs (0.69 and 0.68, respectively) for predicting NRS ≥ 7.

Conclusions: The ML model constructed using facial expressions best predicted severe postoperative pain (NRS ≥ 7) and outperformed models constructed from physiological signals.

Trial registration: ClinicalTrials.gov NCT05477303.
Files in This Item:
T992025178.pdf
DOI
10.4097/kja.23583
Appears in Collections:
1. College of Medicine > Dept. of Radiology > 1. Journal Papers
Yonsei Authors
Yoon, Jongjin(윤종진)
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/204269
