Abstract
Ethics committee approval was obtained; the approval document accompanies this abstract submission.
Background and Aims Self-reported pain scores are often used for pain assessment but require effective communication from the patient. Observer-based assessments are resource-intensive and require trained assessors. We developed an automated system that assesses pain intensity in adult patients from changes in facial expression.
Methods Patients' facial expressions were video-recorded from a frontal view using a customized mobile application. The collected videos were trimmed into multiple 1-second clips, each categorized into one of three pain levels: no pain, mild pain, or significant pain. A total of 468 facial keypoints were extracted from each video frame. A customized Spatial-Temporal Attention Long Short-Term Memory (STA-LSTM) deep learning network was trained and validated on these keypoints to detect pain level by analyzing facial expressions in both the spatial and temporal domains.
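The abstract does not name the landmark extractor, but 468 keypoints per frame matches the output of MediaPipe Face Mesh; the sketch below assumes that tool (plus OpenCV for video decoding) purely as an illustration of per-frame keypoint extraction from a clip.

```python
import cv2
import mediapipe as mp

def extract_keypoints(video_path):
    """Return per-frame lists of 468 normalized (x, y) facial keypoints."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    # Assumption: one face per frontal-view clip, as described above.
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                         max_num_faces=1) as mesh:
        ok, frame = cap.read()
        while ok:
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                landmarks = result.multi_face_landmarks[0].landmark
                frames.append([(p.x, p.y) for p in landmarks])
            ok, frame = cap.read()
    cap.release()
    return frames
```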
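The internal layout of the customized STA-LSTM is likewise not described; the following PyTorch sketch shows one common spatial-temporal attention arrangement (spatial attention over keypoint features, an LSTM over frames, temporal attention pooling), with hypothetical hidden size and frame rate.

```python
import torch
import torch.nn as nn

class STALSTM(nn.Module):
    """Minimal sketch of a spatial-temporal attention LSTM classifier."""
    def __init__(self, n_keypoints=468, coords=2, hidden=128, n_classes=3):
        super().__init__()
        feat = n_keypoints * coords
        # Spatial attention: re-weight flattened keypoint features per frame.
        self.spatial_attn = nn.Sequential(nn.Linear(feat, feat), nn.Sigmoid())
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)
        # Temporal attention: score each frame's hidden state.
        self.temporal_attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, frames, feat)
        x = x * self.spatial_attn(x)           # spatial weighting
        h, _ = self.lstm(x)                    # h: (batch, frames, hidden)
        w = torch.softmax(self.temporal_attn(h), dim=1)
        context = (w * h).sum(dim=1)           # attention-pooled summary
        return self.classifier(context)        # logits over 3 pain levels

model = STALSTM()
clip = torch.randn(4, 30, 468 * 2)  # 4 one-second clips; 30 fps is assumed
logits = model(clip)                # shape (4, 3)
```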
Results Two hundred patients were recruited, yielding 2,008 videos that were clipped into 10,274 1-second clips. Of these, 8,219 (80%) balanced and normalized clips were randomly selected for STA-LSTM training, while the remaining 2,055 (20%) were set aside for validation. In differentiating the three levels of pain (no pain versus mild pain versus significant pain requiring clinical intervention), the STA-LSTM model achieved an accuracy, precision, recall, and F1-score of 0.9217, 0.9215, 0.9215, and 0.9215, respectively.
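The abstract does not state how the per-class metrics were aggregated; the near-identical precision, recall, and F1 values are consistent with macro-averaging over the balanced three-class validation set. A minimal scikit-learn sketch with hypothetical labels:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical per-clip labels: 0 = no pain, 1 = mild, 2 = significant pain.
y_true = [0, 0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 0, 1, 2, 1, 1, 0, 2]

acc = accuracy_score(y_true, y_pred)
# Macro-averaging weights the three pain classes equally (an assumption).
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                   average="macro")
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```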
Conclusions Our proposed solution has the potential to support objective pain assessment in inpatient and outpatient healthcare settings, allowing healthcare professionals and caregivers to perform pain assessment with readily accessible infrastructure.