Daring discourse: artificial intelligence in pain medicine, opportunities and challenges
Meredith C B Adams1, Ariana M Nelson2 and Samer Narouze3

1 Departments of Anesthesiology, Biomedical Informatics, Physiology & Pharmacology, and Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
2 Department of Anesthesiology and Perioperative Care, University of California Irvine, Irvine, California, USA
3 Western Reserve Hospital, Cuyahoga Falls, Ohio, USA

Correspondence to Dr Ariana M Nelson, Department of Anesthesiology and Perioperative Care, University of California Irvine, Irvine, CA 92868, USA; arianamn{at}hs.uci.edu

Abstract

Artificial intelligence (AI) tools are currently expanding their influence within healthcare. For pain clinics, unfettered introduction of AI may cause concern among both patients and healthcare teams. Much of the concern stems from the lack of community standards and understanding of how the tools and algorithms function. Data literacy and understanding can be challenging even for experienced healthcare providers, as these topics are not incorporated into standard clinical education pathways. Another reasonable concern involves the potential for encoding bias into healthcare screening and treatment through faulty algorithms. And yet, the massive volume of data generated by healthcare encounters is increasingly challenging for healthcare teams to navigate and will require an intervention to make the medical record manageable in the future. AI approaches that lighten the workload and support clinical decision-making may provide a solution to the ever-increasing menial tasks involved in clinical care. The potential for pain providers to have higher-quality connections with their patients and manage multiple complex data sources might balance the understandable concerns around data quality and decision-making that accompany the introduction of AI. As a specialty, pain medicine will need to establish thoughtfully and intentionally integrated AI tools to help clinicians navigate the changing landscape of patient care.

  • TECHNOLOGY
  • CHRONIC PAIN
  • Economics
  • Diagnostic Techniques and Procedures
  • Treatment Outcome


Introduction

Artificial intelligence (AI) is a field of computer science that identifies and predicts patterns in large datasets. Machine learning (ML) is a subset of AI that builds statistical models from training datasets to predict findings.1 Models can be supervised, learning from human-labeled examples, or unsupervised, identifying structure in data without explicit labels. Deep learning is a type of ML that uses layered neural networks to generate automated predictions from training datasets. In medical research and clinical care, AI methods have the potential to identify patterns for diagnosis and treatment (table 1).1
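For readers less familiar with these terms, the distinction can be made concrete in a few lines of code. The following is a minimal sketch, using scikit-learn and synthetic data (all variables are hypothetical), of the supervised/unsupervised contrast described above; it is an illustration of the concepts, not a clinical model.

```python
# Minimal sketch of supervised vs unsupervised learning, using
# scikit-learn and synthetic data (all variables hypothetical).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # eg, baseline pain score and opioid dose

# Supervised: human-provided labels (eg, chronic pain at 6 months)
# steer the model toward a specific prediction target.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: no labels; the algorithm finds structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised clusters:", clusters[:5])
```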

Table 1

Definition and current applications of assorted aspects of artificial intelligence

As we begin to incorporate ever-growing quantities of patient-related data, curating and managing these data sources becomes a challenge for an overwhelmed health workforce. While some clinicians embrace AI and the accompanying automation and ability to process large-scale data rapidly, others are concerned about the potential consequences. One of the logistical challenges of assessing the benefits and harms of AI is evaluating and monitoring its impact on clinical care. Specifically, operationalizing AI for decision support involves design, development, selection, use, and ongoing surveillance.1

The primary intersection of AI and pain medicine is through developments in adjacent clinical domains and the use of clinical decision support (CDS). CDS related to opioid prescribing has recently been classified as a medical device and will require US Food and Drug Administration (FDA) approval if the system lacks algorithmic transparency or uses a closed-loop process, as some prescription drug monitoring programs (PDMPs) do when calculating opioid risk scores.2 3 CDS has a large body of literature supporting its benefits but also highlighting associated alarm fatigue.4

As we weigh the strengths and weaknesses of incorporating AI into clinical settings, it is worth noting that the FDA has, to date, approved at least 29 AI health devices and algorithms for patient care.5 One of the major challenges in blending technology and medical decision-making is understanding that algorithms are constructed by humans and thus fundamentally possess biases and blind spots. These have the potential to be amplified and encoded into clinical care, worsening access and health equity. In anesthesiology, a classic case of this detrimental combination of technology and blind spots was the development of pulse oximetry in Japan. Because the measurement is affected by melanin levels, the design places a large proportion of the world’s population at risk for undertreated oxygenation issues, which was particularly consequential in stratification of care during the COVID-19 pandemic.6

Ultimately, the goal of physicians caring for patients with pain is to provide the safest and most effective care possible. The growing volume of digital health data is rapidly becoming an overwhelming burden. A clinician attempting to absorb the multitudinous notes, imaging studies, and laboratory tests may learn to triage the most important data points, but AI could perform this task more efficiently and presumably with fewer critical omissions. Similarly, scientific databases may be queried more effectively for answers to real-time clinical questions with the assistance of AI. In this work, we examine the strengths and limitations of integrating AI and automated technologies into the clinical care sphere of pain medicine.

Yes: AI will improve care of patients with pain if leveraged thoughtfully

As we work toward precision medicine, AI can play a role in supporting health equity and the delivery of best-practice treatment in pain medicine. Understanding that predictions are only as good as the data that trained the models, the best performing models can provide objective information that can reduce bias and markedly improve healthcare treatment recommendations. For example, ML techniques have been used to successfully predict chronicity of symptoms after COVID-19 infection.7 If a similar approach were applied to patients with acute pain to predict the likelihood of transition to chronicity, it would improve understanding of these pathways and suggest potential interventions that could prevent this conversion.

Mental health is one of the clinical areas highly relevant to pain treatment that is cautiously embracing the branch of AI known as natural language processing (NLP), the automated analysis of text for meaning. Recent work in this area demonstrates potential for advances in the identification of people with suicidal ideation, using algorithms that screen noisy emergency department (ED) notes to identify patients who would benefit from mental health services.8 In a related clinical domain, AI is emerging as part of risk, screening, and imaging evaluation for spine surgery.9 10 Large databases (eg, The Cancer Genome Atlas, National Health and Nutrition Examination Survey, Surveillance, Epidemiology, and End Results) are increasingly available, but their massive volumes of information require AI transformation to optimize their ability to improve patient outcomes.11 In pain medicine, AI may support risk assessment and screening criteria that inform interventions and treatment plans in at-risk patients12 and predict increased resource utilization.13
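To illustrate the kind of note screening described above, the following toy sketch classifies short clinical text snippets; the notes, labels, and pipeline are invented for illustration and are not the cited study’s data or method.

```python
# Toy NLP screening sketch (notes and labels invented; not the
# cited study's data or model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "worsening low back pain, no mood complaints",
    "states life is not worth living, endorses hopelessness",
    "follow-up for knee osteoarthritis, sleeping well",
    "expresses passive thoughts of self-harm",
]
labels = [0, 1, 0, 1]  # 1 = flag for mental health referral

# TF-IDF features plus logistic regression: a common NLP baseline.
screen = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       LogisticRegression())
screen.fit(notes, labels)
print(screen.predict(["patient endorses hopelessness and poor sleep"]))
```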

Supervised AI, where a CDS-generated suggestion is confirmed by a clinician prior to implementation, has been used to predict total patient-controlled analgesia (PCA) consumption based on clinically relevant variables.14 NLP has numerous potential applications for both clinical care and research in pain medicine, primarily through its ability to work in the subjective and unstructured areas of the electronic health record (EHR). This algorithmic support can provide the structure for identifying patients who might experience less common side effects or are more likely to benefit from treatments, which could decrease the information-gathering burden on clinicians.15 Thoughtful use of AI in decision support can improve the sensitivity and specificity of this augmented clinical care.16 An AI tool was able to predict the need for acute pain service consultation with 93% accuracy over a decade ago; if this were implemented in earnest today, the resultant ability to allocate resources could be leveraged to optimize clinician staffing and improve access for patients.17
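The PCA-consumption prediction above can be sketched in the same spirit. In the sketch below, the features, synthetic target, and model choice are hypothetical stand-ins, not the variables or algorithm from the cited study.

```python
# Hypothetical sketch of supervised prediction of total PCA opioid
# consumption; features and model are illustrative stand-ins only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(18, 90, n),   # age (years)
    rng.normal(75, 15, n),     # weight (kg)
    rng.normal(120, 40, n),    # surgical duration (min)
    rng.integers(0, 11, n),    # baseline pain score (0-10)
])
# Synthetic target: 24-hour PCA morphine consumption (mg).
y = 0.2 * X[:, 1] + 0.05 * X[:, 2] + 2.0 * X[:, 3] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("predicted mg:", model.predict(X_test[:3]).round(1))
```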

No: AI will burden healthcare workers and is unlikely to improve pain outcomes

Although AI may have provided quantitative benefits in very specific healthcare settings, these tend to be clinical arenas where large amounts of reliable, discrete data are available for review. Even in those circumstances, the technology is not developed to the point where AI can function in a silo without physician oversight. A paradigm case is observed in the field of radiology, where standalone AI is outperformed by radiologists. Although AI may identify suspicious lesions that are overlooked by radiologists, it also adds to the workload by increasing the total number of scans flagged for review.16 Furthermore, if AI detects a questionable lesion and a physician reviews the scan and overrules this AI classification of concern, the physician will bear increased liability if the patient does eventually develop cancer. On the topic of liability, permitting assistive AI to support non-experts in the performance of regional anesthesia procedures is certainly a potential safety hazard, regardless of advancements in dynamic ultrasound.18

These imaging-related applications for AI, although they do not function pristinely, are passable only because the data available for review are plentiful and relatively homogeneous (eg, breast mammogram, brachial plexus ultrasound). In nuanced and multifactorial clinical scenarios, such as discerning the etiology of low back pain, AI is likely to markedly underperform a clinician, just as it has been shown to be inferior to physician diagnosis in the multifaceted environment of the ED.19 Indeed, as any pain physician can attest, imaging does not always correlate with a patient’s symptoms, and self-report of the subjective experience of pain has been proven superior to even advanced neuroimaging evaluated by ML algorithms.20 AI has also fallen short in quantitative assessments, as researchers were unable to detect new risk factors for death after myocardial infarction despite using multiple ML methodologies.21 Analogous evaluations to identify risk factors in pain diagnoses, for which no expansive databases exist, are likely to be equally unsuccessful.

Outside of its limitations in diagnosis, AI is also poorly equipped to construct treatment plans, even within the limited range of a prescribed algorithm. Using genetic profiling, ML was unable to predict the opioid dose that would be required for patients with cancer, which does not bode well for the typically more complex analgesic regimens constructed for patients with chronic non-cancer pain.22 In a similar vein, a home health application for management of low back pain was ‘non-inferior’ to actual time the patient would spend with a clinician,23 but in interviews of patients involved in the trial, those who did not find benefit with the use of this digital tool felt that clinician involvement would have been superior.24

Other affective-domain concerns include breaches of patient confidentiality and diminished agency, which might be intensified with the use of AI technologies. As an example, harm reduction initiatives aim to reduce opioid overdose deaths, but use of large insurance databases to predict morbidity and mortality in patients with opioid use disorder can endanger patient privacy.25 In addition, amplified bias from such results might reduce patients’ sense of agency by increasing hopelessness. In complex clinical scenarios like these, AI must be restrained, as it has the potential to diminish the role of individuals in their own care.

As healthcare systems increasingly emphasize efficiency and cost saving, AI could also decrease access to care for patients with state-funded insurance or complex diagnoses. The orthopedic literature is already replete with articles extolling the virtues and inevitability of incorporating prognostic ML into practice, but these early models have notably lacked any attention to social determinants of health (SDH).26 Although not yet included in these processes, one can imagine that SDH factors considered high risk for poor outcomes may be screened out of consideration, resulting in inequity in the selection of pharmaceutical or interventional candidates.

Discussion

The variably subtle or overt presence of AI in current applications of clinical medicine is indisputable. The resulting question is not whether pain clinicians should include AI, but rather how we best leverage this technology to lessen clinical workload and improve care (figure 1). AI faces numerous challenges in data quality, particularly missingness that is not random and data that are incomplete, inconsistent, and potentially inaccurate.27 To thrive with AI, pain medicine will need infrastructure that clinicians can rely on for clinical support.27 One of the pain medicine-specific challenges of AI integration is that, unlike clinical domains such as cancer or cardiovascular medicine, our research and body of literature are not robustly defined enough to broadly support algorithmic approaches to clinical care. Much of our current understanding of AI integration originates from adjacent fields, which is a major limitation, but the observed patterns can provide the foundation for thoughtfully incorporating AI into pain medicine.

Figure 1

Positive and negative attributes of AI must be considered when incorporating these algorithms into relevant clinical domains of pain medicine. AI, artificial intelligence.
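The data-quality problems noted above can be made concrete with a simple audit. The following is a minimal pandas sketch with hypothetical column names, including an illustrative first-pass check for non-random missingness.

```python
# Minimal data-quality audit sketch (column names hypothetical).
import numpy as np
import pandas as pd

ehr = pd.DataFrame({
    "pain_score":      [7, np.nan, 5, np.nan, 8, 6],
    "opioid_mme":      [30, 45, np.nan, np.nan, np.nan, 20],
    "has_pt_referral": [1, 0, 1, 0, 1, 1],
})

# Fraction missing per column: a first-pass completeness check.
print(ehr.isna().mean().sort_values(ascending=False))

# Non-random missingness check: do rows lacking opioid_mme differ
# systematically on another variable (here, referral rate)?
print(ehr.groupby(ehr["opioid_mme"].isna())["has_pt_referral"].mean())
```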

When evaluating opportunities to leverage AI in healthcare, an important framework to consider is how it can be used to sift rapidly through enormous amounts of clinical data to improve human clinical decision-making. While AI is only as good as its dataset and algorithms, it does provide an opportunity for improved diagnosis and treatment of patients by shouldering the burdensome portions of clinical care. In addition, if AI can be used to harmonize, collect, and organize patient data in a way that supports higher-quality care interactions between patients and providers, this would encourage the continued integration of these tools. Physician burnout has become a platitude without a simple solution, but AI tools that decrease the administrative burden could potentially improve satisfaction in clinical work.

The challenges of AI in healthcare are numerous because this heterogeneous set of tools has a wide range of implications and potential negative ramifications. Community safety and privacy standards for AI in healthcare are only in the beginning phases of development. AI and algorithms show great promise in a research capacity for knowledge discovery but, in their current state, if these algorithms are applied to direct patient care, it must be through CDS with clinician oversight. The trap of advancing technology is adopting the convenient functionality without questioning the methods and development. The potential for algorithmically encoded bias is significant, with numerous potential downstream effects on clinical care.28 One of the design challenges associated with AI is that many of the technical solutions are developed without clinical domain expertise. Another concern is that the lack of standardization and consistency in medicine creates challenges for an algorithm attempting to incorporate this information. For example, use of morphine milligram equivalents (MME) is an attempt to standardize opioid dosing, but different opioid calculators use different methods to generate these results. Few of these calculators use data to develop their algorithms, and individual variation may cause a given patient to be more or less responsive to a particular opioid. These decisions are difficult to navigate as a clinician, but it would be even more challenging to feel confident in an AI recommendation based on data that have been identified as faulty. This highlights the incontrovertible fact that the governing principle of dependable AI is reliable foundational data.
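To make the calculator problem concrete, a minimal sketch follows. The conversion factors are taken from one commonly published table (the CDC’s); as noted above, other calculators use different factors, so these values should be read as assumptions rather than a universal standard.

```python
# Illustrative MME calculation. Conversion factors follow one commonly
# published table (CDC); other calculators differ, which is exactly the
# standardization problem described above.
MME_FACTORS = {
    "morphine":      1.0,
    "oxycodone":     1.5,
    "hydrocodone":   1.0,
    "hydromorphone": 4.0,  # some calculators use 5.0
    "tramadol":      0.1,
}

def daily_mme(opioid: str, dose_mg: float, doses_per_day: int) -> float:
    """Total daily morphine milligram equivalents for one oral opioid."""
    return dose_mg * doses_per_day * MME_FACTORS[opioid]

# Example: oxycodone 10 mg four times daily -> 10 * 4 * 1.5 = 60 MME/day.
print(daily_mme("oxycodone", 10, 4))
```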

If AI could be leveraged to be truly supportive, the benefits would be tangible for front-line clinicians. For example, a PDMP that intelligently guides a clinician toward safer opioid prescribing practices would reduce patient morbidity. ML that predicts patient response to spinal cord stimulation would be similarly transformative, and early studies already show promising results.29 Likewise, EHRs already present important data in real time while clinicians are actively ordering medications, such as a window displaying creatinine values when a renally cleared medication is ordered. If this could be enhanced in ways that reduce the cognitive burden on pain providers who are already stretched to maximum efficiency, patient outcomes could be improved.4 Lastly, using AI to predict duration of care to best allocate healthcare resources can help rein in healthcare costs.30 The opportunity for increased patient access through reduction of trivial tasks and precise patient selection for certain therapeutics could be truly transformative. Above all, AI must be integrated intelligently.
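An order-time rule of the kind described above (surfacing creatinine when a renally cleared drug is ordered) might look like the following sketch. The drug list and alert threshold are hypothetical and are not drawn from any specific EHR.

```python
# Sketch of a simple order-time CDS rule: surface the latest creatinine
# when a renally cleared drug is ordered. Drug list and threshold are
# hypothetical, not from any specific EHR.
RENALLY_CLEARED = {"gabapentin", "pregabalin", "morphine", "baclofen"}

def order_check(drug: str, latest_creatinine_mg_dl: float) -> str:
    if drug.lower() not in RENALLY_CLEARED:
        return f"{drug}: no renal check triggered."
    note = f"{drug}: latest creatinine {latest_creatinine_mg_dl} mg/dL."
    if latest_creatinine_mg_dl > 1.5:  # illustrative threshold only
        note += " Consider dose adjustment."
    return note

print(order_check("gabapentin", 2.1))
```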

Ethics statements

Patient consent for publication

References

Footnotes

  • Twitter @meredithadamsmd, @ANels_MD

  • Contributors MCBA, AMN and SN contributed to analysis of the literature, design of the manuscript and to the writing of the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests MCBA receives research support from the NIH HEAL Initiative through the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under grant number K08EB022631 and the National Institute on Drug Abuse under grant numbers R24DA055306 and R24DA055306-01S1. AMN receives research support from Veoneer to investigate a biomarker for cannabis intoxication that can be used in roadside testing.

  • Provenance and peer review Not commissioned; externally peer reviewed.