Quality improvement (QI) embodies ‘systematic and continuous actions that lead to measurable improvement in healthcare services and the health status of targeted patient groups’.1 Through continuous analysis and understanding of the key components of its delivery system, an organization can refine key processes and thereby improve the quality of the healthcare it provides. Systematic reviews of QI studies suggest that QI interventions may be effective across a wide range of health issues, such as hospital readmission and the management of diabetes and osteoporosis.1–3
The reporting of QI interventions and techniques in perioperative medicine is often deficient, making it difficult to ascertain whether an intervention can be effectively used in another setting.4 There are many reasons for the suboptimal reporting of QI studies, which may be related to the descriptive nature of the interventions, confounding, and unfamiliar methods.5 In an attempt to standardize the scholarly publication of healthcare QI studies, the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines were published and subsequently updated.6 7 Researchers embarking on QI-related projects intended for publication in Regional Anesthesia & Pain Medicine are encouraged to familiarize themselves with these guidelines, as they provide a strategic framework for organizing and presenting such investigations.
In this edition of Regional Anesthesia & Pain Medicine, we have invited Greg Ogrinc, MD, MS, who is the Senior Vice President for Certification Standards and Programs at the American Board of Medical Specialties, to provide recommendations for conducting practice and QI investigations in acute and chronic pain medicine.8 Regional Anesthesia & Pain Medicine receives many submissions in which the authors attempt to evaluate the efficacy of new practice changes. Such efforts are facilitated by the widespread availability of electronic medical records (EMR) and data analytic software. However, such quality assurance-type work tends to be ranked with a lower priority given its observational nature, retrospective approach, and the coarse outcome metrics, which are often driven by compliance and billing needs. We believe that QI work, and the iterative process of understanding what works and what does not, is crucial to improving patient well-being. As such, the editorial board would like to assist authors in their preparation, analysis, and presentation of QI projects. We hope this manuscript serves as a valuable reference for both our readers and researchers who are considering publishing their own institution-specific observational data.
The editorial board would like to highlight the value of statistical process control (SPC) charts. Typically, authors compare performance before an intervention (eg, launching an enhanced recovery program) with performance after. This presentation is almost universally a single aggregate summary metric over some specified time period (eg, 6 months before vs 6 months after). The fundamental issue with before/after studies (with outcome metrics such as length of stay, costs, and opioid administration) is that confounding temporal trends may exist independent of any intervention. A good example is length of stay (LOS), where most health systems are under intense pressure to open up bed access and decrease inpatient stay durations. Thus, decreases in LOS could erroneously be ascribed to an enhanced recovery intervention when the true cause was (for example) the hiring of an entire new team of social workers to establish efficient patient dispositions. Even when advanced analytical techniques are used with aggregate data, it is very hard to conclude that the intervention was responsible for the observed change in performance.
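To make the confounding argument concrete, consider a toy calculation (ours, with entirely hypothetical numbers): monthly LOS that falls steadily by 0.05 days per month throughout, with no intervention effect at all. A naive before/after comparison still reports an apparent "improvement".

```python
# Hypothetical monthly LOS with a steady secular downward trend
# (0.05 days/month) and NO intervention effect of any kind.
los = [5.0 - 0.05 * m for m in range(12)]

before = los[:6]   # 6 months "before" a hypothetical intervention
after = los[6:]    # 6 months "after"

mean_before = sum(before) / len(before)  # 4.875 days
mean_after = sum(after) / len(after)     # 4.575 days

# The aggregate comparison shows a 0.3-day "improvement" that is
# entirely attributable to the pre-existing trend, not the intervention.
apparent_effect = mean_before - mean_after
```

Any before/after design applied to these data would credit the intervention with a benefit the trend alone produced.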
We think an effective approach to presenting quality data is through a specialized time series analysis. The analytic approach is known as statistical process control and was developed in the late 1920s by Dr Walter Shewhart, a statistician at the AT&T Bell Laboratories in the USA who sought to study production quality.9 In these charts, which most readers have likely seen, the x-axis is time and the y-axis is the outcome. The goal is to identify patterns of change in the data, and specific rules are applied to determine whether a ‘special-cause’ signal is present that is temporally associated with the intervention. A problem with summary aggregate analysis (eg, means, medians) is that considerable information buried in the time covariate is lost. As Dr Ogrinc points out, SPC analysis is often used as a monitoring process to identify a special-cause signal (which could be good or bad) in real time based on statistical principles. An attractive feature of SPC analysis is that you can provide a temporal control: a clinical area similar to the study environment that did NOT experience the intervention. For instance, if you were studying the effect of a new nerve block for total knee replacement on LOS, you could also analyze LOS in total hip arthroplasty patients who did not receive the new nerve block. If the same pattern of falling LOS over time is seen, you would be much less confident that the new nerve block was the driver. We think the best quality assurance papers would take advantage of both aggregate summary measures and SPC analysis. As always, the editorial board strongly favors the presentation of crude results BEFORE any modeling is undertaken.
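A minimal sketch of the SPC idea follows. This is our illustration, not the article's: the data are hypothetical monthly LOS values, the limits follow the common individuals (XmR) chart convention (3-sigma limits estimated from the average moving range of a pre-intervention baseline), and the two special-cause rules applied — a point beyond the 3-sigma limits, and a run of 8 consecutive points on one side of the center line — are standard Shewhart/Western Electric rules, though rule sets vary by convention.

```python
import statistics

def spc_signals(values, baseline_n, sigma_limit=3.0, run_length=8):
    """Flag special-cause signals on an individuals (XmR-style) chart.

    Center line and control limits are estimated from the first
    `baseline_n` points (the pre-intervention period).
    """
    baseline = values[:baseline_n]
    center = statistics.fmean(baseline)
    # Estimate sigma from the average moving range, as on an XmR chart
    # (divide by the d2 constant 1.128 for subgroups of size 2).
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    sigma = statistics.fmean(moving_ranges) / 1.128
    lcl = center - sigma_limit * sigma
    ucl = center + sigma_limit * sigma

    signals = []
    run, side = 0, 0  # length of current run and its side (+1 above, -1 below)
    for i, v in enumerate(values):
        if v > ucl or v < lcl:
            signals.append((i, "beyond 3-sigma limit"))
        s = 1 if v > center else (-1 if v < center else 0)
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run == run_length:
            signals.append((i, f"run of {run_length} on one side"))
    return center, (lcl, ucl), signals

# Hypothetical monthly LOS: 12 stable baseline months, then a sustained
# downward shift after an (assumed) enhanced recovery program launch.
los = [4.1, 3.9, 4.2, 4.0, 4.3, 3.8, 4.1, 4.0, 4.2, 3.9, 4.1, 4.0,
       3.4, 3.3, 3.5, 3.2, 3.4, 3.3, 3.1, 3.4]
center, limits, signals = spc_signals(los, baseline_n=12)
```

With these numbers, the post-launch months trip both rules: several points fall below the lower control limit, and a run of 8 consecutive points below the center line accumulates — the kind of special-cause signal, temporally aligned with the intervention, that a single aggregate mean would obscure.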
We deeply appreciate the community’s support of our journal and the trust you place in us to ensure the publication of meaningful work. We hope you find Dr Ogrinc’s framework interesting and applicable to your own QI work. Please do not hesitate to reach out to us with questions about projects you are working on.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.