
The Development and Validation of a Quality Assessment and Rating of Technique for Injections of the Spine (AQUARIUS)
  1. Mark C. Bicket, MD*,
  2. Robert W. Hurley, MD, PhD,
  3. Jee Youn Moon, MD, PhD,
  4. Chad M. Brummett, MD§,
  5. Steve Hanling, MD,
  6. Marc A. Huntoon, MD,
  7. Jan van Zundert, MD, PhD, FIPP# and
  8. Steven P. Cohen, MD**
  1. *Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University School of Medicine, Baltimore, MD
  2. Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, WI
  3. Department of Anesthesiology and Pain Medicine, Seoul National University Hospital, Seoul, Republic of Korea
  4. §Division of Pain Medicine, Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, MI
  5. Pain Medicine Division, Department of Anesthesiology, Naval Medical Center-San Diego, CA
  6. Department of Anesthesiology, Vanderbilt University School of Medicine, Nashville, TN
  7. #Department of Anaesthesiology and Multidisciplinary Pain Center, Ziekenhuis Oost-Limburg, Genk, Belgium
  8. **Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University School of Medicine, Baltimore, MD
  1. Address correspondence to Mark C. Bicket, MD, Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University School of Medicine, 600 N Wolfe St, Baltimore, MD 21287 (e-mail: bicket{at}jhmi.edu).

Abstract

Background and Objectives Systematic reviews evaluate the utility of procedural interventions of the spine, including epidural steroid injections (ESIs). However, existing quality assessment tools either fail to account for proper technical quality and patient selection or are not validated. We developed and validated a simple scale for ESIs to provide a quality assessment and rating of technique for injections of the spine (AQUARIUS).

Methods Seven experts generated items iteratively based on prior ESI technique studies and professional judgment. Following testing for face and content validity, a 17-item instrument was used by 8 raters from 2 different backgrounds to assess 12 randomized controlled trials, selected from 3 different categories. Using frequency of assessment, a 12-item instrument was also generated. Both instruments underwent reliability (intraclass correlation coefficient), validity (ability to distinguish “low,” “random,” and “high” study categories), and diagnostic accuracy (receiver operating characteristics) testing.

Results Both 17- and 12-item instruments were scored consistently by raters regardless of background, with overall intraclass correlation coefficients of 0.72 (95% confidence interval [CI], 0.53–0.89) and 0.71 (95% CI, 0.51–0.89), respectively. Both instruments discriminated between clinical trials from all 3 categories. Diagnostic accuracy was similar for the 2 instruments, with areas under receiver operating characteristic curves of 0.89 (95% CI, 0.82–0.96) and 0.90 (95% CI, 0.82–0.97), respectively.

Conclusions The instrument in both 17- and 12-item formats demonstrates good reliability and diagnostic accuracy in rating ESI studies. As a complement to other tools that assess bias, the instrument may improve the ability to evaluate evidence for systematic reviews and improve clinical trial design.


Footnotes

  • This study was funded in part by the Center for Rehabilitation Sciences Research, Bethesda, MD. Otherwise, funds for this study were not provided directly by any entity.

    The authors declare no conflict of interest.

    Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.rapm.org).