A Valid and Reliable Assessment Tool for Remote Simulation-Based Ultrasound-Guided Regional Anesthesia
  1. David A. Burckett–St. Laurent, MBBS, FRCA*,
  2. Ahtsham U. Niazi, MBBS, FRCPC*,
  3. Melissa S. Cunningham, MSc,
  4. Melanie Jaeger, MD, FRCPC,
  5. Sherif Abbas, MD*,
  6. Jason McVicar, MD, FRCPC* and
  7. Vincent W. Chan, MD, FRCPC*
  1. *Department of Anesthesia and Pain Management, Toronto Western Hospital, University Health Network, Toronto, Ontario, Canada
  2. Temerty/Chang International Centre for Telesimulation and Innovation in Medical Education, Toronto Western Hospital, University Health Network, Toronto, Ontario, Canada
  3. Department of Anesthesiology and Perioperative Medicine, Queen’s University, Kingston, Ontario, Canada
  1. Address correspondence to: Ahtsham U. Niazi, MBBS, FRCPC, Department of Anesthesia and Pain Management, Toronto Western Hospital, University Health Network, 399 Bathurst St, Toronto, Ontario, Canada M5T 2S8 (e-mail: ahtsham.niazi@uhn.ca).

Abstract

Background and Objectives The purpose of this study was to establish construct and concurrent validity and interrater reliability of an assessment tool for ultrasound-guided regional anesthesia (UGRA) performance on a high-fidelity simulation model.

Methods Twenty participants were evaluated using a Checklist and a Global Rating Scale designed to assess any UGRA block. Each participant performed an ultrasound-guided supraclavicular brachial plexus block on both a patient and a simulator. Evaluations were completed in person by an expert and remotely by a blinded expert using video recordings. Using the number of blocks previously performed as an indication of expertise, participants were divided into Novice (n = 8) and Experienced (n = 12) groups. Construct validity was assessed by the tool’s ability to reliably discriminate between Novice and Experienced anesthetists, both on-site and remotely. Concurrent validity was established by comparing patient versus simulator scores. Finally, interrater reliability was determined by comparing the scores of on-site and off-site evaluators.

Results The Global Rating Scale differentiated Novice from Experienced anesthetists by both on-site and remote assessment, on both the patient and the simulation model. The Checklist was unable to discern the 2 groups on the simulation model when scored remotely and was only marginally significant when scored on-site.

Conclusions This is the first study to demonstrate the validity and reliability of a Global Rating Scale assessment tool for use in UGRA simulation training. Although the Checklist may require further refinement, the Global Rating Scale can be used for both remote and on-site assessment of UGRA skills.

Footnotes

  • The authors declare no conflict of interest.

    Work attributed to the Department of Anesthesia and Pain Management, Toronto Western Hospital, University Health Network, Toronto, Canada.

    This project was funded entirely by departmental funds.

    Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.rapm.org).
