Evaluation of a Task-Specific Checklist and Global Rating Scale for Ultrasound-Guided Regional Anesthesia
  1. Daniel M. Wong, MBBS, FANZCA*,
  2. Mathew J. Watson, BMSc,
  3. Roman Kluger, MBBS, FANZCA, MBiostat*,
  4. Alwin Chuan, MBBS, PGCertCU, FANZCA,
  5. Michael D. Herrick, MD§,
  6. Irene Ng, MBBS, FANZCA, GradDipClinResearch,
  7. Damian J. Castanelli, MBBS, MClinEd, FANZCA#,**,
  8. Lisa C. Lin, MBBS, FANZCA††,
  9. Andrew Lansdown, MBBS, FANZCA‡‡ and
  10. Michael J. Barrington, MBBS, FANZCA, PhD*,§§
  1. *St Vincent’s Hospital, Melbourne
  2. University of Newcastle, Newcastle
  3. Liverpool Hospital, Sydney, Australia
  4. §Geisel School of Medicine (Dartmouth), Lebanon, NH
  5. Royal Melbourne Hospital
  6. University of Melbourne
  7. #Monash Medical Centre, Melbourne
  8. **Department of Anaesthesia and Perioperative Medicine, Monash University
  9. ††Austin Health, Melbourne
  10. ‡‡Sydney Medical School, Sydney
  11. §§Melbourne Medical School, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia
  1. Address correspondence to: Daniel Wong, MBBS, FANZCA, Department of Anaesthesia, St Vincent’s Hospital, Melbourne, PO Box 2900, Fitzroy, Victoria 3065, Australia (e-mail: danielm.ywong{at}


Background and Objectives Checklists and global rating scales (GRSs) are used for assessment of anesthesia procedural skills. The purpose of this study was to evaluate the reliability and validity of a recently proposed assessment tool comprising a checklist and GRS specific for ultrasound-guided regional anesthesia.

Methods In this prospective, fully crossed study, we videotaped 30 single-target nerve block procedures performed by anesthesia trainees. Following pilot assessment and observer training, videos were assessed in random order by 6 blinded, expert observers. Interrater reliability was evaluated with intraclass correlation coefficients (ICCs) based on a 2-way random-effects model that took into account both agreement and correlation between observer results. Construct validity and feasibility were also evaluated.
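The Methods describe interrater reliability as an ICC from a two-way random-effects model that accounts for both agreement and correlation between observers. The abstract does not give the formula the authors used; as an illustrative sketch only, a standard single-rater, absolute-agreement ICC of this type (Shrout–Fleiss ICC(2,1)) can be computed from the two-way ANOVA mean squares as below. The function name and the toy data are assumptions for illustration, not taken from the study:

```python
import numpy as np

def icc_2way_random_agreement(scores):
    """Single-rater, absolute-agreement ICC(2,1) from a two-way
    random-effects ANOVA decomposition.

    scores: (n_subjects, k_raters) array of ratings, one row per
    assessed procedure and one column per observer.
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares and mean squares.
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((x - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # ICC(2,1): systematic rater differences (ms_cols) penalize the
    # coefficient, so it reflects absolute agreement, not just correlation.
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Toy example: a constant offset between two raters preserves perfect
# correlation but lowers the absolute-agreement ICC below 1.
print(icc_2way_random_agreement([[1, 2], [2, 3], [3, 4]]))
```

Because the offset between the two hypothetical raters enters the denominator through `ms_cols`, this toy example yields an ICC of 2/3 rather than 1, which is the "agreement, not just correlation" property the study relied on.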

Results The ICC between assessors’ total scores was 0.44 (95% confidence interval, 0.27–0.62). All 6 observers scored “experienced trainees” higher than “inexperienced trainees” (median total score 76.7 vs 54.2, P = 0.01), supporting the test’s construct validity. The median time to assess the videos was 4 minutes 29 seconds.

Conclusions This is the first study to evaluate the reliability and validity of a combined checklist and GRS for ultrasound-guided regional anesthesia using multiple observers. Accounting for both absolute agreement and correlation, the ICC for interrater reliability was 0.44, and there was evidence to support construct validity.



  • The authors declare no conflict of interest.

    This work is attributed to the Department of Anaesthesia, St Vincent’s Hospital, Melbourne.

    Funding: This work was funded by the Department of Anaesthesia, St Vincent’s Hospital, Melbourne. Financial support was also provided by the Australian and New Zealand College of Anaesthetists in the form of scholarship (10/023) and project (14/030) grants. This funding enabled development of the online interface for data entry.