Validity Evidence for STATUS To Assess Resident Tolerance for and Competence in Communicating Medical Ambiguity

Ariel S. Frey-Vogel, Harvard Medical School and Mass General for Children, Department of Pediatrics, 175 Cambridge St, Boston, MA 02114, USA. Electronic address: afrey@mgh.harvard.edu.
Kevin Ching, Weill Cornell Medicine and NewYork-Presbyterian, Departments of Emergency Medicine and Pediatrics, 525 East 68th St., New York, NY 10065, USA. Electronic address: kec9012@med.cornell.edu.
Michael G. Healy, Massachusetts General Hospital and Harvard Medical School, Department of Surgery, 55 Fruit St, Boston, MA 02114, USA. Electronic address: mghealy@mgh.harvard.edu.
Dandan Chen, Massachusetts General Hospital and Harvard Medical School, Department of Surgery, 55 Fruit St, Boston, MA 02114, USA. Electronic address: dchen43@mgh.harvard.edu.
Yoon Soo Park, University of Illinois College of Medicine, Department of Medical Education, 808 S. Wood St, MC 591, Chicago, IL 60612, USA. Electronic address: yspark2@uic.edu.
Emil Petrusa, Massachusetts General Hospital and Harvard Medical School, Department of Surgery, 55 Fruit St, Boston, MA 02114, USA. Electronic address: epetrusa@mgb.org.
Hadi B. Anwar, Pediatric Critical Care of Virginia, 5801 Bremo Rd, Richmond, VA 23226, USA. Electronic address: hadinanwar1@gmail.com.

Abstract

OBJECTIVE: No assessment instrument with validity evidence exists to assess resident competence in communicating medical ambiguity. Here, validity evidence was collected for STATUS (Scalable Tolerating Ambiguity/Uncertainty Tool Utilizing Simulation).

METHODS: Using avatar patients and two simulated cases, investigators created a guidebook and trained ten faculty in the use of STATUS. Pediatric residents completed two video-recorded simulated cases and self-assessed their tolerance for communicating medical ambiguity. Two faculty reviewed each video and assessed each participant's communication of medical ambiguity. Validity evidence was collected for content, response process, internal structure, and relationships to other variables. A generalizability theory analysis was conducted to evaluate the reliability of the assessment tools.

RESULTS: Of 89 eligible residents, 43 (48.3%) had sessions recorded, yielding 86 videos for analysis. Faculty rater training increased inter-rater reliability by 0.34 units. The Φ-coefficient was 0.72 for the resident self-assessment tool and 0.26 for the faculty rater assessment tool. The decision study indicated that with 11 faculty raters and 11 scenarios, the Φ-coefficient would be 0.70. Resident self-assessment was negatively associated with faculty rater assessment (Spearman correlation = -0.21 overall), indicating a possible weak negative correlation.

CONCLUSION: The results show sufficient reliability for measuring resident self-assessment of tolerance for communicating medical ambiguity. Additional scenarios would likely yield higher reliability for the faculty rater assessment.
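
For context, the Φ-coefficients and decision-study projection reported above follow the standard generalizability-theory decomposition. A minimal sketch, assuming a fully crossed persons × raters × scenarios design (the exact design is not stated in this abstract), where n_r and n_s denote the numbers of raters and scenarios and the σ² terms are the G-study variance components:

% Absolute (criterion-referenced) Phi-coefficient for an assumed
% fully crossed p x r x s design; variance components from the G-study
\Phi =
  \frac{\sigma^{2}_{p}}
       {\sigma^{2}_{p}
        + \frac{\sigma^{2}_{r}}{n_r}
        + \frac{\sigma^{2}_{s}}{n_s}
        + \frac{\sigma^{2}_{pr}}{n_r}
        + \frac{\sigma^{2}_{ps}}{n_s}
        + \frac{\sigma^{2}_{rs}}{n_r n_s}
        + \frac{\sigma^{2}_{prs,e}}{n_r n_s}}

A decision study re-evaluates this expression with alternative values of n_r and n_s (for example, 11 raters and 11 scenarios) to project the reliability a larger design would achieve, which is how increasing the number of scenarios would be expected to raise the Φ-coefficient for the faculty rater tool.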