Evaluating the fairness and accuracy of machine learning-based predictions of clinical outcomes after anatomic and reverse total shoulder arthroplasty

Document Type

Article

Publication Date

4-2024

Institution/Department

Orthopedics

Journal Title

Journal of Shoulder and Elbow Surgery

MeSH Headings

Humans; Middle Aged; Arthroplasty, Replacement, Shoulder (adverse effects); Shoulder Joint (surgery); Treatment Outcome; Retrospective Studies; Range of Motion, Articular

Abstract

BACKGROUND: Machine learning (ML)-based clinical decision support tools (CDSTs) make personalized predictions for different treatments; by comparing predictions across multiple treatments, these tools can be used to optimize decision making for a particular patient. However, CDST prediction accuracy varies across patients and across treatment options. If these differences are sufficiently large and consistent for a particular subcohort of patients, that bias may result in those patients not receiving a particular treatment. Such a level of bias would render the CDST "unfair." The purpose of this study was to evaluate the "fairness" of ML CDST-based predictions of clinical outcomes after anatomic (aTSA) and reverse total shoulder arthroplasty (rTSA) for patients of different demographic attributes.

METHODS: Clinical data from 8280 shoulder arthroplasty patients with 19,249 postoperative visits were used to evaluate prediction fairness and accuracy for the following patient demographic attributes: ethnicity, sex, and age at the time of surgery. Performance of clinical outcome and range of motion regression predictions was quantified by mean absolute error (MAE), and performance of minimal clinically important difference (MCID) and substantial clinical benefit classification predictions was quantified by accuracy, sensitivity, and the F1 score. Fairness of classification predictions leveraged the "four-fifths" legal guideline from the US Equal Employment Opportunity Commission, and fairness of regression predictions leveraged established MCID thresholds for each outcome measure.

RESULTS: For both aTSA and rTSA clinical outcome predictions, only minor differences in MAE were observed between patients of different ethnicity, sex, and age.
Evaluation of prediction fairness demonstrated that 0 of 486 MCID (0%) and only 3 of 486 substantial clinical benefit (0.6%) classification predictions were outside the 20% fairness boundary, and only 14 of 972 (1.4%) regression predictions were outside the MCID fairness boundary. Hispanic and Black patients were more likely to have ML predictions out of fairness tolerance for aTSA and rTSA. Additionally, patients […] elevation, internal rotation score, American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form score, or global shoulder function.

CONCLUSION: The ML algorithms analyzed in this study accurately predict clinical outcomes after aTSA and rTSA for patients of different ethnicity, sex, and age: only 1.4% of regression predictions and 0.3% of classification predictions were out of fairness tolerance using the proposed fairness evaluation method and acceptance criteria. Future work is required to externally validate these ML algorithms to ensure they are equally accurate for all legally protected patient groups.
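The "four-fifths" guideline referenced in the abstract can be sketched as a simple ratio test: a subgroup's rate falls within the 20% fairness boundary when it is between four-fifths and five-fourths of the reference group's rate. The following is a minimal illustrative sketch only; the group names and rates are hypothetical and are not taken from the study's data or code.

```python
# Illustrative sketch of a "four-fifths rule" fairness check.
# All group names and rate values below are hypothetical examples,
# not figures from the study.

def four_fifths_check(reference_rate: float, subgroup_rate: float,
                      tolerance: float = 0.8) -> bool:
    """Return True if subgroup_rate is within the four-fifths boundary
    of reference_rate (i.e., the ratio lies in [0.8, 1/0.8])."""
    if reference_rate == 0:
        return subgroup_rate == 0
    ratio = subgroup_rate / reference_rate
    return tolerance <= ratio <= 1 / tolerance

# Hypothetical predicted MCID-achievement rates by demographic group
rates = {"group_a": 0.70, "group_b": 0.62, "group_c": 0.50}
reference = rates["group_a"]
flags = {g: four_fifths_check(reference, r) for g, r in rates.items()}
# group_b passes (0.62 / 0.70 ≈ 0.89), group_c fails (0.50 / 0.70 ≈ 0.71)
```

In this sketch a prediction rate outside the [0.8, 1.25] ratio band would be counted as "out of fairness tolerance," mirroring the 20% fairness boundary described in the abstract.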

First Page

888

Last Page

899
