Analyzing Surgical Technique in Diverse Open Surgical Videos With Multitask Machine Learning

Objective: To overcome limitations of open surgery artificial intelligence (AI) models by curating the largest collection of annotated videos and to leverage this AI-ready data set to develop a generalizable multitask AI model capable of real-time understanding of clinically significant surgical behaviors in prospectively collected real-world surgical videos.

Design, Setting, and Participants: The study team programmatically queried open surgery procedures on YouTube and manually annotated selected videos to create the AI-ready data set used to train a multitask AI model for 2 proof-of-concept studies: one generating surgical signatures that define the patterns of a given procedure, and the other identifying kinematics of hand motion that correlate with surgeon skill level and experience. The Annotated Videos of Open Surgery (AVOS) data set includes 1997 videos of 23 open surgical procedure types, uploaded to YouTube from 50 countries over the last 15 years. Prospectively recorded surgical videos were collected from a single tertiary care academic medical center. Deidentified videos of surgeons performing open surgical procedures were recorded and analyzed for correlation with surgical training.

Exposures: The multitask AI model was trained on the AI-ready video data set and then retrospectively applied to the prospectively collected video data set.

Main Outcomes and Measures: Analysis of open surgical videos in near real-time, performance on AI-ready and prospectively collected videos, and quantification of surgeon skill.

Results: Using the AI-ready data set, the study team developed a multitask AI model capable of real-time understanding of surgical behaviors (the building blocks of procedural flow and surgeon skill) across space and time. Through principal component analysis, a single compound skill feature was identified, composed of a linear combination of kinematic hand attributes. This feature was a significant discriminator between experienced surgeons and surgical trainees across 101 prospectively collected surgical videos of 14 operators. For each unit increase in the compound feature value, the odds of the operator being an experienced surgeon were 3.6 times higher (95% CI, 1.67-7.62; P = .001).
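The reported odds ratio comes from a logistic regression on the compound skill feature: the coefficient on the feature, exponentiated, gives the multiplicative change in odds per unit increase. A minimal sketch of that interpretation, using only the study's reported odds ratio of 3.6 (the intercept and feature values below are hypothetical placeholders, not study data):

```python
import math

# Reported odds ratio per unit increase in the compound skill feature.
# In logistic regression, OR = exp(beta), so beta = ln(OR).
beta = math.log(3.6)

def odds_experienced(feature_value, intercept=0.0):
    """Odds that the operator is an experienced surgeon at a given
    compound-feature value (intercept is a hypothetical placeholder)."""
    return math.exp(intercept + beta * feature_value)

# A one-unit increase in the feature multiplies the odds by exp(beta),
# i.e. by the reported odds ratio of 3.6, regardless of the baseline.
ratio = odds_experienced(2.0) / odds_experienced(1.0)
print(round(ratio, 1))  # 3.6
```

Because the model is linear on the log-odds scale, the 3.6-fold change applies uniformly at any point along the feature axis, which is what makes a single compound feature a convenient summary of skill.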

Conclusions and Relevance: In this observational study, the AVOS-trained model was applied to analyze prospectively collected open surgical videos and identify kinematic descriptors of surgical skill related to efficiency of hand motion. The ability to provide AI-deduced insights into surgical structure and skill is valuable in optimizing surgical skill acquisition and ultimately improving surgical care.

Media type:

E-article

Publication year:

2024

Published:

2024


Contained in:

JAMA Surgery - 159 (2024), issue 2, 01 Feb., pages 185-192

Language:

English

Contributors:

Goodman, Emmett D [Author]
Patel, Krishna K [Author]
Zhang, Yilun [Author]
Locke, William [Author]
Kennedy, Chris J [Author]
Mehrotra, Rohan [Author]
Ren, Stephen [Author]
Guan, Melody [Author]
Zohar, Orr [Author]
Downing, Maren [Author]
Chen, Hao Wei [Author]
Clark, Jevin Z [Author]
Berrigan, Margaret T [Author]
Brat, Gabriel A [Author]
Yeung-Levy, Serena [Author]

Links:

Full text

Topics:

Journal Article
Observational Study

Notes:

Date Completed 15.02.2024

Date Revised 15.02.2024

published: Print

Citation Status MEDLINE

doi:

10.1001/jamasurg.2023.6262

Funding:

Funding institution / project title:

PPN (catalog ID):

NLM365461423