Transferring Knowledge From Text to Video: Zero-Shot Anticipation for Procedural Actions

Can we teach a robot to recognize and make predictions for activities it has never seen before? We tackle this problem by learning models for video from text. This paper presents a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers that knowledge to video. Given a portion of an instructional video, our model recognizes and predicts coherent, plausible actions multiple steps into the future, all in rich natural language. To demonstrate the capabilities of our model, we introduce the Tasty Videos Dataset V2, a collection of 4022 recipes for zero-shot learning, recognition, and anticipation. Extensive experiments with various evaluation metrics demonstrate the potential of our method to generalize, given only limited video data for training.

Media type:

E-article

Year of publication:

2023

Published:

2023

Contained in:

IEEE Transactions on Pattern Analysis and Machine Intelligence - 45 (2023), no. 6, June 1, pages 7836-7852

Language:

English

Contributors:

Sener, Fadime [Author]
Saraf, Rishabh [Author]
Yao, Angela [Author]

Links:

Full text

Topics:

Journal Article

Notes:

Date Completed 07.05.2023

Date Revised 07.05.2023

Published: Print-Electronic

Citation Status PubMed-not-MEDLINE

DOI:

10.1109/TPAMI.2022.3218596

PPN (catalog ID):

NLM348325746