Transferring Knowledge From Text to Video: Zero-Shot Anticipation for Procedural Actions
Can we teach a robot to recognize and make predictions for activities that it has never seen before? We tackle this problem by learning models for video from text. This paper presents a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to video. Given a portion of an instructional video, our model recognizes and predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the capabilities of our model, we introduce the Tasty Videos Dataset V2, a collection of 4022 recipes for zero-shot learning, recognition and anticipation. Extensive experiments with various evaluation metrics demonstrate the potential of our method for generalization, given limited video data for training models.
Media type: E-Article
Year of publication: 2023
Published: 2023
Contained in: Complete record - volume:45
Contained in: IEEE transactions on pattern analysis and machine intelligence - 45(2023), 6, 01 June, pages 7836-7852
Language: English
Contributors: Sener, Fadime [author]
Notes: Date Completed 07.05.2023; Date Revised 07.05.2023; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
DOI: 10.1109/TPAMI.2022.3218596
PPN (catalog ID): NLM348325746
LEADER  01000naa a22002652 4500
001     NLM348325746
003     DE-627
005     20231226035917.0
007     cr uuu---uuuuu
008     231226s2023 xx |||||o 00| ||eng c
024 7   |a 10.1109/TPAMI.2022.3218596 |2 doi
028 5 2 |a pubmed24n1161.xml
035     |a (DE-627)NLM348325746
035     |a (NLM)36318562
040     |a DE-627 |b ger |c DE-627 |e rakwb
041     |a eng
100 1   |a Sener, Fadime |e verfasserin |4 aut
245 1 0 |a Transferring Knowledge From Text to Video |b Zero-Shot Anticipation for Procedural Actions
264 1   |c 2023
336     |a Text |b txt |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
500     |a Date Completed 07.05.2023
500     |a Date Revised 07.05.2023
500     |a published: Print-Electronic
500     |a Citation Status PubMed-not-MEDLINE
520     |a Can we teach a robot to recognize and make predictions for activities that it has never seen before? We tackle this problem by learning models for video from text. This paper presents a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to video. Given a portion of an instructional video, our model recognizes and predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the capabilities of our model, we introduce the Tasty Videos Dataset V2, a collection of 4022 recipes for zero-shot learning, recognition and anticipation. Extensive experiments with various evaluation metrics demonstrate the potential of our method for generalization, given limited video data for training models
650 4   |a Journal Article
700 1   |a Saraf, Rishabh |e verfasserin |4 aut
700 1   |a Yao, Angela |e verfasserin |4 aut
773 0 8 |i Enthalten in |t IEEE transactions on pattern analysis and machine intelligence |d 1979 |g 45(2023), 6 vom: 01. Juni, Seite 7836-7852 |w (DE-627)NLM098212257 |x 1939-3539 |7 nnns
773 1 8 |g volume:45 |g year:2023 |g number:6 |g day:01 |g month:06 |g pages:7836-7852
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2022.3218596 |3 Volltext
912     |a GBV_USEFLAG_A
912     |a GBV_NLM
951     |a AR
952     |d 45 |j 2023 |e 6 |b 01 |c 06 |h 7836-7852