Multi-View Hand-Hygiene Recognition for Food Safety

A majority of foodborne illnesses result from inappropriate food handling practices. One proven practice to reduce pathogens is to perform effective hand-hygiene before all stages of food handling. In this paper, we design a multi-camera system that uses video analytics to recognize hand-hygiene actions, with the goal of improving hand-hygiene effectiveness. Our proposed two-stage system processes untrimmed video from both egocentric and third-person cameras. In the first stage, a low-cost coarse classifier efficiently localizes the hand-hygiene period; in the second stage, more complex refinement classifiers recognize seven specific actions within the hand-hygiene period. We demonstrate that our two-stage system has significantly lower computational requirements without a loss of recognition accuracy. Specifically, the computationally complex refinement classifiers process less than 68% of the untrimmed videos, and we anticipate further computational gains in videos that contain a larger fraction of non-hygiene actions. Our results demonstrate that a carefully designed video action recognition system can play an important role in improving hand hygiene for food safety.
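The two-stage design described in the abstract can be sketched as follows. This is a minimal illustrative simulation, not the authors' implementation: the frame format, score threshold, action labels, and both classifier stubs are hypothetical stand-ins for the paper's coarse localizer and refinement classifiers.

```python
# Illustrative two-stage pipeline sketch (hypothetical, not the paper's code):
# a low-cost coarse classifier gates an expensive refinement classifier so
# fine-grained recognition only runs on frames inside the detected
# hand-hygiene period, reducing total computation on untrimmed video.

def coarse_classifier(frame):
    """Stage 1: cheap per-frame check -- is this frame part of the
    hand-hygiene period? (stub; threshold 0.5 is an assumption)"""
    return frame.get("hygiene_score", 0.0) > 0.5

def refinement_classifier(frame):
    """Stage 2: expensive fine-grained action recognizer (stub that
    just reads a precomputed label from the frame dict)."""
    return frame.get("action", "unknown")

def two_stage(frames):
    """Run the refinement classifier only on frames passing the coarse gate.
    Returns the recognized actions and the fraction of frames that reached
    the expensive second stage."""
    refined = []
    processed = 0
    for frame in frames:
        if coarse_classifier(frame):              # stage 1: cheap gate
            refined.append(refinement_classifier(frame))  # stage 2: costly
            processed += 1
    return refined, processed / len(frames)

# Usage on a toy untrimmed sequence: only half the frames are hygiene frames,
# so the refinement stage processes only half of the video.
frames = [
    {"hygiene_score": 0.1},
    {"hygiene_score": 0.9, "action": "apply_soap"},
    {"hygiene_score": 0.8, "action": "rinse"},
    {"hygiene_score": 0.2},
]
actions, fraction = two_stage(frames)
```

The efficiency claim in the abstract (refinement runs on under 68% of the video) comes from exactly this kind of gating: the cheaper stage 1 bounds how much video the complex stage 2 must see.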

Media type:

E-article

Year of publication:

2020

Published:

2020

Contained in:

To the complete record - volume: 6

Contained in:

Journal of imaging - 6(2020), 11, dated: 7 Nov.

Language:

English

Contributors:

Zhong, Chengzhang [Author]
Reibman, Amy R [Author]
Mina, Hansel A [Author]
Deering, Amanda J [Author]

Links:

Full text

Subjects:

Activity recognition
Deep learning
Egocentric video
Journal Article
Temporal segmentation

Anmerkungen:

Date Revised 03.09.2021

published: Electronic

Citation Status PubMed-not-MEDLINE

doi:

10.3390/jimaging6110120

funding:

Funding institution / project title:

PPN (Katalog-ID):

NLM330033301