Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data

Over the past decade, deep learning has achieved unprecedented success across a wide range of application domains, given large-scale datasets. However, particular domains, such as healthcare, inherently suffer from data paucity and imbalance. Moreover, datasets can be largely inaccessible due to privacy concerns or a lack of data-sharing incentives. Such challenges have made generative modeling and data augmentation especially valuable in that domain. In this context, this study explores a machine learning-based approach for generating synthetic eye-tracking data, presenting a novel application of variational autoencoders (VAEs). More specifically, a VAE model is trained to generate image-based representations of the eye-tracking output, so-called scanpaths. Overall, our results validate that the VAE model can generate plausible output from a limited dataset. Finally, it is empirically demonstrated that such an approach can be employed as a data augmentation mechanism to improve performance in classification tasks.
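The approach summarized in the abstract can be illustrated with a brief sketch: a convolutional VAE is trained on scanpath images, and synthetic images are then obtained by sampling latent vectors from the prior and decoding them. The sketch below uses PyTorch purely for illustration; the input resolution (64x64 grayscale), latent dimensionality, and layer sizes are assumptions of this sketch and are not taken from the paper.

```python
# Minimal sketch of a convolutional VAE for image-based scanpaths.
# Assumptions (not from the paper): 1x64x64 grayscale inputs, 32-dim latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScanpathVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: 1x64x64 -> 32x32x32 -> 64x16x16 -> flattened features
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        # Decoder mirrors the encoder back to a 1x64x64 image in [0, 1]
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 16, 16)
        return self.dec(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term (per-pixel binary cross-entropy) plus the KL divergence
    # between the approximate posterior and the standard normal prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld


# Augmentation step: after training, sample latent vectors from the prior and
# decode them into synthetic scanpath images to enlarge the training set.
model = ScanpathVAE()
with torch.no_grad():
    z = torch.randn(16, 32)          # 16 samples from N(0, I)
    synthetic = model.decode(z)      # shape: (16, 1, 64, 64)
```

In an augmentation workflow along the lines the abstract describes, such decoded samples would be added to the training set of the downstream classifier alongside the real scanpath images.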

Media type:

E-article

Year of publication:

2021

Published:

2021

Contained in:

Journal of imaging - 7(2021), 5, dated: 03 May

Language:

English

Contributors:

Elbattah, Mahmoud [Author]
Loughnane, Colm [Author]
Guérin, Jean-Luc [Author]
Carette, Romuald [Author]
Cilia, Federica [Author]
Dequen, Gilles [Author]

Links:

Full text

Subjects:

Data augmentation
Deep learning
Eye-tracking
Journal Article
Variational autoencoder

Notes:

Date Revised 03.09.2021

published: Electronic

Citation Status PubMed-not-MEDLINE

DOI:

10.3390/jimaging7050083

Funding:

Funding institution / project title:

PPN (catalog ID):

NLM330034456