Unsupervised Learning of Disentangled Representation via Auto-Encoding: A Survey

In recent years, the rapid development of deep learning approaches has paved the way to explore the underlying factors that explain the data. In particular, several methods have been proposed to learn to identify and disentangle these underlying explanatory factors in order to improve the learning process and model generalization. However, extracting this representation with little or no supervision remains a key challenge in machine learning. In this paper, we provide a theoretical outlook on recent advances in the field of unsupervised representation learning, with a focus on auto-encoding-based approaches and on the most well-known supervised disentanglement metrics. We cover the current state-of-the-art methods for learning disentangled representations in an unsupervised manner, pointing out the connection between each method and its added value for disentanglement. Further, we discuss how to quantify disentanglement and present an in-depth analysis of the associated metrics. We conclude by carrying out a comparative evaluation of these metrics according to three criteria: (i) modularity, (ii) compactness, and (iii) informativeness. Finally, we show that only the Mutual Information Gap score (MIG) meets all three criteria.
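Since the abstract singles out the Mutual Information Gap (MIG), a minimal sketch of how that score is typically computed may help: for each ground-truth factor, take the difference between the two largest mutual-information values across latent dimensions, normalized by the factor's entropy, and average over factors. This is a generic NumPy illustration for discretized codes, not the authors' implementation; the function names and the discrete-MI estimator are assumptions.

```python
import numpy as np

def discrete_mutual_info(x, y):
    """I(x; y) in nats for two discrete 1-D arrays, via the joint histogram."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)          # contingency table of co-occurrences
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal over x
    py = pxy.sum(axis=0, keepdims=True)    # marginal over y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def entropy(x):
    """H(x) in nats for a discrete 1-D array."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def mig(latents, factors):
    """MIG: mean over factors of the normalized gap between the two
    latent dimensions most informative about that factor.
    latents: (N, D) discretized codes; factors: (N, K) ground-truth factors."""
    scores = []
    for k in range(factors.shape[1]):
        mis = sorted((discrete_mutual_info(latents[:, j], factors[:, k])
                      for j in range(latents.shape[1])), reverse=True)
        scores.append((mis[0] - mis[1]) / entropy(factors[:, k]))
    return float(np.mean(scores))
```

A perfectly disentangled code (one latent captures a factor, the others are independent of it) scores near 1; a code where two latents duplicate the same factor has a zero gap and scores 0, which is why MIG rewards compactness as well as informativeness.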

Media type:

E-article

Year of publication:

2023

Published:

2023

Contained in:

Link to complete record - volume:23

Sensors - 23(2023), 4, p 2362

Language:

English

Contributors:

Ikram Eddahmani [Author]
Chi-Hieu Pham [Author]
Thibault Napoléon [Author]
Isabelle Badoc [Author]
Jean-Rassaire Fouefack [Author]
Marwa El-Bouz [Author]

Links:

doi.org [free access]
doaj.org [free access]
www.mdpi.com [free access]
Journal toc [free access]

Topics:

Auto-encoder
Chemical technology
Disentanglement
Generative models
Metrics
Neural networks
Representation learning

doi:

10.3390/s23042362

Funding:

Funding institution / project title:

PPN (catalog ID):

DOAJ079977286