Mitigating Biases with Diverse Ensembles and Diffusion Models

Spurious correlations in the data, where multiple cues are predictive of the target labels, often lead to a phenomenon known as shortcut bias, where a model relies on erroneous, easy-to-learn cues while ignoring reliable ones. In this work, we propose an ensemble diversification framework exploiting Diffusion Probabilistic Models (DPMs) for shortcut bias mitigation. We show that at particular training intervals, DPMs can generate images with novel feature combinations, even when trained on samples displaying correlated input features. We leverage this crucial property to generate synthetic counterfactuals that increase model diversity via ensemble disagreement. We show that DPM-guided diversification is sufficient to remove dependence on primary shortcut cues, without the need for additional supervised signals. We further empirically quantify its efficacy on several diversification objectives, and finally show improved generalization and diversification performance on par with prior work that relies on auxiliary data collection.
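The abstract describes diversifying an ensemble by encouraging its members to disagree on diffusion-generated counterfactual samples. The following is a minimal illustrative sketch of one such disagreement measure (mean pairwise L1 distance between members' predictive distributions); the function name and the choice of L1 are assumptions for illustration, not the paper's actual objective or code.

```python
import numpy as np

def pairwise_disagreement(probs):
    """Mean pairwise L1 distance between ensemble members' predictive
    distributions on a batch of samples (e.g. diffusion-generated
    counterfactuals). Maximizing this quantity pushes members apart.

    probs: array of shape (n_members, n_samples, n_classes),
           each probs[i, s] a probability distribution over classes.
    """
    n_members = probs.shape[0]
    total, pairs = 0.0, 0
    for i in range(n_members):
        for j in range(i + 1, n_members):
            # L1 distance per sample, averaged over the batch.
            total += np.abs(probs[i] - probs[j]).sum(axis=-1).mean()
            pairs += 1
    return total / pairs
```

Identical members yield a disagreement of 0; two members placing all mass on different classes yield the maximum value of 2.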

Media type:

Preprint

Year of publication:

2023

Published:

2023

Contained in:

arXiv.org - (2023), dated 23 Nov.

Language:

English

Contributors:

Scimeca, Luca [Author]
Rubinstein, Alexander [Author]
Teney, Damien [Author]
Oh, Seong Joon [Author]
Nicolicioiu, Armand Mihai [Author]
Bengio, Yoshua [Author]

Links:

Full text [free of charge]

Topics:

000
Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Machine Learning

Funding institution / project title:

PPN (catalog ID):

XCH042760747