Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation
Copyright © 2023 Elsevier Ltd. All rights reserved.
Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information is still challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property to explicitly mitigate texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train improved classification or segmentation models robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases.
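The two losses named in the abstract can be illustrated with simple feature-map statistics. The sketch below is not the paper's implementation: the function names are invented here, the Gram matrix is used as a common stand-in for a texture co-occurrence statistic, and the cosine-similarity matrix over spatial locations is one standard way to realize spatial self-similarity.

```python
import numpy as np

def self_similarity(feat):
    """Cosine-similarity matrix between all spatial locations of a (C, H, W) feature map."""
    c, h, w = feat.shape
    v = feat.reshape(c, h * w).T                      # (HW, C): one vector per location
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-8)
    return v @ v.T                                    # (HW, HW)

def spatial_self_similarity_loss(feat_src, feat_gen):
    """Content preservation: the *pattern* of pairwise similarities should match,
    which tolerates per-location appearance (texture) changes."""
    return np.abs(self_similarity(feat_src) - self_similarity(feat_gen)).mean()

def gram(feat):
    """Channel co-occurrence statistic (Gram matrix) of a (C, H, W) feature map."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)
    return (f @ f.T) / f.shape[1]                     # (C, C)

def texture_cooccurrence_loss(feat_tgt, feat_gen):
    """Texture similarity: generated features should reproduce target texture statistics."""
    return np.abs(gram(feat_tgt) - gram(feat_gen)).mean()
```

Both losses are zero when the compared feature maps are identical and grow as textures or spatial layouts diverge; in a translation framework of this kind they would pull the generated image toward the target's texture statistics while anchoring its spatial structure to the source.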
Media type: E-Article
Year of publication: 2023
Published: 2023
Contained in: Complete record - volume:166
Contained in: Neural networks : the official journal of the International Neural Network Society - 166(2023), 15 Sept., pages 722-737
Language: English
Contributors: Kang, Myeongkyun [author]
Topics: Debiasing
Notes: Date Completed 11.09.2023. Date Revised 11.09.2023. Published: Print-Electronic. Citation Status: PubMed-not-MEDLINE
DOI: 10.1016/j.neunet.2023.07.049
PPN (catalog ID): NLM361059280
LEADER 01000naa a22002652 4500
001 NLM361059280
003 DE-627
005 20231226084337.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7_ |a 10.1016/j.neunet.2023.07.049 |2 doi
028 52 |a pubmed24n1203.xml
035 __ |a (DE-627)NLM361059280
035 __ |a (NLM)37607423
035 __ |a (PII)S0893-6080(23)00407-0
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
100 1_ |a Kang, Myeongkyun |e verfasserin |4 aut
245 10 |a Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation
264 _1 |c 2023
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
500 __ |a Date Completed 11.09.2023
500 __ |a Date Revised 11.09.2023
500 __ |a published: Print-Electronic
500 __ |a Citation Status PubMed-not-MEDLINE
520 __ |a Copyright © 2023 Elsevier Ltd. All rights reserved.
520 __ |a Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information is still challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property to explicitly mitigate texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train improved classification or segmentation models robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases.
650 _4 |a Journal Article
650 _4 |a Debiasing
650 _4 |a Self-similarity
650 _4 |a Texture co-occurrence
650 _4 |a Unpaired image translation
650 _4 |a Unsupervised domain adaptation
700 1_ |a Won, Dongkyu |e verfasserin |4 aut
700 1_ |a Luna, Miguel |e verfasserin |4 aut
700 1_ |a Chikontwe, Philip |e verfasserin |4 aut
700 1_ |a Hong, Kyung Soo |e verfasserin |4 aut
700 1_ |a Ahn, June Hong |e verfasserin |4 aut
700 1_ |a Park, Sang Hyun |e verfasserin |4 aut
773 08 |i Enthalten in |t Neural networks : the official journal of the International Neural Network Society |d 1996 |g 166(2023) vom: 15. Sept., Seite 722-737 |w (DE-627)NLM087746824 |x 1879-2782 |7 nnns
773 18 |g volume:166 |g year:2023 |g day:15 |g month:09 |g pages:722-737
856 40 |u http://dx.doi.org/10.1016/j.neunet.2023.07.049 |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a GBV_NLM
951 __ |a AR
952 __ |d 166 |j 2023 |b 15 |c 09 |h 722-737