Exploring Robust Features for Improving Adversarial Robustness
While deep neural networks (DNNs) have revolutionized many fields, their fragility to carefully designed adversarial attacks impedes the usage of DNNs in safety-critical applications. In this article, we strive to explore the robust features that are not affected by the adversarial perturbations, that is, invariant to the clean image and its adversarial examples (AEs), to improve the model's adversarial robustness. Specifically, we propose a feature disentanglement model to segregate the robust features from nonrobust features and domain-specific features. The extensive experiments on five widely used datasets with different attacks demonstrate that robust features obtained from our model improve the model's adversarial robustness compared to the state-of-the-art approaches. Moreover, the trained domain discriminator is able to identify the domain-specific features from the clean images and AEs almost perfectly. This enables AE detection without incurring additional computational costs. With that, we can also specify different classifiers for clean images and AEs, thereby avoiding any drop in clean image accuracy.
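The disentanglement pipeline described in the abstract can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the linear encoder, the three-way split of the latent vector, the logistic domain discriminator, and all dimensions are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy encoder: linear map + ReLU, a stand-in for a DNN backbone."""
    return np.maximum(W @ x, 0.0)

def disentangle(z, d_rob, d_nonrob):
    """Split the latent vector into robust, nonrobust, and domain-specific parts."""
    return z[:d_rob], z[d_rob:d_rob + d_nonrob], z[d_rob + d_nonrob:]

def domain_discriminator(z_dom, w, b):
    """Logistic head predicting clean (0) vs. adversarial (1) from the
    domain-specific features only."""
    return 1.0 / (1.0 + np.exp(-(w @ z_dom + b)))

# Toy dimensions: 8-D input -> 12-D latent = 4 robust + 4 nonrobust + 4 domain-specific.
W = rng.standard_normal((12, 8))
x_clean = rng.standard_normal(8)
x_adv = x_clean + 0.1 * rng.standard_normal(8)  # small perturbation as a stand-in AE

z_rob_c, z_non_c, z_dom_c = disentangle(encoder(x_clean, W), 4, 4)
z_rob_a, z_non_a, z_dom_a = disentangle(encoder(x_adv, W), 4, 4)

w_disc = rng.standard_normal(4)
p_adv = domain_discriminator(z_dom_a, w_disc, 0.0)  # probability the input is an AE
```

In the paper's setup, a downstream classifier would consume only the robust part (`z_rob_*`), while the discriminator's verdict on `z_dom_*` routes clean images and AEs to separate classifiers; the hypothetical training losses that enforce the split are omitted here.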
| Field | Value |
|---|---|
| Media type | E-article |
| Year of publication | 2024 |
| Published | 2024 |
| Contained in | See full record - volume:PP |
| Contained in | IEEE transactions on cybernetics - PP(2024), 09 Apr. |
| Language | English |
| Contributors | Wang, Hong [author] |
| Notes | Date Revised 09.04.2024; published: Print-Electronic; Citation Status Publisher |
| DOI | 10.1109/TCYB.2024.3380437 |
| PPN (catalog ID) | NLM370823591 |
| Tag | Ind. | Content |
|---|---|---|
| LEADER | | 01000naa a22002652 4500 |
| 001 | | NLM370823591 |
| 003 | | DE-627 |
| 005 | | 20240410233149.0 |
| 007 | | cr uuu---uuuuu |
| 008 | | 240410s2024 xx \|\|\|\|\|o 00\| \|\|eng c |
| 024 | 7 | \|a 10.1109/TCYB.2024.3380437 \|2 doi |
| 028 | 5 2 | \|a pubmed24n1371.xml |
| 035 | | \|a (DE-627)NLM370823591 |
| 035 | | \|a (NLM)38593009 |
| 040 | | \|a DE-627 \|b ger \|c DE-627 \|e rakwb |
| 041 | | \|a eng |
| 100 | 1 | \|a Wang, Hong \|e verfasserin \|4 aut |
| 245 | 1 0 | \|a Exploring Robust Features for Improving Adversarial Robustness |
| 264 | 1 | \|c 2024 |
| 336 | | \|a Text \|b txt \|2 rdacontent |
| 337 | | \|a Computermedien \|b c \|2 rdamedia |
| 338 | | \|a Online-Ressource \|b cr \|2 rdacarrier |
| 500 | | \|a Date Revised 09.04.2024 |
| 500 | | \|a published: Print-Electronic |
| 500 | | \|a Citation Status Publisher |
| 520 | | \|a While deep neural networks (DNNs) have revolutionized many fields, their fragility to carefully designed adversarial attacks impedes the usage of DNNs in safety-critical applications. In this article, we strive to explore the robust features that are not affected by the adversarial perturbations, that is, invariant to the clean image and its adversarial examples (AEs), to improve the model's adversarial robustness. Specifically, we propose a feature disentanglement model to segregate the robust features from nonrobust features and domain-specific features. The extensive experiments on five widely used datasets with different attacks demonstrate that robust features obtained from our model improve the model's adversarial robustness compared to the state-of-the-art approaches. Moreover, the trained domain discriminator is able to identify the domain-specific features from the clean images and AEs almost perfectly. This enables AE detection without incurring additional computational costs. With that, we can also specify different classifiers for clean images and AEs, thereby avoiding any drop in clean image accuracy |
| 650 | 4 | \|a Journal Article |
| 700 | 1 | \|a Deng, Yuefan \|e verfasserin \|4 aut |
| 700 | 1 | \|a Yoo, Shinjae \|e verfasserin \|4 aut |
| 700 | 1 | \|a Lin, Yuewei \|e verfasserin \|4 aut |
| 773 | 0 8 | \|i Enthalten in \|t IEEE transactions on cybernetics \|d 2013 \|g PP(2024) vom: 09. Apr. \|w (DE-627)NLM218340567 \|x 2168-2275 \|7 nnns |
| 773 | 1 8 | \|g volume:PP \|g year:2024 \|g day:09 \|g month:04 |
| 856 | 4 0 | \|u http://dx.doi.org/10.1109/TCYB.2024.3380437 \|3 Volltext |
| 912 | | \|a GBV_USEFLAG_A |
| 912 | | \|a GBV_NLM |
| 951 | | \|a AR |
| 952 | | \|d PP \|j 2024 \|b 09 \|c 04 |