SMGEA : A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories
Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of source models transfer to other target models and, thus, pose a security threat to black-box applications (when attackers have no access to the target models). Current transfer-based ensemble attacks, however, only consider a limited number of source models to craft an adversarial example and, thus, suffer from poor transferability. In addition, recent query-based black-box attacks, which require numerous queries to the target model, not only risk detection by the target model but also incur expensive query costs. In this article, we propose a novel transfer-based black-box attack, dubbed serial-minigroup-ensemble-attack (SMGEA). Concretely, SMGEA first divides a large number of pretrained white-box source models into several "minigroups." For each minigroup, we design three new ensemble strategies to improve the intragroup transferability. Moreover, we propose a new algorithm that recursively accumulates the "long-term" gradient memories of the previous minigroup into the subsequent minigroup. In this way, the learned adversarial information is preserved and the intergroup transferability is improved. Experiments indicate that SMGEA not only achieves state-of-the-art black-box attack ability on several data sets but also deceives two online black-box saliency prediction systems in the real world, i.e., DeepGaze-II (https://deepgaze.bethgelab.org/) and SALICON (http://salicon.net/demo/). Finally, we contribute a new code repository to promote research on adversarial attack and defense for ubiquitous pixel-to-pixel computer vision tasks. We share our code together with the pretrained substitute model zoo at https://github.com/CZHQuality/AAA-Pix2pix.
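The abstract outlines the core mechanism: the white-box source models are split into minigroups, each minigroup is attacked as an ensemble, and accumulated gradient "memories" are carried from one minigroup to the next. Below is a minimal, illustrative sketch of that serial-minigroup idea in PyTorch; the function name, the averaged-loss ensemble, the momentum-style memory update, and all hyperparameters are assumptions made for illustration and are not taken from the paper's actual SMGEA implementation.

```python
import torch
import torch.nn.functional as F


def serial_minigroup_attack(x, y, source_models, group_size=2,
                            eps=8 / 255, alpha=2 / 255, steps=10, decay=1.0):
    """Hypothetical sketch: attack white-box source models in serial minigroups,
    carrying an accumulated gradient "memory" from one minigroup to the next."""
    x_adv = x.clone().detach()
    memory = torch.zeros_like(x)  # long-term gradient memory shared across minigroups
    groups = [source_models[i:i + group_size]
              for i in range(0, len(source_models), group_size)]
    for group in groups:
        for _ in range(steps):
            x_adv.requires_grad_(True)
            # Simple intragroup ensemble: average the losses of the current minigroup.
            loss = sum(F.cross_entropy(m(x_adv), y) for m in group) / len(group)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Momentum-style accumulation: memory from earlier minigroups keeps
            # steering the perturbation while the current minigroup refines it.
            memory = decay * memory + grad / grad.abs().mean().clamp_min(1e-12)
            x_adv = x_adv.detach() + alpha * memory.sign()
            # Project back into the epsilon-ball around x and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Under these assumptions, `source_models` would be a list of pretrained classifiers in eval mode, and the returned `x_adv` would then be evaluated against an unseen black-box target model to measure transferability.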
Media type: E-Article
Year of publication: 2022
Published: 2022
Contained in: To the complete record - volume:33
Contained in: IEEE transactions on neural networks and learning systems - 33(2022), 3, 01 March, pages 1051-1065
Language: English
Contributors: Che, Zhaohui [Author]
Links:
Topics:
Notes: Date Completed 05.05.2022; Date Revised 05.05.2022; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1109/TNNLS.2020.3039295
Funding:
Funding institution / project title:
PPN (catalog ID): NLM318617889
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | NLM318617889 | ||
003 | DE-627 | ||
005 | 20231225165728.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231225s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1109/TNNLS.2020.3039295 |2 doi | |
028 | 5 | 2 | |a pubmed24n1062.xml |
035 | |a (DE-627)NLM318617889 | ||
035 | |a (NLM)33296311 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Che, Zhaohui |e verfasserin |4 aut | |
245 | 1 | 0 | |a SMGEA |b A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories |
264 | 1 | |c 2022 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Date Completed 05.05.2022 | ||
500 | |a Date Revised 05.05.2022 | ||
500 | |a published: Print-Electronic | ||
500 | |a Citation Status MEDLINE | ||
520 | |a Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of source models transfer to other target models and, thus, pose a security threat to black-box applications (when attackers have no access to the target models). Current transfer-based ensemble attacks, however, only consider a limited number of source models to craft an adversarial example and, thus, suffer from poor transferability. In addition, recent query-based black-box attacks, which require numerous queries to the target model, not only risk detection by the target model but also incur expensive query costs. In this article, we propose a novel transfer-based black-box attack, dubbed serial-minigroup-ensemble-attack (SMGEA). Concretely, SMGEA first divides a large number of pretrained white-box source models into several "minigroups." For each minigroup, we design three new ensemble strategies to improve the intragroup transferability. Moreover, we propose a new algorithm that recursively accumulates the "long-term" gradient memories of the previous minigroup into the subsequent minigroup. In this way, the learned adversarial information is preserved and the intergroup transferability is improved. Experiments indicate that SMGEA not only achieves state-of-the-art black-box attack ability on several data sets but also deceives two online black-box saliency prediction systems in the real world, i.e., DeepGaze-II (https://deepgaze.bethgelab.org/) and SALICON (http://salicon.net/demo/). Finally, we contribute a new code repository to promote research on adversarial attack and defense for ubiquitous pixel-to-pixel computer vision tasks. We share our code together with the pretrained substitute model zoo at https://github.com/CZHQuality/AAA-Pix2pix | ||
650 | 4 | |a Journal Article | |
650 | 4 | |a Research Support, Non-U.S. Gov't | |
700 | 1 | |a Borji, Ali |e verfasserin |4 aut | |
700 | 1 | |a Zhai, Guangtao |e verfasserin |4 aut | |
700 | 1 | |a Ling, Suiyi |e verfasserin |4 aut | |
700 | 1 | |a Li, Jing |e verfasserin |4 aut | |
700 | 1 | |a Min, Xiongkuo |e verfasserin |4 aut | |
700 | 1 | |a Guo, Guodong |e verfasserin |4 aut | |
700 | 1 | |a Le Callet, Patrick |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t IEEE transactions on neural networks and learning systems |d 2012 |g 33(2022), 3 vom: 01. März, Seite 1051-1065 |w (DE-627)NLM23236897X |x 2162-2388 |7 nnns |
773 | 1 | 8 | |g volume:33 |g year:2022 |g number:3 |g day:01 |g month:03 |g pages:1051-1065 |
856 | 4 | 0 | |u http://dx.doi.org/10.1109/TNNLS.2020.3039295 |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a GBV_NLM | ||
951 | |a AR | ||
952 | |d 33 |j 2022 |e 3 |b 01 |c 03 |h 1051-1065 |