Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences
Copyright © 2020 Elsevier Ltd. All rights reserved.
Due to their unprecedented capacity to learn patterns from raw data, deep neural networks have become the de facto modeling choice for complex machine learning tasks. However, recent works have emphasized the vulnerability of deep neural networks when fed intelligently manipulated adversarial instances tailored to confuse the model. To overcome this issue, a major effort has been devoted to methods capable of making deep learning models robust against adversarial inputs. This work presents a new perspective for improving the robustness of deep neural networks in image classification. In computer vision scenarios, adversarial images are crafted by manipulating legitimate inputs so that the target classifier is eventually fooled, yet the manipulation is not visually distinguishable by an external observer. The attack is imperceptible because the human visual system fails to detect minor variations in color space but excels at detecting anomalies in geometric shapes. We capitalize on this fact by extracting color gradient features from input images at multiple sensitivity levels to detect possible manipulations. A deep neural classifier predicts the category of unseen images, while a discrimination model analyzes the extracted color gradient features with time series techniques to determine the legitimacy of input images. The performance of our method is assessed in experiments comprising state-of-the-art techniques for crafting adversarial attacks. Results corroborate the increased robustness of the classifier when using our discrimination module, yielding drastically reduced success rates for adversarial attacks that operate on the whole image rather than on localized regions or around the existing shapes of the image. Future research is outlined towards improving the detection accuracy of the proposed method for more general attack strategies.
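The detection idea in the abstract — summarize an image as a sequence of edge counts at several sensitivity levels, then compare sequences with an elastic similarity measure — can be sketched in a few lines. The paper's exact feature extractor and elastic measure are not reproduced here; this illustrative sketch assumes a simple gradient-magnitude edge detector, a sweep of thresholds as the "sensitivity levels", and plain dynamic time warping (a common elastic similarity measure), so it shows the shape of the pipeline rather than the authors' implementation.

```python
import numpy as np

def edge_count_sequence(img, thresholds):
    """Count edge pixels at several sensitivity levels.

    img: 2-D grayscale array with values in [0, 1].
    thresholds: increasing gradient-magnitude thresholds.
    Returns one edge-pixel count per threshold (the "edge count sequence").
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)          # gradient magnitude per pixel
    return np.array([(mag > t).sum() for t in thresholds])

def dtw_distance(a, b):
    """Plain dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy check: an additive perturbation that is hard to see in color space
# still shifts the edge-count sequence, which is what the detector exploits.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0             # one sharp square: few, strong edges
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)

levels = np.linspace(0.05, 0.5, 8)  # the "sensitivity levels"
s_clean = edge_count_sequence(clean, levels)
s_noisy = edge_count_sequence(noisy, levels)
d = dtw_distance(s_clean, s_noisy)  # large distance suggests manipulation
```

In a full detector, `d` would be compared against distances to legitimate reference sequences (or fed to a classifier) to decide whether the input is adversarial; the threshold sweep makes low-amplitude, image-wide noise visible as a surplus of weak edges at the most sensitive levels.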
Media type: E-article
Year of publication: 2020
Published: 2020
Contained in: Complete record - volume:128
Contained in: Neural networks : the official journal of the International Neural Network Society - 128(2020), 01 Aug., pages 61-72
Language: English
Contributors: Oregi, Izaskun [author]
Subjects: Adversarial machine learning
Notes: Date Completed 26.10.2020; Date Revised 26.10.2020; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1016/j.neunet.2020.04.030
PPN (catalog ID): NLM310248663
LEADER 01000naa a22002652 4500
001 NLM310248663
003 DE-627
005 20231225135702.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1016/j.neunet.2020.04.030 |2 doi
028 5 2 |a pubmed24n1034.xml
035 |a (DE-627)NLM310248663
035 |a (NLM)32442627
035 |a (PII)S0893-6080(20)30158-1
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 1 |a Oregi, Izaskun |e verfasserin |4 aut
245 1 0 |a Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences
264 1 |c 2020
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
500 |a Date Completed 26.10.2020
500 |a Date Revised 26.10.2020
500 |a published: Print-Electronic
500 |a Citation Status MEDLINE
520 |a Copyright © 2020 Elsevier Ltd. All rights reserved.
520 |a Due to their unprecedented capacity to learn patterns from raw data, deep neural networks have become the de facto modeling choice to address complex machine learning tasks. However, recent works have emphasized the vulnerability of deep neural networks when being fed with intelligently manipulated adversarial data instances tailored to confuse the model. In order to overcome this issue, a major effort has been made to find methods capable of making deep learning models robust against adversarial inputs. This work presents a new perspective for improving the robustness of deep neural networks in image classification. In computer vision scenarios, adversarial images are crafted by manipulating legitimate inputs so that the target classifier is eventually fooled, but the manipulation is not visually distinguishable by an external observer. The reason for the imperceptibility of the attack is that the human visual system fails to detect minor variations in color space, but excels at detecting anomalies in geometric shapes. We capitalize on this fact by extracting color gradient features from input images at multiple sensitivity levels to detect possible manipulations. We resort to a deep neural classifier to predict the category of unseen images, whereas a discrimination model analyzes the extracted color gradient features with time series techniques to determine the legitimacy of input images. The performance of our method is assessed over experiments comprising state-of-the-art techniques for crafting adversarial attacks. Results corroborate the increased robustness of the classifier when using our discrimination module, yielding drastically reduced success rates of adversarial attacks that operate on the whole image rather than on localized regions or around the existing shapes of the image. Future research is outlined towards improving the detection accuracy of the proposed method for more general attack strategies.
650 4 |a Journal Article
650 4 |a Adversarial machine learning
650 4 |a Computer vision
650 4 |a Deep neural networks
650 4 |a Time series analysis
700 1 |a Del Ser, Javier |e verfasserin |4 aut
700 1 |a Pérez, Aritz |e verfasserin |4 aut
700 1 |a Lozano, José A |e verfasserin |4 aut
773 0 8 |i Enthalten in |t Neural networks : the official journal of the International Neural Network Society |d 1996 |g 128(2020) vom: 01. Aug., Seite 61-72 |w (DE-627)NLM087746824 |x 1879-2782 |7 nnns
773 1 8 |g volume:128 |g year:2020 |g day:01 |g month:08 |g pages:61-72
856 4 0 |u http://dx.doi.org/10.1016/j.neunet.2020.04.030 |3 Volltext
912 |a GBV_USEFLAG_A
912 |a GBV_NLM
951 |a AR
952 |d 128 |j 2020 |b 01 |c 08 |h 61-72