Experimental assessment of adversarial attacks on deep neural networks in medical image recognition
This paper addresses how the success rate of adversarial attacks on deep neural networks depends on the biomedical image type and on the control parameters used to generate adversarial examples. With this work we aim to contribute towards the accumulation of experimental results on a...
Main Authors: | D. M. Voynov, V. A. Kovalev |
---|---|
Format: | Article |
Language: | Russian |
Published: | National Academy of Sciences of Belarus, the United Institute of Informatics Problems, 2019-09-01 |
Series: | Informatika |
Subjects: | adversarial attacks; deep learning; security of neural networks; chest X-ray images; histology images |
Online Access: | https://inf.grid.by/jour/article/view/876 |
_version_ | 1832543166475010048 |
---|---|
author | D. M. Voynov; V. A. Kovalev |
author_facet | D. M. Voynov; V. A. Kovalev |
author_sort | D. M. Voynov |
collection | DOAJ |
description | This paper addresses how the success rate of adversarial attacks on deep neural networks depends on the biomedical image type and on the control parameters used to generate adversarial examples. With this work we aim to contribute towards the accumulation of experimental results on adversarial attacks for the community dealing with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing more than 900,000 chest X-ray and histology images of malignant tumors. Increasing the amplitude and the number of iterations of the adversarial perturbations when generating malicious adversarial images increases the fraction of successful attacks for the majority of image types examined in this study. Histology images tend to be less sensitive to the growth of the perturbation amplitude. It was found that the success of attacks dropped dramatically when the original confidence of predicting the image class exceeded 0.95 (see the PGD sketch below). |
format | Article |
id | doaj-art-95f7efc0fd6c46b58a72e84bb3deb8a1 |
institution | Kabale University |
issn | 1816-0301 |
language | Russian |
publishDate | 2019-09-01 |
publisher | National Academy of Sciences of Belarus, the United Institute of Informatics Problems |
record_format | Article |
series | Informatika |
spelling | doaj-art-95f7efc0fd6c46b58a72e84bb3deb8a1 | 2025-02-03T11:51:49Z | rus | National Academy of Sciences of Belarus, the United Institute of Informatics Problems | Informatika | 1816-0301 | 2019-09-01 | 1631422846 | Experimental assessment of adversarial attacks on deep neural networks in medical image recognition | D. M. Voynov (Belarusian State University); V. A. Kovalev (The United Institute of Informatics Problems of the National Academy of Sciences of Belarus) | This paper addresses how the success rate of adversarial attacks on deep neural networks depends on the biomedical image type and on the control parameters used to generate adversarial examples. With this work we aim to contribute towards the accumulation of experimental results on adversarial attacks for the community dealing with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing more than 900,000 chest X-ray and histology images of malignant tumors. Increasing the amplitude and the number of iterations of the adversarial perturbations when generating malicious adversarial images increases the fraction of successful attacks for the majority of image types examined in this study. Histology images tend to be less sensitive to the growth of the perturbation amplitude. It was found that the success of attacks dropped dramatically when the original confidence of predicting the image class exceeded 0.95. | https://inf.grid.by/jour/article/view/876 | adversarial attacks; deep learning; security of neural networks; chest X-ray images; histology images |
spellingShingle | D. M. Voynov; V. A. Kovalev; Experimental assessment of adversarial attacks on deep neural networks in medical image recognition; Informatika; adversarial attacks; deep learning; security of neural networks; chest X-ray images; histology images |
title | Experimental assessment of adversarial attacks on deep neural networks in medical image recognition |
title_full | Experimental assessment of adversarial attacks on deep neural networks in medical image recognition |
title_fullStr | Experimental assessment of adversarial attacks on deep neural networks in medical image recognition |
title_full_unstemmed | Experimental assessment of adversarial attacks on deep neural networks in medical image recognition |
title_short | Experimental assessment of adversarial attacks on deep neural networks in medical image recognition |
title_sort | experimental assessment of adversarial attacks on deep neural networks in medical image recognition |
topic | adversarial attacks; deep learning; security of neural networks; chest X-ray images; histology images |
url | https://inf.grid.by/jour/article/view/876 |
work_keys_str_mv | AT dmvoynov experimentalassessmentofadversarialattacksondeepneuralnetworksinmedicalimagerecognition AT vakovalev experimentalassessmentofadversarialattacksondeepneuralnetworksinmedicalimagerecognition |
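To make the attack described in the abstract concrete, here is a minimal sketch of a white-box L-infinity Projected Gradient Descent (PGD) attack in which the perturbation amplitude (`eps`) and the number of iterations (`n_iter`) are the control parameters the paper varies. It assumes a generic PyTorch image classifier with pixel values in [0, 1]; the function and parameter names are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, n_iter=10):
    """L-infinity PGD: raise the classification loss while keeping the
    perturbed image within eps of the original.

    eps    -- amplitude of the adversarial perturbation (varied in the paper)
    n_iter -- number of gradient iterations (also varied in the paper)
    alpha  -- step size of a single iteration
    """
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient step, then project back into the
        # eps-ball around the original image and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

An attack of this kind counts as successful when the model's prediction for the perturbed image differs from its prediction for the original image; the paper reports how the fraction of such successes grows as `eps` and `n_iter` increase.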