Experimental assessment of adversarial attacks to the deep neural networks in medical image recognition

Bibliographic Details
Main Authors: D. M. Voynov, V. A. Kovalev
Format: Article
Language: Russian
Published: National Academy of Sciences of Belarus, United Institute of Informatics Problems, 2019-09-01
Series: Informatika
Subjects:
Online Access: https://inf.grid.by/jour/article/view/876
Description
Summary: This paper addresses how the success rate of adversarial attacks on deep neural networks depends on the type of biomedical image and on the control parameters used to generate adversarial examples. With this work, the authors aim to contribute experimental results on adversarial attacks to the community dealing with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing more than 900,000 chest X-ray and histology images of malignant tumors. Increasing the amplitude and the number of iterations of adversarial perturbations when generating malicious adversarial images increases the fraction of successful attacks for the majority of image types examined in this study. Histology images tend to be less sensitive to growth in the amplitude of adversarial perturbations. Attack success dropped dramatically when the original confidence of the predicted image class exceeded 0.95.
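
For readers unfamiliar with the attack, the sketch below illustrates the white-box PGD procedure named in the summary, written in PyTorch. The model, the perturbation amplitude epsilon, the step size alpha, and the iteration count are illustrative assumptions, not the authors' experimental settings; epsilon and iters correspond to the "amplitude" and "number of iterations" control parameters studied in the paper.

    # Minimal PGD sketch; epsilon, alpha, and iters are hypothetical values.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, images, labels, epsilon=0.03, alpha=0.005, iters=10):
        # Iterate signed-gradient ascent steps on the classification loss,
        # projecting the perturbation back into the L-infinity ball of
        # radius epsilon so the attack amplitude stays bounded.
        adv = images.clone().detach()
        for _ in range(iters):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), labels)
            grad = torch.autograd.grad(loss, adv)[0]
            with torch.no_grad():
                adv = adv + alpha * grad.sign()                         # ascend the loss
                adv = images + (adv - images).clamp(-epsilon, epsilon)  # project to the epsilon-ball
                adv = adv.clamp(0.0, 1.0)                               # keep pixels in valid range
        return adv.detach()

Under this formulation, raising epsilon or iters makes the perturbation stronger, which matches the reported growth in the fraction of successful attacks.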
ISSN: 1816-0301