Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems

Trams increasingly rely on object detectors to perceive their running environment, and those detectors are widely built on deep learning networks. The growing use of neural networks has exposed them to severe attacks such as adversarial example attacks, which threaten tram safety. Only by studying adversarial attacks thoroughly can researchers develop better defence methods against them. However, most existing methods of generating adversarial examples are devoted to classification, and none of them target tram environment perception systems. In this paper, we propose an improved projected gradient descent (PGD) algorithm and an improved Carlini and Wagner (C&W) algorithm to generate adversarial examples against Faster R-CNN object detectors. Experiments verify that both algorithms can successfully conduct nontargeted and targeted white-box digital attacks while trams are running. We also compare the performance of the two methods, including attack effects, similarity to clean images, and generation time. The results show that both algorithms can generate adversarial examples within 220 seconds, a much shorter time, without any decrease in the success rate.
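For orientation only, the sketch below shows the standard nontargeted PGD loop for an image classifier in PyTorch; it is not the authors' improved PGD or C&W method, which attack Faster R-CNN's detection losses. The names model, loss_fn and the hyperparameters eps, alpha, steps are illustrative assumptions, not values taken from the article.

# Minimal sketch of the standard (unimproved) PGD attack, assuming a
# classifier `model` and a loss function `loss_fn` such as cross-entropy.
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=40):
    """Return an adversarial example within an L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    # Random start inside the eps-ball, as in the usual PGD formulation.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Nontargeted attack: maximize the loss on the true labels y.
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # signed gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)    # project back into the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                     # keep pixel values valid
    return x_adv.detach()

For a targeted attack, one would instead step along the negative gradient sign so that the loss toward a chosen target label is minimized.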

Bibliographic Details
Main Authors: Shize Huang, Xiaowen Liu, Xiaolu Yang, Zhaoxin Zhang, Lingyu Yang
Format: Article
Language:English
Published: Wiley 2020-01-01
Series:Complexity
Online Access:http://dx.doi.org/10.1155/2020/6814263
_version_ 1832566347433771008
author Shize Huang
Xiaowen Liu
Xiaolu Yang
Zhaoxin Zhang
Lingyu Yang
author_facet Shize Huang
Xiaowen Liu
Xiaolu Yang
Zhaoxin Zhang
Lingyu Yang
author_sort Shize Huang
collection DOAJ
description Trams increasingly rely on object detectors to perceive their running environment, and those detectors are widely built on deep learning networks. The growing use of neural networks has exposed them to severe attacks such as adversarial example attacks, which threaten tram safety. Only by studying adversarial attacks thoroughly can researchers develop better defence methods against them. However, most existing methods of generating adversarial examples are devoted to classification, and none of them target tram environment perception systems. In this paper, we propose an improved projected gradient descent (PGD) algorithm and an improved Carlini and Wagner (C&W) algorithm to generate adversarial examples against Faster R-CNN object detectors. Experiments verify that both algorithms can successfully conduct nontargeted and targeted white-box digital attacks while trams are running. We also compare the performance of the two methods, including attack effects, similarity to clean images, and generation time. The results show that both algorithms can generate adversarial examples within 220 seconds, a much shorter time, without any decrease in the success rate.
format Article
id doaj-art-8071f021f4824c64994228221b3cb4ef
institution Kabale University
issn 1076-2787
1099-0526
language English
publishDate 2020-01-01
publisher Wiley
record_format Article
series Complexity
spelling doaj-art-8071f021f4824c64994228221b3cb4ef
2025-02-03T01:04:28Z
eng
Wiley
Complexity
1076-2787
1099-0526
2020-01-01
2020
10.1155/2020/6814263
6814263
Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems
Shize Huang (Shanghai Key Laboratory of Rail Infrastructure Durability and System Safety, Tongji University, Shanghai, China)
Xiaowen Liu (The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji University, Shanghai, China)
Xiaolu Yang (China Railway Shanghai Group Co., Ltd., Shanghai Signal and Communication Division, Shanghai, China)
Zhaoxin Zhang (The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji University, Shanghai, China)
Lingyu Yang (The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji University, Shanghai, China)
http://dx.doi.org/10.1155/2020/6814263
spellingShingle Shize Huang
Xiaowen Liu
Xiaolu Yang
Zhaoxin Zhang
Lingyu Yang
Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems
Complexity
title Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems
title_full Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems
title_fullStr Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems
title_full_unstemmed Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems
title_short Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems
title_sort two improved methods of generating adversarial examples against faster r cnns for tram environment perception systems
url http://dx.doi.org/10.1155/2020/6814263
work_keys_str_mv AT shizehuang twoimprovedmethodsofgeneratingadversarialexamplesagainstfasterrcnnsfortramenvironmentperceptionsystems
AT xiaowenliu twoimprovedmethodsofgeneratingadversarialexamplesagainstfasterrcnnsfortramenvironmentperceptionsystems
AT xiaoluyang twoimprovedmethodsofgeneratingadversarialexamplesagainstfasterrcnnsfortramenvironmentperceptionsystems
AT zhaoxinzhang twoimprovedmethodsofgeneratingadversarialexamplesagainstfasterrcnnsfortramenvironmentperceptionsystems
AT lingyuyang twoimprovedmethodsofgeneratingadversarialexamplesagainstfasterrcnnsfortramenvironmentperceptionsystems