Exploring the vulnerability in the inference phase of advanced persistent threats

Bibliographic Details
Main Authors: Qi Wu, Qiang Li, Dong Guo, Xiangyu Meng
Format: Article
Language: English
Published: Wiley 2022-03-01
Series: International Journal of Distributed Sensor Networks
Online Access: https://doi.org/10.1177/15501329221080417
Description
Summary: In recent years, the Internet of Things has been widely used in modern life. Advanced persistent threats are long-term network attacks on specific targets in which attackers use advanced attack methods. With the widespread application of the Internet of Things, Internet of Things targets have also come under threat from advanced persistent threats, and Internet of Things devices such as sensors are weaker than hosts in terms of security. In the field of advanced persistent threat detection, most work has used machine learning methods, whether for host-based or network-based detection. However, models built with machine learning methods lack robustness because they can easily be attacked with adversarial examples. In this article, we summarize the characteristics of advanced persistent threat traffic and propose an algorithm for generating adversarial examples against advanced persistent threat detection models. We first train advanced persistent threat detection models using different machine learning methods, among which the highest F1-score is 0.9791. Then, we use the proposed algorithm to mount a grey-box attack on one of the models, and its detection success rate drops from 98.52% to 1.47%. We show that advanced persistent threat adversarial examples are transferable, and we use this property to successfully mount black-box attacks on the other models. The detection success rate of the most strongly affected model drops from 98.66% to 0.13%.
ISSN: 1550-1477
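
To illustrate the kind of attack the abstract describes, the following is a minimal sketch, not the paper's actual algorithm: it assumes synthetic data in place of the authors' advanced persistent threat traffic features, a logistic regression surrogate with white-box access, and an FGSM-style perturbation step. Replaying the perturbed malicious samples against a separately trained random forest demonstrates the adversarial-example transferability the authors exploit for their black-box attacks.

# Hedged sketch of a transfer-based evasion attack. Synthetic data and
# the FGSM-style step on a linear surrogate are assumptions; the paper's
# traffic features and adversarial-example algorithm are not reproduced.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for traffic features (class 1 = malicious).
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Surrogate detector (white-box access) and target detector (black-box).
surrogate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
target = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# FGSM-style step on the linear surrogate: move malicious samples against
# the sign of the surrogate's weights to lower its malicious score.
eps = 0.5
mal = X_test[y_test == 1]
X_adv = mal - eps * np.sign(surrogate.coef_)  # broadcasts over samples

# Detection rate of the black-box target before and after perturbation.
print("target detection rate, clean:       %.4f" % target.predict(mal).mean())
print("target detection rate, adversarial: %.4f" % target.predict(X_adv).mean())

In this sketch the black-box random forest's detection rate on the perturbed samples may fall well below its rate on clean samples even though the perturbation was computed only against the surrogate, mirroring in miniature the drops the article reports for its grey-box and black-box attacks.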