Neural Linguistic Steganalysis via Multi-Head Self-Attention
Main Authors:
Format: Article
Language: English
Published: Wiley, 2021-01-01
Series: Journal of Electrical and Computer Engineering
Online Access: http://dx.doi.org/10.1155/2021/6668369
Summary: Linguistic steganalysis can indicate the existence of steganographic content in suspicious text carriers. Precise linguistic steganalysis of suspicious carriers is critical for multimedia security. In this paper, we introduce a neural linguistic steganalysis approach based on multi-head self-attention. In the proposed steganalysis approach, the words in a text are first mapped into a semantic space as hidden representations to better model semantic features. We then use multi-head self-attention to model the interactions between the words in the carrier. Finally, a softmax layer categorizes the input text as cover or stego. Extensive experiments validate the effectiveness of our approach.
ISSN: 2090-0147, 2090-0155
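The summary describes a three-stage pipeline: word embedding into a semantic space, multi-head self-attention over the carrier text, and a softmax classifier that labels the text as cover or stego. The sketch below illustrates that pipeline in PyTorch; it is a minimal reconstruction based only on the summary, and all layer sizes, the mean pooling step, and names such as AttentionSteganalyzer are assumptions rather than details taken from the paper.

```python
# Minimal sketch of the pipeline described in the summary (assumed PyTorch
# implementation): embedding -> multi-head self-attention -> softmax classifier.
# Layer sizes, pooling strategy, and class names are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionSteganalyzer(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_heads=4, num_classes=2):
        super().__init__()
        # Map word indices into a semantic (embedding) space.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Multi-head self-attention models interactions between words in the carrier.
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Linear layer whose softmax output categorizes the text as cover or stego.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        attn_out, _ = self.attention(x, x, x)    # self-attention over the word sequence
        pooled = attn_out.mean(dim=1)            # average over the sequence positions
        return torch.softmax(self.classifier(pooled), dim=-1)

# Example usage: score a batch of 8 texts of length 32 from a 10,000-word vocabulary.
model = AttentionSteganalyzer(vocab_size=10000)
probs = model(torch.randint(0, 10000, (8, 32)))  # shape (8, 2): cover/stego probabilities
```

In this sketch, mean pooling collapses the attended word representations into a single text-level vector before classification; the paper may aggregate the sequence differently.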