AGASI: A Generative Adversarial Network-Based Approach to Strengthening Adversarial Image Steganography

Bibliographic Details
Main Authors: Haiju Fan, Changyuan Jin, Ming Li
Format: Article
Language: English
Published: MDPI AG 2025-03-01
Series: Entropy
Subjects:
Online Access: https://www.mdpi.com/1099-4300/27/3/282
Description
Summary: Steganography has been widely used in the field of image privacy protection. However, with the advancement of steganalysis techniques, deep learning-based models are now capable of accurately detecting modifications in stego-images, posing a significant threat to traditional steganography. To address this, we propose AGASI, a GAN-based approach for strengthening adversarial image steganography. This method employs an encoder as the generator, paired with a discriminator, to form a generative adversarial network (GAN), thereby enhancing the robustness of stego-images against steganalysis tools. Additionally, the GAN framework narrows the gap between the original secret image and the extracted image, while the decoder effectively extracts the secret image from the stego-image, achieving the goal of image privacy protection. Experimental results demonstrate that the AGASI method not only preserves high-quality secret images but also effectively reduces the accuracy of neural network classifiers, inducing misclassifications and significantly increasing the embedding capacity of the steganography system. For instance, under PGD attack at higher perturbation levels, the adversarial stego-images generated by the GAN maintain the quality of the secret image while achieving an 84.73% misclassification rate against neural network detectors. Compared to images of the same visual quality, our method increased the misclassification rate by 23.31%.
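The PGD attack mentioned in the abstract iteratively perturbs an input within a bounded L-infinity ball to induce misclassification. The following is a minimal sketch of that idea in pure Python, using a hypothetical logistic classifier with an analytic gradient as a stand-in; it is not the paper's networks or training procedure, only an illustration of the bounded sign-step-and-project loop that PGD performs.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pgd_perturb(x, w, b, y, eps=0.1, alpha=0.05, steps=5):
    """Perturb feature vector x within an L-infinity ball of radius eps
    around the original x, so that a toy logistic classifier
    sigmoid(w . x + b) becomes less confident in the true label y.
    All parameters here are illustrative, not from the paper."""
    x_adv = list(x)
    for _ in range(steps):
        z = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
        p = sigmoid(z)
        # Gradient of the binary cross-entropy loss w.r.t. the input
        # is (p - y) * w for this logistic model.
        grad = [(p - y) * wi for wi in w]
        # Ascend the loss with a signed step (the "sign" in PGD) ...
        x_adv = [xi + alpha * (1 if g > 0 else -1 if g < 0 else 0)
                 for xi, g in zip(x_adv, grad)]
        # ... then project back into the eps-ball around the original x.
        x_adv = [min(max(xa, xo - eps), xo + eps)
                 for xa, xo in zip(x_adv, x)]
    return x_adv
```

For example, a point correctly classified as y = 1 is pushed toward the decision boundary: each coordinate ends within eps of its original value, yet the classifier's confidence in the true label drops. In AGASI this kind of bounded perturbation is what lets the stego-image mislead the classifier while remaining visually close to the cover.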
ISSN:1099-4300