Dual-Targeted adversarial example in evasion attack on graph neural networks
Abstract: This study proposes a novel approach for generating dual-targeted adversarial examples in Graph Neural Networks (GNNs), significantly advancing the field of graph-based adversarial attacks. Unlike traditional methods that focus on inducing specific misclassifications in a single model, our approach creates adversarial samples that can simultaneously target multiple models, each inducing distinct misclassifications. This innovation addresses a critical gap in existing techniques by enabling adversarial attacks that are capable of affecting various models with different objectives. We provide a detailed explanation of the method's principles and structure, rigorously evaluate its effectiveness across several GNN models, and visualize the impact using datasets such as Reddit and OGBN-Products. Our contributions highlight the potential for dual-targeted attacks to disrupt GNN performance and emphasize the need for enhanced defensive strategies in graph-based learning systems.
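The abstract's core idea, optimizing one perturbation that pushes each of several models toward a different attacker-chosen class, can be sketched briefly. The code below is not the authors' implementation; it is a minimal illustration assuming a feature-perturbation evasion attack against two toy two-layer GCNs, with all names (`TinyGCN`, `dual_targeted_attack`) and hyperparameters (`eps`, `steps`, `lr`) chosen only for illustration. A joint targeted cross-entropy loss asks model A to classify the victim node as `target_a` while model B classifies the same node as `target_b`.

```python
# Minimal sketch (not the paper's method) of a dual-targeted evasion attack:
# one feature perturbation is optimized so that two different GNNs each
# misclassify the same victim node into a *different* attacker-chosen class.
import torch
import torch.nn.functional as F


class TinyGCN(torch.nn.Module):
    """Two-layer GCN over a dense normalized adjacency (illustrative only)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)          # per-node logits


def normalize_adj(adj):
    """Symmetrically normalize A + I (dense, small-graph sketch)."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt


def dual_targeted_attack(model_a, model_b, a_hat, x, victim,
                         target_a, target_b, eps=0.1, steps=100, lr=0.01):
    """PGD-style search for one perturbation that sends `victim` to
    `target_a` under model_a and to `target_b` under model_b."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = x + delta
        logits_a = model_a(a_hat, x_adv)[victim]
        logits_b = model_b(a_hat, x_adv)[victim]
        # Joint targeted loss: each model is pushed toward its own target class.
        loss = F.cross_entropy(logits_a.unsqueeze(0), torch.tensor([target_a])) \
             + F.cross_entropy(logits_b.unsqueeze(0), torch.tensor([target_b]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)         # keep the perturbation small
    return (x + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, c = 8, 16, 4                      # toy graph: 8 nodes, 4 classes
    adj = (torch.rand(n, n) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()
    a_hat = normalize_adj(adj)
    x = torch.randn(n, d)
    model_a, model_b = TinyGCN(d, 32, c), TinyGCN(d, 32, c)  # untrained stand-ins
    x_adv = dual_targeted_attack(model_a, model_b, a_hat, x,
                                 victim=0, target_a=1, target_b=2)
    print("model A predicts:", model_a(a_hat, x_adv)[0].argmax().item())
    print("model B predicts:", model_b(a_hat, x_adv)[0].argmax().item())
```

In a realistic evaluation the two models would be trained on a benchmark graph such as Reddit or OGBN-Products, and attack success would be measured by whether both target labels are achieved within the perturbation budget.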
Main Authors: Hyun Kwon; Dae-Jin Kim
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Subjects: Graph neural network; Adversarial example; Evasion attack; Node classification; Machine learning
Online Access: https://doi.org/10.1038/s41598-025-85493-2
author | Hyun Kwon; Dae-Jin Kim |
collection | DOAJ |
description | Abstract This study proposes a novel approach for generating dual-targeted adversarial examples in Graph Neural Networks (GNNs), significantly advancing the field of graph-based adversarial attacks. Unlike traditional methods that focus on inducing specific misclassifications in a single model, our approach creates adversarial samples that can simultaneously target multiple models, each inducing distinct misclassifications. This innovation addresses a critical gap in existing techniques by enabling adversarial attacks that are capable of affecting various models with different objectives. We provide a detailed explanation of the method’s principles and structure, rigorously evaluate its effectiveness across several GNN models, and visualize the impact using datasets such as Reddit and OGBN-Products. Our contributions highlight the potential for dual-targeted attacks to disrupt GNN performance and emphasize the need for enhanced defensive strategies in graph-based learning systems. |
format | Article |
id | doaj-art-72b9b956c11249d6b86d27f5d007fb6e |
institution | Kabale University |
issn | 2045-2322 |
language | English |
publishDate | 2025-01-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | Hyun Kwon (Department of Artificial Intelligence and Data Science, Korea Military Academy); Dae-Jin Kim (Department of Architectural Engineering, Kyung Hee University). Dual-Targeted adversarial example in evasion attack on graph neural networks. Scientific Reports (ISSN 2045-2322), vol. 15, no. 1, pp. 1-15, Nature Portfolio, 2025-01-01. https://doi.org/10.1038/s41598-025-85493-2 |
title | Dual-Targeted adversarial example in evasion attack on graph neural networks |
topic | Graph neural network; Adversarial example; Evasion attack; Node classification; Machine learning |
url | https://doi.org/10.1038/s41598-025-85493-2 |