Dual-targeted adversarial example in evasion attack on graph neural networks
Main Authors:
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-85493-2
Summary: This study proposes a novel approach for generating dual-targeted adversarial examples in Graph Neural Networks (GNNs), significantly advancing the field of graph-based adversarial attacks. Unlike traditional methods that focus on inducing specific misclassifications in a single model, this approach creates adversarial samples that can simultaneously target multiple models, each inducing a distinct misclassification. This innovation addresses a critical gap in existing techniques by enabling adversarial attacks that affect various models with different objectives. The authors provide a detailed explanation of the method's principles and structure, rigorously evaluate its effectiveness across several GNN models, and visualize the impact using datasets such as Reddit and OGBN-Products. The contributions highlight the potential for dual-targeted attacks to disrupt GNN performance and emphasize the need for enhanced defensive strategies in graph-based learning systems.
ISSN: 2045-2322
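The summary above describes a single adversarial input crafted to mislead two models at once, each toward a different target class. A minimal numerical sketch of that general idea follows; it uses plain linear classifiers as stand-ins for GNNs, and all names, shapes, scales, and the joint cross-entropy objective are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Two hypothetical victim models: small linear classifiers over a feature
# vector (stand-ins for trained GNNs; shapes and scales are assumptions).
n_feat, n_class = 16, 4
W_a = 0.3 * rng.normal(size=(n_class, n_feat))
W_b = 0.3 * rng.normal(size=(n_class, n_feat))

x = rng.normal(size=n_feat)  # clean input features
t_a, t_b = 1, 3              # distinct target classes, one per model

# Dual-targeted attack sketch: minimise the *sum* of two cross-entropy
# losses, each term pushing a different model toward its own target label.
# For a linear model with logits W @ x, the gradient of softmax
# cross-entropy w.r.t. the input is W.T @ (p - onehot(target)).
for _ in range(5000):
    p_a = softmax(W_a @ x)
    p_b = softmax(W_b @ x)
    grad = (W_a.T @ (p_a - np.eye(n_class)[t_a])
            + W_b.T @ (p_b - np.eye(n_class)[t_b]))
    x -= 0.2 * grad          # gradient step on the input only; models stay fixed

# Each model now classifies the same perturbed input as its own target.
print(int(np.argmax(W_a @ x)), int(np.argmax(W_b @ x)))
```

Because both cross-entropy terms share one input, a single perturbation can satisfy both target constraints at once; that shared-input property is the core of the dual-targeted setting the summary highlights.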