Clean-label backdoor attack on link prediction task

Bibliographic Details
Main Authors: Junming Mo, Ming Xu, Xiaogang Xing
Format: Article
Language: English
Published: SpringerOpen 2025-08-01
Series: Cybersecurity
Online Access: https://doi.org/10.1186/s42400-024-00353-2
Description
Summary: Graph Neural Networks (GNNs) have shown excellent performance as a powerful tool for the link prediction task. Recent studies have shown that GNN-based link prediction is vulnerable to backdoor attacks. However, existing backdoor attack methods for link prediction require modifying the link state, which results in poor stealthiness of the backdoor. To address this issue, this paper proposes a clean-label backdoor attack method for the link prediction task (CL-Link). Specifically, CL-Link uses subgraphs as backdoor triggers and injects a trigger by attaching a subgraph to the target link. To enhance the stealthiness of the attack, CL-Link attaches the trigger without modifying the original connection state of the target link; instead, it uses the original connection state as the label, thus minimizing disturbance to the dataset. To ensure the effectiveness of the attack, the gradient information of the model and the similarity between the trigger nodes and the nodes in the graph are used to optimize the trigger node features. Extensive experiments on multiple benchmark datasets (i.e., Cora, Citeseer, and Pubmed) show that the proposed method achieves an attack success rate of up to 97.69% with a poisoning rate of only 5%, which validates the effectiveness of the proposed approach.
ISSN: 2523-3246
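
The abstract describes two technical steps: attaching a subgraph trigger to a target link without changing its connection state, and optimizing the trigger-node features using the model's gradients together with a node-similarity signal. The Python/PyTorch sketch below illustrates those steps on a toy graph. It is not the authors' CL-Link code: the surrogate link predictor, graph size, trigger size, target link, and similarity regularizer are all illustrative assumptions.

    # Minimal sketch (not the authors' code): attach a hypothetical subgraph
    # trigger to a target link, then optimize the trigger-node features by
    # gradient descent on a surrogate GNN link predictor.
    import torch
    import torch.nn.functional as F

    class DotProductLinkPredictor(torch.nn.Module):
        """Toy surrogate: one propagation layer plus a dot-product link scorer."""
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.lin = torch.nn.Linear(in_dim, hid_dim)

        def forward(self, x, adj):
            return torch.relu(adj @ self.lin(x))   # simple message-passing step

        def score(self, h, u, v):
            return (h[u] * h[v]).sum(-1)           # score for link (u, v)

    def attach_trigger(x, adj, trigger_x, target_u, target_v):
        """Append trigger nodes and wire them to both endpoints of the target link.
        The original state of (target_u, target_v) itself is left untouched
        (the clean-label idea from the abstract)."""
        n, t = x.size(0), trigger_x.size(0)
        x_new = torch.cat([x, trigger_x], dim=0)
        adj_new = torch.zeros(n + t, n + t)
        adj_new[:n, :n] = adj
        for i in range(t):                          # trigger nodes form a clique
            for j in range(t):
                if i != j:
                    adj_new[n + i, n + j] = 1.0
            adj_new[n + i, target_u] = adj_new[target_u, n + i] = 1.0
            adj_new[n + i, target_v] = adj_new[target_v, n + i] = 1.0
        return x_new, adj_new

    # Toy data (assumed): 20 nodes, 16 features, random undirected graph.
    torch.manual_seed(0)
    x = torch.rand(20, 16)
    adj = (torch.rand(20, 20) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()
    model = DotProductLinkPredictor(16, 32)

    # Trigger-node features are the variables being optimized.
    trigger_x = torch.rand(3, 16, requires_grad=True)
    opt = torch.optim.Adam([trigger_x], lr=0.05)
    target_u, target_v = 2, 7                       # illustrative target link

    for step in range(100):
        x_new, adj_new = attach_trigger(x, adj, trigger_x, target_u, target_v)
        h = model(x_new, adj_new)
        logit = model.score(h, target_u, target_v)
        # Push the model toward the attacker-chosen link state (here: "linked").
        loss = F.binary_cross_entropy_with_logits(logit, torch.tensor(1.0))
        # Crude stand-in (assumption) for the paper's node-similarity term:
        # keep trigger features close to the average graph node features.
        sim = F.cosine_similarity(
            trigger_x, x.mean(0, keepdim=True).expand_as(trigger_x), dim=-1).mean()
        loss = loss - 0.1 * sim
        opt.zero_grad()
        loss.backward()
        opt.step()

In a full clean-label attack of the kind the abstract outlines, such a trigger-carrying sample would be added to the training data with its original (unmodified) link label, and the same trigger subgraph would be attached at inference time to the links the attacker wants misclassified.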