Efficient Method for Robust Backdoor Detection and Removal in Feature Space Using Clean Data
The steady increase in proposed backdoor attacks on deep neural networks highlights the need for robust defense methods for their detection and removal. A backdoor attack is an attack in which hidden triggers are added to the input data during training, with the goal of changing the behavior of...
| Main Authors: | Donik Vrsnak, Marko Subasic, Sven Loncaric |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10845767/ |
Similar Items
- Defending Deep Neural Networks Against Backdoor Attack by Using De-Trigger Autoencoder
  by: Hyun Kwon
  Published: (2025-01-01)
- TIBW: Task-Independent Backdoor Watermarking with Fine-Tuning Resilience for Pre-Trained Language Models
  by: Weichuan Mo, et al.
  Published: (2025-01-01)
- GuardianMPC: Backdoor-Resilient Neural Network Computation
  by: Mohammad Hashemi, et al.
  Published: (2025-01-01)
- Enhancing Security in International Data Spaces: A STRIDE Framework Approach
  by: Nikola Gavric, et al.
  Published: (2024-12-01)
- Optimization of Water Guarantee for Making Face Cleaning Soap
  by: Yulia Vera, et al.
  Published: (2022-03-01)