TIBW: Task-Independent Backdoor Watermarking with Fine-Tuning Resilience for Pre-Trained Language Models
Pre-trained language models such as BERT, GPT-3, and T5 have made significant advancements in natural language processing (NLP). However, their widespread adoption raises concerns about intellectual property (IP) protection, as unauthorized use can undermine innovation. Watermarking has emerged as a...
Main Authors: Weichuan Mo, Kongyang Chen, Yatie Xiao
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/13/2/272
Similar Items
- Efficient Method for Robust Backdoor Detection and Removal in Feature Space Using Clean Data
  by: Donik Vrsnak, et al.
  Published: (2025-01-01)
- GuardianMPC: Backdoor-Resilient Neural Network Computation
  by: Mohammad Hashemi, et al.
  Published: (2025-01-01)
- Frozen Weights as Prior for Parameter-Efficient Fine-Tuning
  by: Xiaolong Ma, et al.
  Published: (2025-01-01)
- Classification of AI-Generated Images Using the Fine-Tuning Method on Residual Networks
  by: Sulthan Abiyyu Hakim, et al.
  Published: (2024-07-01)
- Enhancing zero-shot stance detection via multi-task fine-tuning with debate data and knowledge augmentation
  by: Qinlong Fan, et al.
  Published: (2025-01-01)