Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid

Electricity theft has become a major problem worldwide and a significant burden for utility companies. It not only results in revenue loss but also degrades power quality, increases generation costs, and raises overall electricity prices. Electricity or energy theft detection (ETD)...

Bibliographic Details
Main Authors: Santosh Nirmal, Pramod Patil, Sagar Shinde
Format: Article
Language:English
Published: Elsevier 2025-03-01
Series:e-Prime: Advances in Electrical Engineering, Electronics and Energy
Subjects:
Online Access:http://www.sciencedirect.com/science/article/pii/S2772671125000166
_version_ 1832542475837767680
author Santosh Nirmal
Pramod Patil
Sagar Shinde
author_facet Santosh Nirmal
Pramod Patil
Sagar Shinde
author_sort Santosh Nirmal
collection DOAJ
description Electricity theft has become a major problem worldwide and a significant burden for utility companies. It not only results in revenue loss but also degrades power quality, increases generation costs, and raises overall electricity prices. Electricity or energy theft detection (ETD) systems based on machine learning, particularly neural networks, achieve high detection accuracy and have become popular in the literature. Recent studies, however, reveal that machine learning and deep learning models are vulnerable, and new attack techniques keep emerging in fields such as energy and finance. As the use of machine learning for energy theft detection has grown, it has become important to explore its weaknesses. Research has shown that most ETD models are vulnerable to evasion attacks (EA), whose goal is to reduce electricity costs by deceiving the model into classifying a fraudulent customer as legitimate. In this paper, four experiments are conducted. First, we evaluate the performance of a Convolutional Neural Network and AdaBoost (CNN-Adaboost) ETD system. Second, we design an evasion attack to assess the model's performance under attack; the attack uses two methods: our newly proposed Adversarial Data Generation Method (ADGM), an algorithm designed to generate adversarial data, and the Fast Gradient Sign Method (FGSM). Third, we test the attack success rate for different percentages of malicious consumers. Finally, the performance of CNN-Adaboost and other state-of-the-art methods is tested and compared using 10 % and 20 % adversarial data. The proposed attack is validated on the State Grid Corporation of China (SGCC) dataset. The ADGM and FGSM attack models generate adversarial evasion samples by modifying benign samples along with already available malicious data. These samples are passed to a surrogate model to test how effectively they evade it, and only the samples that successfully deceive the surrogate model are forwarded to the target ETD model. The overall performance of the CNN-Adaboost ETD model decreased significantly for both methods: accuracy dropped from 96.3 % to 53.61 % for ADGM and to 63.42 % for FGSM, and the transferability rates are 95.82 % and 90.68 % for ADGM and FGSM, respectively. Our findings reveal that the attack success rate (ASR) of ADGM is 94.11 %, which is higher than that of FGSM. We also observe that as the percentage of adversarial data increases, model accuracy decreases: the accuracy of CNN-Adaboost, initially 96.3 %, dropped to 85.45 % and 79.43 % for 10 % and 20 % adversarial data, respectively. These adversarial examples are transferable and are useful for designing robust and secure machine learning (ML) models.
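note The abstract describes an evasion pipeline: craft adversarial versions of theft consumption profiles with FGSM or the authors' ADGM, keep only the samples that fool a surrogate classifier, then measure accuracy drop, transferability, and attack success rate on the target CNN-Adaboost detector. The sketch below illustrates one plausible reading of the FGSM step and the ASR metric in PyTorch; it is not the authors' implementation. The model interfaces, the epsilon value, the [0, 1] clamping range, the assumption that models return class logits with class 0 meaning "benign", and all variable names are illustrative assumptions, and ADGM itself is not reproduced because the abstract gives no details of it.

import torch
import torch.nn.functional as F

def fgsm_evasion(surrogate, x, eps=0.05):
    # Targeted FGSM: nudge theft (malicious) consumption profiles x so the
    # surrogate classifier labels them as benign (assumed class 0):
    #   x_adv = x - eps * sign(grad_x CE(surrogate(x), benign))
    x = x.clone().detach().requires_grad_(True)
    benign = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
    loss = F.cross_entropy(surrogate(x), benign)
    loss.backward()
    x_adv = (x - eps * x.grad.sign()).clamp(0.0, 1.0)  # assumes profiles scaled to [0, 1]
    return x_adv.detach()

def attack_success_rate(model, x_adv):
    # Fraction of adversarial theft samples the model accepts as benign.
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds == 0).float().mean().item()

# Usage sketch (hypothetical names): keep only the samples that deceive the
# surrogate, then check how well they transfer to the target ETD model.
#   x_adv = fgsm_evasion(surrogate_model, x_theft)
#   with torch.no_grad():
#       fooled = surrogate_model(x_adv).argmax(dim=1) == 0
#   transfer_rate = attack_success_rate(target_cnn_adaboost, x_adv[fooled])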
format Article
id doaj-art-d45d2f1887a147f29331697091cbac06
institution Kabale University
issn 2772-6711
language English
publishDate 2025-03-01
publisher Elsevier
record_format Article
series e-Prime: Advances in Electrical Engineering, Electronics and Energy
spelling doaj-art-d45d2f1887a147f29331697091cbac06 | 2025-02-04T04:10:42Z | eng | Elsevier | e-Prime: Advances in Electrical Engineering, Electronics and Energy | 2772-6711 | 2025-03-01 | Volume 11, Article 100909 | Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid | Santosh Nirmal (Dr. D.Y. Patil Institute of Technology, Pimpri, Pune, Maharashtra, India); Pramod Patil (Dr. D.Y. Patil Institute of Technology, CSIR-Unit for Research and Development of Information Products, Pune, Maharashtra, India; corresponding author); Sagar Shinde (PCETS NMVPMs, Nutan Maharashtra Institute of Engineering and Technology, Pune, Maharashtra, India) | http://www.sciencedirect.com/science/article/pii/S2772671125000166 | Adversarial examples; Energy theft detection system; Evasion attack; Smart grid
spellingShingle Santosh Nirmal
Pramod Patil
Sagar Shinde
Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid
e-Prime: Advances in Electrical Engineering, Electronics and Energy
Adversarial examples
Energy theft detection system
Evasion attack
Smart grid
title Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid
title_full Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid
title_fullStr Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid
title_full_unstemmed Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid
title_short Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid
title_sort adversarial measurements for convolutional neural network based energy theft detection model in smart grid
topic Adversarial examples
Energy theft detection system
Evasion attack
Smart grid
url http://www.sciencedirect.com/science/article/pii/S2772671125000166
work_keys_str_mv AT santoshnirmal adversarialmeasurementsforconvolutionalneuralnetworkbasedenergytheftdetectionmodelinsmartgrid
AT pramodpatil adversarialmeasurementsforconvolutionalneuralnetworkbasedenergytheftdetectionmodelinsmartgrid
AT sagarshinde adversarialmeasurementsforconvolutionalneuralnetworkbasedenergytheftdetectionmodelinsmartgrid