Perspectives on Soft Actor–Critic (SAC)-Aided Operational Control Strategies for Modern Power Systems with Growing Stochastics and Dynamics
The ever-growing penetration of renewable energy with substantial uncertainties and stochastic characteristics significantly affects the modern power grid’s secure and economical operation. Nevertheless, coordinating various types of resources to derive effective online control decisions for a large-scale power network remains a big challenge.
Main Authors: Jinbo Liu, Qinglai Guo, Jing Zhang, Ruisheng Diao, Guangjun Xu
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Applied Sciences
Subjects: artificial intelligence; line flow control; reinforcement learning; soft actor–critic; voltage control
Online Access: https://www.mdpi.com/2076-3417/15/2/900
author | Jinbo Liu; Qinglai Guo; Jing Zhang; Ruisheng Diao; Guangjun Xu |
collection | DOAJ |
description | The ever-growing penetration of renewable energy with substantial uncertainties and stochastic characteristics significantly affects the modern power grid’s secure and economical operation. Nevertheless, coordinating various types of resources to derive effective online control decisions for a large-scale power network remains a big challenge. To tackle the limitations of existing control approaches that require full-system models with accurate parameters and conduct real-time extensive sensitivity-based analyses in handling the growing uncertainties, this paper presents a novel data-driven control framework using reinforcement learning (RL) algorithms to train robust RL agents from high-fidelity grid simulations for providing immediate and effective controls in a real-time environment. A two-stage method, consisting of offline training and periodic updates, is proposed to train agents to enable robust controls of voltage profiles, transmission losses, and line flows using a state-of-the-art RL algorithm, soft actor–critic (SAC). The effectiveness of the proposed RL-based control framework is validated via comprehensive case studies conducted on the East China power system with actual operation scenarios. |
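The abstract above names soft actor–critic (SAC) as the RL algorithm behind the proposed control framework. As context for readers unfamiliar with SAC, here is a minimal sketch of its core idea, the entropy-regularized (soft) Bellman target used to train the twin critics. All function names and numbers are illustrative assumptions, not taken from the paper; the actual framework operates on high-dimensional grid states and actions via neural networks.

```python
import math

def soft_bellman_target(reward, gamma, alpha, q1_next, q2_next, logp_next):
    """Soft TD target used by SAC's critics:
    y = r + gamma * (min(Q1', Q2') - alpha * log pi(a'|s')).
    Taking the minimum of twin critics curbs overestimation;
    the -alpha * log pi term rewards policy entropy."""
    return reward + gamma * (min(q1_next, q2_next) - alpha * logp_next)

# Hypothetical transition: reward 1.0, discount 0.99, temperature 0.2,
# twin critic estimates 5.0 and 4.8, next-action log-probability -1.5.
y = soft_bellman_target(1.0, 0.99, 0.2, 5.0, 4.8, -1.5)
print(round(y, 4))  # 1.0 + 0.99 * (4.8 + 0.3) = 6.049
```

The temperature `alpha` trades off reward maximization against exploration; in modern SAC implementations it is typically tuned automatically rather than fixed as here.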
format | Article |
id | doaj-art-8723e587e4224bfdaa38aae41c661a08 |
institution | Kabale University |
issn | 2076-3417 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
doi | 10.3390/app15020900 |
affiliations | Jinbo Liu: Department of Electrical Engineering, Tsinghua University, Beijing 100084, China; Qinglai Guo: Department of Electrical Engineering, Tsinghua University, Beijing 100084, China; Jing Zhang: SGCC Zhejiang Electric Power Company, Hangzhou 310007, China; Ruisheng Diao: The Zhejiang University-University of Illinois Urbana-Champaign Institute, Zhejiang University, Haining 314400, China; Guangjun Xu: The Zhejiang University-University of Illinois Urbana-Champaign Institute, Zhejiang University, Haining 314400, China |
title | Perspectives on Soft Actor–Critic (SAC)-Aided Operational Control Strategies for Modern Power Systems with Growing Stochastics and Dynamics |
topic | artificial intelligence; line flow control; reinforcement learning; soft actor–critic; voltage control |
url | https://www.mdpi.com/2076-3417/15/2/900 |