Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning
Multi-agent reinforcement learning has emerged as a promising candidate for the secondary control of DC microgrids. However, a one-stage reward function incorporating both voltage regulation and current sharing results in significant bus voltage fluctuations and a long current sharing time. To a...
Main Authors: | Fei Li, Weifei Tu, Yun Zhou, Heng Li, Feng Zhou, Weirong Liu, Chao Hu |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2025-03-01 |
Series: | International Journal of Electrical Power & Energy Systems |
Subjects: | DC microgrids; Multi-agent system; Deep reinforcement learning; Secondary control |
Online Access: | http://www.sciencedirect.com/science/article/pii/S0142061524005581 |
_version_ | 1832595432960688128 |
---|---|
author | Fei Li Weifei Tu Yun Zhou Heng Li Feng Zhou Weirong Liu Chao Hu |
author_facet | Fei Li Weifei Tu Yun Zhou Heng Li Feng Zhou Weirong Liu Chao Hu |
author_sort | Fei Li |
collection | DOAJ |
description | Multi-agent reinforcement learning has emerged as a promising candidate for the secondary control of DC microgrids. However, a one-stage reward function incorporating both voltage regulation and current sharing results in significant bus voltage fluctuations and a long current sharing time. To address this issue, in this paper we propose a two-stage reinforcement learning secondary control method for DC microgrids, which can effectively suppress bus voltage fluctuations and reduce the current sharing time. The multi-agent Proximal Policy Optimization (PPO) algorithm is utilized to regulate the current and voltage of each node in the microgrid. Specifically, a two-stage reward function based on voltage error and current error is designed, which effectively improves the convergence speed. Moreover, an action safety mechanism is constructed to mitigate the effects of random noise and ensure the smooth operation of the DC microgrids. We have built a hardware-in-the-loop platform to verify the effectiveness of the proposed method. Experimental results show that the proposed method effectively improves the current sharing speed and reduces the bus voltage fluctuation compared with existing methods. |
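The abstract above describes a two-stage reward that separates voltage regulation from current sharing rather than mixing both objectives in one term. The paper's exact formulation is not given in this record, so the sketch below is a hypothetical illustration of the idea: the reward penalizes only voltage error until the bus voltage is within a tolerance, then switches to penalizing the current-sharing error (the thresholds and weights here are invented for illustration).

```python
def two_stage_reward(v_error: float, i_error: float, v_tol: float = 0.5) -> float:
    """Hypothetical two-stage reward for a secondary-control agent.

    Stage 1: while the bus voltage error exceeds the tolerance,
    the agent is rewarded only for reducing voltage deviation.
    Stage 2: once voltage is regulated, the emphasis shifts to
    the current-sharing error, with a small residual voltage term.
    All constants are assumptions, not the paper's values.
    """
    if abs(v_error) > v_tol:
        # Stage 1: voltage regulation dominates
        return -abs(v_error)
    # Stage 2: current sharing dominates once voltage is within tolerance
    return -0.1 * abs(v_error) - abs(i_error)
```

Compared with a single reward summing both errors, a staged switch of this kind avoids the agent trading voltage accuracy for faster current sharing during the transient, which is the fluctuation problem the abstract attributes to one-stage rewards.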
format | Article |
id | doaj-art-e76a94c13b3e416ca165db5931857131 |
institution | Kabale University |
issn | 0142-0615 |
language | English |
publishDate | 2025-03-01 |
publisher | Elsevier |
record_format | Article |
series | International Journal of Electrical Power & Energy Systems |
spelling | doaj-art-e76a94c13b3e416ca165db59318571312025-01-19T06:23:49ZengElsevierInternational Journal of Electrical Power & Energy Systems0142-06152025-03-01164110335Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learningFei Li0Weifei Tu1Yun Zhou2Heng Li3Feng Zhou4Weirong Liu5Chao Hu6School of Automation, Central South University, Changsha, ChinaSchool of Automation, Central South University, Changsha, ChinaHunan University of Finance and Economics, Changsha, China; Corresponding authors.School of Electronic Information, Central South University, Changsha, China; Corresponding authors.School of Electrical and Information Engineering, Changsha University of Science and Technology, Changsha, ChinaSchool of Electronic Information, Central South University, Changsha, ChinaSchool of Electronic Information, Central South University, Changsha, ChinaMulti-agent reinforcement learning has emerged as a promising candidate for the secondary control of DC microgrids. However, the one-stage reward function incorporating both voltage regulation and current sharing results in the significant bus voltage fluctuations and long current sharing time. To address this issue, in this paper, we propose a two-stage reinforcement learning secondary control method for DC microgrids, which can effectively suppress the bus voltage fluctuations and reduce the current sharing time. The multi-agent Proximal Policy Optimization (PPO) algorithm is utilized to regulate the current and voltage of each node in the microgrids. Specifically, a two-stage reward function based on voltage error and current error is designed, which can effectively improve the convergence speed. Moreover, an action safe mechanism is constructed to mitigate the effects of random noise and ensure the smooth operation of the DC microgrids. We have built a hardware-in-the-loop platform to verify the effectiveness of the proposed method. Experiment results show that the proposed method can effectively improve the current sharing speed and reduce the bus voltage fluctuation when compared with existing methods.http://www.sciencedirect.com/science/article/pii/S0142061524005581DC microgridsMulti-agent systemDeep reinforcement learningSecondary control |
spellingShingle | Fei Li Weifei Tu Yun Zhou Heng Li Feng Zhou Weirong Liu Chao Hu Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning International Journal of Electrical Power & Energy Systems DC microgrids Multi-agent system Deep reinforcement learning Secondary control |
title | Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning |
title_full | Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning |
title_fullStr | Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning |
title_full_unstemmed | Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning |
title_short | Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning |
title_sort | distributed secondary control for dc microgrids using two stage multi agent reinforcement learning |
topic | DC microgrids Multi-agent system Deep reinforcement learning Secondary control |
url | http://www.sciencedirect.com/science/article/pii/S0142061524005581 |
work_keys_str_mv | AT feili distributedsecondarycontrolfordcmicrogridsusingtwostagemultiagentreinforcementlearning AT weifeitu distributedsecondarycontrolfordcmicrogridsusingtwostagemultiagentreinforcementlearning AT yunzhou distributedsecondarycontrolfordcmicrogridsusingtwostagemultiagentreinforcementlearning AT hengli distributedsecondarycontrolfordcmicrogridsusingtwostagemultiagentreinforcementlearning AT fengzhou distributedsecondarycontrolfordcmicrogridsusingtwostagemultiagentreinforcementlearning AT weirongliu distributedsecondarycontrolfordcmicrogridsusingtwostagemultiagentreinforcementlearning AT chaohu distributedsecondarycontrolfordcmicrogridsusingtwostagemultiagentreinforcementlearning |