Harnessing distributed GPU computing for generalizable graph convolutional networks in power grid reliability assessments
Although machine learning (ML) has emerged as a powerful tool for rapidly assessing grid contingencies, prior studies have largely considered a static grid topology in their analyses. This limits their application, since the models must be re-trained for every new topology. This paper explores the development of generalizable graph convolutional network (GCN) models by pre-training them across a wide range of grid topologies and contingency types. We found that a GCN model with auto-regressive moving average (ARMA) layers and a line graph representation of the grid offered the best performance in predicting voltage magnitudes (VM) and voltage angles (VA). We introduced the concept of phantom nodes to accommodate disparate grid topologies with varying numbers of nodes and lines. For pre-training the GCN ARMA model across a variety of topologies, distributed graphics processing unit (GPU) computing afforded significant training scalability. The predictive performance of this model on grid topologies that were part of the training data is substantially better than the direct current (DC) approximation. Although direct application of the pre-trained model to topologies that were not part of the training data is not particularly satisfactory, fine-tuning with small amounts of data from a specific topology of interest significantly improves predictive performance. In the context of foundational models in ML, this paper highlights the feasibility of training large-scale graph neural network (GNN) models to assess the reliability of power grids by considering a wide variety of grid topologies and contingency types.
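As a rough illustration of the modeling approach the abstract describes, the sketch below builds a line graph of a small grid (transmission lines become nodes, connected whenever they share a bus), pads it with phantom nodes to a fixed size, and stacks ARMA graph-convolution layers to predict two quantities per node (e.g., VM and VA). This is a hypothetical sketch, not the authors' code: the `ARMAConv` layer is PyTorch Geometric's implementation, but the helper names, feature dimensions, `MAX_NODES` bound, and hyperparameters are placeholders.

```python
# Hypothetical sketch (not the authors' implementation) of the approach in the
# abstract: line-graph representation, phantom-node padding, ARMA-layer GCN.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import ARMAConv

MAX_NODES = 256  # assumed upper bound on line-graph nodes across all topologies


def line_graph(lines, line_feats):
    """Build the line graph: each transmission line becomes a node, and two
    line-nodes are connected whenever the corresponding lines share a bus."""
    src, dst = [], []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            if set(lines[i]) & set(lines[j]):  # lines share a bus
                src += [i, j]
                dst += [j, i]
    return Data(x=line_feats, edge_index=torch.tensor([src, dst], dtype=torch.long))


def pad_with_phantom_nodes(data, max_nodes=MAX_NODES):
    """Append isolated, zero-feature phantom nodes so every topology has the same
    number of nodes; a boolean mask excludes them from the loss."""
    num_real = data.x.size(0)
    data.x = torch.cat([data.x, torch.zeros(max_nodes - num_real, data.x.size(1))])
    data.mask = torch.arange(max_nodes) < num_real
    data.num_nodes = max_nodes
    return data


class ArmaGCN(torch.nn.Module):
    """Two ARMA graph-convolution layers followed by a linear head that outputs
    two values per node (e.g., VM and VA)."""

    def __init__(self, in_dim, hidden=64, out_dim=2):
        super().__init__()
        self.conv1 = ARMAConv(in_dim, hidden, num_stacks=2, num_layers=2)
        self.conv2 = ARMAConv(hidden, hidden, num_stacks=2, num_layers=2)
        self.head = torch.nn.Linear(hidden, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.head(x)


# Toy usage: a 4-bus ring with 4 lines and 3 placeholder features per line.
grid = pad_with_phantom_nodes(line_graph([(0, 1), (1, 2), (2, 3), (3, 0)],
                                         torch.randn(4, 3)))
model = ArmaGCN(in_dim=3)
vm_va = model(grid.x, grid.edge_index)[grid.mask]  # predictions for real nodes only
```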
Main Authors: | Somayajulu L.N. Dhulipala, Nicholas Casaprima, Audrey Olivier, Bjorn C. Vaagensmith, Timothy R. McJunkin, Ryan C. Hruska |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2025-01-01 |
Series: | Energy and AI |
Subjects: | Complex systems; Graph neural networks; Power grids; Grid reliability; Generalizable models |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2666546825000035 |
_version_ | 1832585045586477056 |
---|---|
author | Somayajulu L.N. Dhulipala; Nicholas Casaprima; Audrey Olivier; Bjorn C. Vaagensmith; Timothy R. McJunkin; Ryan C. Hruska |
author_sort | Somayajulu L.N. Dhulipala |
collection | DOAJ |
description | Although machine learning (ML) has emerged as a powerful tool for rapidly assessing grid contingencies, prior studies have largely considered a static grid topology in their analyses. This limits their application, since the models must be re-trained for every new topology. This paper explores the development of generalizable graph convolutional network (GCN) models by pre-training them across a wide range of grid topologies and contingency types. We found that a GCN model with auto-regressive moving average (ARMA) layers and a line graph representation of the grid offered the best performance in predicting voltage magnitudes (VM) and voltage angles (VA). We introduced the concept of phantom nodes to accommodate disparate grid topologies with varying numbers of nodes and lines. For pre-training the GCN ARMA model across a variety of topologies, distributed graphics processing unit (GPU) computing afforded significant training scalability. The predictive performance of this model on grid topologies that were part of the training data is substantially better than the direct current (DC) approximation. Although direct application of the pre-trained model to topologies that were not part of the training data is not particularly satisfactory, fine-tuning with small amounts of data from a specific topology of interest significantly improves predictive performance. In the context of foundational models in ML, this paper highlights the feasibility of training large-scale graph neural network (GNN) models to assess the reliability of power grids by considering a wide variety of grid topologies and contingency types. |
format | Article |
id | doaj-art-1024b8e928724561a87bf7e506e60c8e |
institution | Kabale University |
issn | 2666-5468 |
language | English |
publishDate | 2025-01-01 |
publisher | Elsevier |
record_format | Article |
series | Energy and AI |
spelling | Energy and AI, vol. 19 (2025-01-01), article 100471, Elsevier, ISSN 2666-5468. Author affiliations: Somayajulu L.N. Dhulipala (Nuclear Science & Technology, Idaho National Laboratory, Idaho Falls, ID 83415, USA; Civil & Environmental Engineering, Idaho State University, Pocatello, ID 83209, USA; corresponding author at Nuclear Science & Technology, Idaho National Laboratory, Idaho Falls, ID 83415, USA); Nicholas Casaprima (Sonny Astani Department of Civil & Environmental Engineering, University of Southern California, Los Angeles, CA 90089, USA); Audrey Olivier (Sonny Astani Department of Civil & Environmental Engineering, University of Southern California, Los Angeles, CA 90089, USA); Bjorn C. Vaagensmith (National & Homeland Security, Idaho National Laboratory, Idaho Falls, ID 83415, USA); Timothy R. McJunkin (Energy and Environmental Science & Technology, Idaho National Laboratory, Idaho Falls, ID 83415, USA); Ryan C. Hruska (National & Homeland Security, Idaho National Laboratory, Idaho Falls, ID 83415, USA) |
title | Harnessing distributed GPU computing for generalizable graph convolutional networks in power grid reliability assessments |
topic | Complex systems; Graph neural networks; Power grids; Grid reliability; Generalizable models |
url | http://www.sciencedirect.com/science/article/pii/S2666546825000035 |
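The record's abstract also credits distributed GPU computing with making pre-training across many topologies tractable, and reports that fine-tuning on small amounts of data from a new topology substantially improves accuracy. The sketch below is a minimal, hypothetical rendering of that workflow using PyTorch's DistributedDataParallel; the dataset objects, `batch.y` labels, batch sizes, learning rates, and epoch counts are assumptions rather than details from the paper, and it is intended to be launched with `torchrun`.

```python
# Hypothetical sketch (not the authors' training code): multi-GPU pre-training of
# the ARMA GCN with DistributedDataParallel, then single-GPU fine-tuning on a
# small dataset from one topology of interest.
import os
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DistributedSampler
from torch_geometric.loader import DataLoader


def pretrain(model, dataset, epochs=50, lr=1e-3):
    """Shard the (topology, contingency) samples across GPUs and train jointly."""
    dist.init_process_group("nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    ddp_model = DDP(model.cuda(), device_ids=[local_rank])
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    opt = torch.optim.Adam(ddp_model.parameters(), lr=lr)
    for epoch in range(epochs):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for batch in loader:
            batch = batch.cuda()
            pred = ddp_model(batch.x, batch.edge_index)
            loss = F.mse_loss(pred[batch.mask], batch.y[batch.mask])  # skip phantom nodes
            opt.zero_grad()
            loss.backward()
            opt.step()
    dist.destroy_process_group()
    return model  # underlying module, now pre-trained


def fine_tune(model, small_dataset, epochs=20, lr=1e-4):
    """Adapt the pre-trained model to an unseen topology with a small dataset."""
    model = model.cuda()
    loader = DataLoader(small_dataset, batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in loader:
            batch = batch.cuda()
            pred = model(batch.x, batch.edge_index)
            loss = F.mse_loss(pred[batch.mask], batch.y[batch.mask])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```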