ARM-IRL: Adaptive Resilience Metric Quantification Using Inverse Reinforcement Learning


Bibliographic Details
Main Authors: Abhijeet Sahu, Venkatesh Venkatramanan, Richard Macwan
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: AI
Online Access: https://www.mdpi.com/2673-2688/6/5/103
Description
Summary: <b>Background/Objectives:</b> The resilience of safety-critical systems is gaining importance due to the rise in cyber and physical threats, especially within critical infrastructure. Traditional static resilience metrics may not capture dynamic system states, leading to inaccurate assessments and ineffective responses to cyber threats. This work aims to develop a data-driven, adaptive method for resilience metric learning. <b>Methods:</b> We propose a data-driven approach using inverse reinforcement learning (IRL) to learn a single, adaptive resilience metric. The method infers a reward function from expert control actions. Unlike previous approaches using static weights or fuzzy logic, this work applies adversarial inverse reinforcement learning (AIRL), training a generator and discriminator in parallel to learn the reward structure and derive an optimal policy. <b>Results:</b> The proposed approach is evaluated on multiple scenarios: optimal communication network rerouting, power distribution network reconfiguration, and cyber–physical restoration of critical loads using the IEEE 123-bus system. <b>Conclusions:</b> The adaptive, learned resilience metric enables faster critical load restoration in comparison to conventional RL approaches.
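The abstract's core mechanism, recovering a reward signal from an adversarially trained discriminator, can be illustrated with a minimal numeric sketch. This is not the paper's implementation: it only shows the standard AIRL identity in which the discriminator combines a learned potential f(s, a) with the generator policy's action probability pi(a|s), and the reward used to train the policy reduces analytically to f - log pi. The function names and values below are hypothetical.

```python
import numpy as np

# Toy sketch of the AIRL discriminator/reward relationship (hypothetical
# names; the paper applies this idea to grid-resilience control scenarios).

def discriminator(f_logit, log_pi):
    """AIRL discriminator: D = exp(f) / (exp(f) + pi(a|s))."""
    return np.exp(f_logit) / (np.exp(f_logit) + np.exp(log_pi))

def recovered_reward(f_logit, log_pi):
    """Reward handed to the generator: r = log D - log(1 - D)."""
    d = discriminator(f_logit, log_pi)
    return np.log(d) - np.log(1.0 - d)

# Algebraically, log D - log(1 - D) simplifies to f - log pi, so the
# discriminator's learned potential acts as the adaptive reward/metric.
f, log_pi = 0.7, np.log(0.3)
print(np.isclose(recovered_reward(f, log_pi), f - log_pi))  # → True
```

In the AIRL setup the generator (policy) is trained against this recovered reward while the discriminator is updated to separate expert transitions from generated ones, which is the parallel training loop the Methods section refers to.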
ISSN:2673-2688