Reinforcement Learning for Fail-Operational Systems with Disentangled Dual-Skill Variables

Bibliographic Details
Main Authors: Taewoo Kim, Shiho Kim
Format: Article
Language: English
Published: MDPI AG 2025-04-01
Series: Technologies
Online Access: https://www.mdpi.com/2227-7080/13/4/156
Description
Summary: We present a novel approach to reinforcement learning (RL) specifically designed for fail-operational systems in critical safety applications. Our technique incorporates disentangled skill variables, significantly enhancing the resilience of conventional RL frameworks against mechanical failures and unforeseen environmental changes. This innovation arises from the imperative need for RL mechanisms to sustain uninterrupted and dependable operation, even in the face of abrupt malfunctions. Our research highlights the system’s ability to swiftly adjust and reformulate its strategy in response to sudden disruptions, maintaining operational integrity and completing its tasks without compromising safety. The system’s capacity for an immediate, secure reaction is vital, especially in scenarios where halting operations could escalate risk. We examine the system’s adaptability across a range of mechanical failure scenarios, demonstrating its effectiveness in maintaining safety and functionality in unpredictable situations. Our research represents a significant advancement in the safety and performance of RL systems, paving the way for their deployment in safety-critical environments.
ISSN: 2227-7080
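
A minimal sketch of the idea the abstract describes, assuming a skill-conditioned policy pi(a | s, z) with a discrete skill variable z: when a mechanical fault is detected, the agent re-selects z under the degraded dynamics instead of halting the task. The toy dynamics, linear policies, and rollout-based skill selection below are illustrative stand-ins chosen for this sketch, not the authors' implementation.

# Hypothetical sketch (not the paper's code): a skill-conditioned policy
# pi(a | s, z); on a detected actuator fault, the skill z is re-selected
# by scoring short rollouts under the degraded dynamics.
import numpy as np

rng = np.random.default_rng(0)
N_SKILLS, STATE_DIM, ACTION_DIM = 4, 3, 2

# Stand-in for a trained skill-conditioned policy: one linear map per skill.
policy_weights = rng.normal(size=(N_SKILLS, ACTION_DIM, STATE_DIM))

def policy(state, skill):
    """Action from the skill-conditioned policy pi(a | s, z)."""
    return np.tanh(policy_weights[skill] @ state)

def step(state, action, failed_actuator=None):
    """Toy dynamics; a mechanical failure zeroes one actuator channel."""
    if failed_actuator is not None:
        action = action.copy()
        action[failed_actuator] = 0.0
    next_state = 0.9 * state
    next_state[:ACTION_DIM] += 0.1 * action
    reward = -np.linalg.norm(next_state)  # drive the state toward the origin
    return next_state, reward

def evaluate_skill(skill, state, failed_actuator, horizon=20):
    """Return of a short rollout of skill z under the degraded dynamics."""
    total, s = 0.0, state.copy()
    for _ in range(horizon):
        s, r = step(s, policy(s, skill), failed_actuator)
        total += r
    return total

# Nominal operation with skill 0, then an actuator fault at t = 25.
state, skill, failed = rng.normal(size=STATE_DIM), 0, None
for t in range(50):
    if t == 25:  # fault detected: actuator 1 stops responding
        failed = 1
        # Fail-operational reaction: switch to the skill that performs
        # best under the degraded dynamics rather than stopping.
        skill = max(range(N_SKILLS),
                    key=lambda z: evaluate_skill(z, state, failed))
        print(f"t={t}: actuator {failed} failed, switched to skill {skill}")
    state, _ = step(state, policy(state, skill), failed)
print("final state norm:", np.linalg.norm(state))

In this toy version, "disentanglement" is only implied by the skills having independent parameters; the paper's contribution concerns how such skill variables are learned and structured, which this sketch does not reproduce.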