Enhancing BVR Air Combat Agent Development With Attention-Driven Reinforcement Learning
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10966908/ |
| Summary: | This study explores the use of Reinforcement Learning (RL) to develop autonomous agents for Beyond Visual Range (BVR) air combat, addressing the challenges of dynamic and uncertain adversarial scenarios. We propose a novel approach that introduces a task-based layer, leveraging domain expertise to optimize decision-making and training efficiency. By integrating multi-head attention mechanisms into the policy model and employing an improved DQN algorithm, agents dynamically select context-aware tasks, enabling the learning of efficient emergent behaviors for variable engagement conditions. Evaluations in single- and multi-agent BVR scenarios against adversaries with diverse tactical characteristics demonstrate superior training efficiency and enhanced agent capabilities compared to leading RL algorithms commonly applied in similar domains, including PPO, DDPG, and SAC. A robustness study underscores the critical role of diverse enemy selection in the RL process, showing that adversaries with variable tactical behaviors are essential for developing robust agents. This work advances RL methodologies for autonomous BVR air combat and provides insights applicable to other problems with challenging adversarial scenarios. |
| ISSN: | 2169-3536 |
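The summary describes a policy that applies multi-head attention over the observed situation and uses a DQN-style head to select a context-aware task. The record does not give the actual architecture, task set, or feature layout, so the following is only a minimal illustrative sketch: the task names, dimensions, and pooling choice are assumptions, not the authors' design.

```python
import numpy as np

# Hypothetical task set -- the paper's task-based layer is not specified
# in this record; these names are illustrative only.
TASKS = ["pursue", "crank", "evade", "launch"]

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(entities, Wq, Wk, Wv, n_heads):
    """Pool a variable-length set of entity features (n, d) with
    multi-head self-attention, then mean-pool to one context vector."""
    n, d = entities.shape
    dh = d // n_heads
    q, k, v = entities @ Wq, entities @ Wk, entities @ Wv  # each (n, d)
    # Split into heads: (n_heads, n, dh).
    q, k, v = (m.reshape(n, n_heads, dh).transpose(1, 0, 2) for m in (q, k, v))
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))  # (h, n, n)
    out = (scores @ v).transpose(1, 0, 2).reshape(n, d)       # (n, d)
    return out.mean(axis=0)  # permutation-invariant pooling over entities

def q_values(entities, params):
    """DQN-style head: one Q-value per discrete task."""
    ctx = multi_head_attention(entities, params["Wq"], params["Wk"],
                               params["Wv"], n_heads=2)
    return ctx @ params["Wout"]

# Random, untrained weights purely for demonstration.
rng = np.random.default_rng(0)
d = 8
params = {k: rng.normal(size=(d, d)) * 0.1 for k in ("Wq", "Wk", "Wv")}
params["Wout"] = rng.normal(size=(d, len(TASKS))) * 0.1

# Three observed entities (e.g. ownship, enemy, inbound missile) --
# the feature encoding here is hypothetical.
obs = rng.normal(size=(3, d))
task = TASKS[int(np.argmax(q_values(obs, params)))]
```

Because the entity set is pooled with attention, the Q-values are invariant to the order in which entities are observed and accept a variable number of them, which is one common motivation for attention in such policies; the greedy `argmax` stands in for the trained DQN's task selection.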