Biasing Exploration towards Positive Error for Efficient Reinforcement Learning
Efficient exploration remains a critical challenge in Reinforcement Learning (RL), significantly affecting sample efficiency. This paper demonstrates that biasing exploration towards state-action pairs with positive temporal difference error speeds up convergence and, in some challenging environmen...
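The abstract describes steering exploration toward state-action pairs whose temporal difference (TD) error is positive. As an illustration only, and not the paper's actual algorithm, the idea can be sketched in tabular Q-learning on a toy chain MDP: remember the most recent positive TD error per state-action pair and, on exploratory steps, sample actions with probability proportional to that stored error. The environment, hyperparameters, and function names below are all assumptions for the sketch.

```python
import random
from collections import defaultdict

# Hypothetical sketch (not the paper's method): tabular Q-learning where
# exploratory actions are drawn in proportion to the most recent positive
# TD error seen for each (state, action) pair.

N_STATES, ACTIONS = 5, [0, 1]       # chain of 5 states; 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # illustrative hyperparameters

def step(s, a):
    """Chain MDP: reward 1.0 only for moving right from the last state."""
    if a == 1:
        if s == N_STATES - 1:
            return 0, 1.0           # reach the goal, reset to the start
        return s + 1, 0.0
    return max(s - 1, 0), 0.0

def biased_explore(s, pos_err):
    """Sample an exploratory action, biased toward positive TD error."""
    weights = [pos_err[(s, a)] + 1e-6 for a in ACTIONS]  # small floor keeps all actions possible
    return random.choices(ACTIONS, weights=weights)[0]

def train(steps=4000, seed=0):
    random.seed(seed)
    q = defaultdict(float)
    pos_err = defaultdict(float)    # last positive TD error per (s, a)
    s = 0
    for _ in range(steps):
        if random.random() < EPS:
            a = biased_explore(s, pos_err)
        else:
            best = max(q[(s, b)] for b in ACTIONS)
            a = random.choice([b for b in ACTIONS if q[(s, b)] == best])
        s2, r = step(s, a)
        td = r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)]
        pos_err[(s, a)] = max(td, 0.0)   # keep only the positive part
        q[(s, a)] += ALPHA * td
        s = s2
    return q

q = train()
```

After training, the rightward action should dominate the learned Q-values along the chain, since exploratory steps are increasingly concentrated where the TD error has recently been positive.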
Saved in:
| Main Authors: | Adam Parker, John Sheppard |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/138835 |
Similar Items
- Adaptive Intelligent Reflecting Surfaces for Enhanced Wireless Communication via Multi-Agent Deep Reinforcement Learning
  by: Sakhshra Monga, et al.
  Published: (2025-01-01)
- Neural Network-Based Bandit: A Medium Access Control for the IIoT Alarm Scenario
  by: Prasoon Raghuwanshi, et al.
  Published: (2024-01-01)
- Exploiting full-duplex opportunities in WLANs via a reinforcement learning-based medium access control protocol
  by: Song Liu, et al.
  Published: (2024-12-01)
- Deep reinforcement learning applications and prospects in industrial scenarios
  by: Jing Tan, et al.
  Published: (2025-04-01)
- Efficient and assured reinforcement learning-based building HVAC control with heterogeneous expert-guided training
  by: Shichao Xu, et al.
  Published: (2025-03-01)