A study of value iteration and policy iteration for Markov decision processes in deterministic systems
Main Authors:
Format: Article
Language: English
Published: AIMS Press, 2024-11-01
Series: AIMS Mathematics
Subjects:
Online Access: https://www.aimspress.com/article/doi/10.3934/math.20241613
Summary: In the context of deterministic discrete-time control systems, we examined the implementation of the value iteration (VI) and policy iteration (PI) algorithms for Markov decision processes (MDPs) on Borel spaces. The deterministic nature of the system's transfer function plays a pivotal role, since the convergence criteria of these algorithms are closely tied to the properties of the probability function governing state transitions. For VI, convergence hinges on verifying that the cost difference function stabilizes at a constant $ k $, ensuring uniformity across iterations. In contrast, PI converges when the value function keeps consistent values over successive iterations. Finally, a detailed example demonstrates the conditions under which convergence of the algorithms is achieved, underscoring the practicality of these methods in deterministic settings.
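The convergence criteria described in the summary can be sketched on a toy problem. The following is a minimal illustration, not the paper's construction: it uses a small finite deterministic MDP with a discounted cost (the paper works in Borel spaces), and the states, costs, and discount factor below are invented for the example. VI stops once successive value functions stabilize; PI stops once the policy is unchanged by an improvement step.

```python
import numpy as np

# Toy deterministic MDP (illustrative assumptions, not from the paper):
# 3 states, 2 actions; next_state[s, a] is the deterministic transfer
# function and cost[s, a] the one-stage cost; gamma is a discount factor.
n_states, n_actions = 3, 2
next_state = np.array([[1, 2], [2, 0], [0, 1]])
cost = np.array([[1.0, 4.0], [0.0, 2.0], [3.0, 1.0]])
gamma = 0.9

def value_iteration(tol=1e-8, max_iter=10_000):
    V = np.zeros(n_states)
    for _ in range(max_iter):
        Q = cost + gamma * V[next_state]   # Q[s, a], deterministic transition
        V_new = Q.min(axis=1)
        # Stop when successive value functions stabilize
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

def policy_iteration():
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = c_pi exactly
        P = np.zeros((n_states, n_states))
        P[np.arange(n_states), next_state[np.arange(n_states), policy]] = 1.0
        c = cost[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P, c)
        # Policy improvement: PI has converged when the policy is unchanged
        new_policy = (cost + gamma * V[next_state]).argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

V_vi = value_iteration()
pi, V_pi = policy_iteration()
print(np.allclose(V_vi, V_pi, atol=1e-6))  # both methods agree on V*
```

In this discounted finite setting both algorithms recover the same optimal value function, which is the behavior the paper's convergence conditions guarantee in the more general Borel-space framework.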
ISSN: 2473-6988