Hybrid Online and Offline Reinforcement Learning for Tibetan Jiu Chess
In this study, hybrid state-action-reward-state-action (SARSA(λ)) and Q-learning algorithms are applied to different stages of upper confidence bound applied to trees (UCT) search for Tibetan Jiu chess. Q-learning is also used to update all nodes on the search path when each game ends. A learning stra...
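The abstract's combination of on-policy SARSA(λ) updates during play with an off-policy Q-learning backup over the search path at game end can be sketched in tabular form. This is a minimal illustrative sketch, not the paper's implementation: the hyperparameters, the `sarsa_lambda_step` / `q_learning_backup` names, and the simplified end-of-game backup rule are all assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical hyperparameters (not taken from the paper).
ALPHA, GAMMA, LAM = 0.1, 0.9, 0.8

Q = defaultdict(float)  # Q[(state, action)] -> action value
E = defaultdict(float)  # eligibility traces for SARSA(lambda)

def sarsa_lambda_step(s, a, r, s2, a2):
    """On-policy SARSA(lambda) update applied during the game."""
    delta = r + GAMMA * Q[(s2, a2)] - Q[(s, a)]
    E[(s, a)] += 1.0  # accumulating trace
    for key in list(E):
        Q[key] += ALPHA * delta * E[key]
        E[key] *= GAMMA * LAM  # decay all traces

def q_learning_backup(path, final_reward):
    """Simplified off-policy backup over the stored search path when the
    game ends: propagate the terminal reward backward along the path."""
    target = final_reward
    for s, a in reversed(path):
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        target = GAMMA * Q[(s, a)]  # discounted bootstrap for earlier nodes
```

In a UCT setting, `sarsa_lambda_step` would be invoked as moves are played out, while `q_learning_backup` would run once per finished game over the nodes visited on the search path, as the abstract describes.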
| Main Authors: | Xiali Li, Zhengyu Lv, Licheng Wu, Yue Zhao, Xiaona Xu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2020-01-01 |
| Series: | Complexity |
| Online Access: | http://dx.doi.org/10.1155/2020/4708075 |
Similar Items
- Scheduling framework based on reinforcement learning in online-offline colocated cloud environment
  by: Ling MA, et al.
  Published: (2023-06-01)
- Liz Przybylski: Hybrid Ethnography: Online, Offline, and In Between
  by: Mehmet Özgün Özkul
  Published: (2025-02-01)
- Online and Offline Hybrid Teaching System Based on Virtual Reality
  by: Hou Yu, et al.
  Published: (2024-09-01)
- ‘Chess studies’ for String Quartet: Composition based on chess
  by: Alberto Hortigüela, et al.
  Published: (2025-07-01)
- Forecasting: Analyze Online and Offline Learning Mode with Machine Learning Algorithms
  by: Farida Ardiani, et al.
  Published: (2023-02-01)