Network Analysis on the Symmetric Coordination in a Reinforcement-Learning-Based Minority Game
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Entropy |
| Subjects: | |
| Online Access: | https://www.mdpi.com/1099-4300/27/7/676 |
| Summary: | The Minority Game (MG) is a paradigmatic model in econophysics, widely used to study inductive reasoning and self-organization in multi-agent systems. Traditionally, coordinated phases in the MG are associated with spontaneous symmetry breaking, where agents differentiate into polarized roles. Recent work shows that policy-based reinforcement learning can give rise to a new form of symmetric coordination—one achieved without role segregation or strategy specialization. In this study, we thoroughly analyze this novel coordination using tools from complex networks. By constructing the correlation networks among agents, we carry out a structural, functional, and temporal analysis of the emergent symmetric coordination. Our results confirm the preservation of symmetry at the collective level, and reveal a consistent and robust form of distributed coordination, demonstrating the power of network-based approaches in understanding the emergent order in adaptive multi-agent systems. |
|---|---|
| ISSN: | 1099-4300 |
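
The summary describes building correlation networks among agents and analyzing their structure. The following is a minimal, hypothetical sketch of that general approach, not the authors' pipeline: it assumes each agent's per-round action is recorded as a ±1 time series, and here the actions are drawn at random as a stand-in for policies learned by reinforcement learning. The agent count, round count, and correlation threshold are illustrative assumptions.

```python
import numpy as np
import networkx as nx

# Illustrative setup: N agents play T rounds of a Minority Game,
# each round choosing +1 or -1. Random choices stand in for the
# actions an RL policy would actually produce.
rng = np.random.default_rng(seed=0)
N, T = 50, 2000
actions = rng.choice([-1, 1], size=(N, T))  # actions[i, t] = choice of agent i in round t

# Pairwise Pearson correlations between agents' action time series.
C = np.corrcoef(actions)

# Assumed construction of a correlation network: link agents whose
# absolute correlation exceeds an (arbitrary, illustrative) threshold.
threshold = 0.05
G = nx.Graph()
G.add_nodes_from(range(N))
for i in range(N):
    for j in range(i + 1, N):
        if abs(C[i, j]) >= threshold:
            G.add_edge(i, j, weight=C[i, j])

# Simple structural summaries of the resulting coordination network.
print("edges:", G.number_of_edges())
print("mean degree:", 2 * G.number_of_edges() / N)
print("mean clustering:", nx.average_clustering(G))
```

With learned (rather than random) actions, the same construction could be repeated over sliding time windows to examine how the network's structure evolves, in the spirit of the temporal analysis the summary mentions.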