Robust fully distributed file caching for delay-tolerant networks: A reward-based incentive mechanism


Bibliographic Details
Main Authors: Sidi Ahmed Ezzahidi, Essaid Sabir, Sara Koulali, El-Houssine Bouyakhf
Format: Article
Language: English
Published: Wiley 2017-04-01
Series: International Journal of Distributed Sensor Networks
Online Access: https://doi.org/10.1177/1550147717700149
Description
Summary: This article presents a reward-based incentive mechanism for file caching in delay-tolerant networks. In delay-tolerant networks, nodes rely on the store-carry-forward paradigm to reach the final destination: relay nodes may store data in their buffers and carry it until an appropriate contact opportunity with the destination arises. However, relays are not always willing to assist in data forwarding because of limited energy or low storage capacity. Our proposal introduces a reward mechanism to uphold and sustain cooperation among relay nodes. We model this distributed network interaction as a non-cooperative game: the source node offers the relay nodes a positive reward if they agree to cache and successfully forward a given file to a target destination, whereas each relay node may either accept or reject the source's offer, depending on the attractiveness of the reward and on its battery status (its actual energy level). Next, full characterizations of both pure and mixed Nash equilibria are provided. We then propose three fully distributed algorithms that ensure convergence to the Nash equilibria (both pure and mixed). Finally, we validate our proposal through extensive numerical examples and learning simulations, and draw some conclusions and insightful remarks.
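The accept/reject interaction described in the summary can be illustrated with a minimal sketch. This is not the paper's exact model: the payoff form (an equal split of the source's reward `R` among accepting relays, minus an energy-dependent cost per relay), the function names, and the convergence loop are all illustrative assumptions. It shows how simple best-response iteration can settle on a pure-strategy Nash equilibrium in which only the relays with favorable battery status accept the deal.

```python
# Hypothetical sketch of the relays' accept/reject game (payoff form and
# names are assumptions, not the article's exact model). Each relay accepts
# the source's offer if its expected share of the reward R, split among all
# accepting relays, exceeds its energy-dependent caching/forwarding cost.

def best_response_dynamics(R, costs, max_iters=100):
    """Iterate best responses until no relay wants to change its decision."""
    n = len(costs)
    accept = [False] * n                        # current strategy profile
    for _ in range(max_iters):
        changed = False
        for i in range(n):
            k_others = sum(accept) - accept[i]  # other accepting relays
            # payoff if relay i accepts: share of R minus its own cost
            utility_accept = R / (k_others + 1) - costs[i]
            wants_accept = utility_accept > 0
            if wants_accept != accept[i]:
                accept[i] = wants_accept
                changed = True
        if not changed:                         # pure Nash equilibrium reached
            return accept
    return accept

# Relays with low cost (e.g. high battery) accept; costly ones opt out.
profile = best_response_dynamics(R=10.0, costs=[1.0, 2.0, 6.0, 12.0])
print(profile)  # → [True, True, False, False]
```

In this toy instance the two cheapest relays accept (each earns 10/2 minus its cost, which is positive), while the two costly relays decline because even a full share of the reward would not cover their cost once two others participate.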
ISSN: 1550-1477