Enhancing Distributed Machine Learning through Data Shuffling: Techniques, Challenges, and Implications
In distributed machine learning, data shuffling is a crucial preprocessing step that significantly impacts the efficiency and performance of model training. As distributed machine learning scales across multiple computing nodes, the ability to shuffle data effectively and efficiently has b...
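For reference (not drawn from the article itself), below is a minimal Python sketch of the common per-epoch shuffling pattern in data-parallel training: every worker derives the same global permutation from a shared seed, then takes a disjoint shard of it. The function and parameter names (`shard_indices`, `world_size`, `rank`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def shard_indices(num_samples: int, world_size: int, rank: int,
                  epoch: int, seed: int = 0) -> np.ndarray:
    """Return this worker's shard of a globally shuffled index permutation.

    Every worker seeds the RNG with the same (seed + epoch), so all workers
    agree on the permutation; each then takes a disjoint strided slice.
    This is the conceptual behavior of samplers such as PyTorch's
    DistributedSampler, sketched here without any framework dependency.
    """
    rng = np.random.default_rng(seed + epoch)  # fresh shuffle each epoch
    perm = rng.permutation(num_samples)        # identical order on all workers
    return perm[rank::world_size]              # disjoint per-worker shard

# Example: 8 samples split across 2 workers at epoch 0.
for rank in range(2):
    print(f"worker {rank}: {shard_indices(8, world_size=2, rank=rank, epoch=0)}")
```

Reseeding with `seed + epoch` keeps all workers consistent with one another while still reordering the data every epoch, which is the property the abstract credits with affecting training efficiency and performance.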
Saved in:
| Main Author: | Zhang Zikai |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | EDP Sciences, 2025-01-01 |
| Series: | ITM Web of Conferences |
| Online Access: | https://www.itm-conferences.org/articles/itmconf/pdf/2025/04/itmconf_iwadi2024_03018.pdf |
Similar Items
- Evolutionary Novelty in a Butterfly Wing Pattern through Enhancer Shuffling.
  by: Richard W R Wallbank, et al.
  Published: (2016-01-01)
- Advancements in Coded Computation: Integrating Encoding Matrices with Data Shuffling for Enhanced Data Transmission Efficiency
  by: Yuan Shijie
  Published: (2025-01-01)
- Enhancing Missense Variant Pathogenicity Prediction with MissenseNet: Integrating Structural Insights and ShuffleNet-Based Deep Learning Techniques
  by: Jing Liu, et al.
  Published: (2024-09-01)
- Privacy-Preserving SGD on Shuffle Model
  by: Lingjie Zhang, et al.
  Published: (2023-01-01)
- Shuffle Model of Differential Privacy: Numerical Composition for Federated Learning
  by: Shaowei Wang, et al.
  Published: (2025-02-01)