Gradient Amplification: An Efficient Way to Train Deep Neural Networks
Improving the performance of deep learning models and reducing their training times are ongoing challenges in deep neural networks. There are several approaches proposed to address these challenges, one of which is to increase the depth of the neural networks. Such deeper networks not only increase training...
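The abstract is cut off in this record, but the title names gradient amplification, i.e., scaling gradients during backpropagation so that earlier layers of a deep network receive stronger updates. The sketch below is a minimal NumPy illustration of that general idea only; the amplification factor `amp`, the choice of which layer to amplify, and the toy two-layer network are all assumptions for illustration, not the authors' published method.

```python
import numpy as np

# Toy 2-layer network: gradients of earlier layers tend to shrink
# as depth grows (the vanishing-gradient problem).
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 4))
W2 = rng.normal(scale=0.1, size=(4, 1))

def forward(x):
    h = np.tanh(x @ W1)
    return h, h @ W2

def gradients(x, target):
    h, y = forward(x)
    dy = y - target                    # dL/dy for L = 0.5*||y - target||^2
    dW2 = h.T @ dy
    dh = dy @ W2.T
    dW1 = x.T @ (dh * (1 - h**2))      # tanh'(z) = 1 - tanh(z)^2
    return dW1, dW2

x = rng.normal(size=(8, 4))
target = rng.normal(size=(8, 1))
lr, amp = 0.01, 5.0                    # amp > 1 boosts the early layer (assumed value)

before = 0.5 * np.sum((forward(x)[1] - target) ** 2)   # loss before the step
dW1, dW2 = gradients(x, target)
W1 -= lr * (amp * dW1)                 # amplified update for the earlier layer
W2 -= lr * dW2                         # ordinary update for the last layer
after = 0.5 * np.sum((forward(x)[1] - target) ** 2)    # loss after the step
```

In this sketch, only the update of `W1` is rescaled, which is one simple way to counteract small early-layer gradients without changing the learning rate globally.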
Main Authors: Sunitha Basodi, Chunyan Ji, Haiping Zhang, Yi Pan
Format: Article
Language: English
Published: Tsinghua University Press, 2020-09-01
Series: Big Data Mining and Analytics
Online Access: https://www.sciopen.com/article/10.26599/BDMA.2020.9020004
Similar Items
- Enhancing classification efficiency in capsule networks through windowed routing: tackling gradient vanishing, dynamic routing, and computational complexity challenges
  by: Gangqi Chen, et al. Published: (2024-11-01)
- On Quantum Natural Policy Gradients
  by: Andre Sequeira, et al. Published: (2024-01-01)
- Comparison of the efficiency of zero and first order minimization methods in neural networks
  by: E. A. Gubareva, et al. Published: (2022-12-01)
- G-UNETR++: A Gradient-Enhanced Network for Accurate and Robust Liver Segmentation from Computed Tomography Images
  by: Seungyoo Lee, et al. Published: (2025-01-01)
- An Airborne Gravity Gradient Compensation Method Based on Convolutional and Long Short-Term Memory Neural Networks
  by: Shuai Zhou, et al. Published: (2025-01-01)