A low functional redundancy-based network slimming method for accelerating deep neural networks
Deep neural networks (DNNs) have been widely criticized for their large parameter counts and computational demands, which hinder deployment on edge and embedded devices. To reduce the floating-point operations (FLOPs) required to run DNNs and to accelerate inference, we start from model pruning, an...
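The abstract is truncated, so the paper's specific low-functional-redundancy criterion is not reproduced here. As context, below is a minimal sketch of the classic network-slimming idea (Liu et al., 2017), which prunes convolutional channels whose BatchNorm scale factors are small; the `prune_ratio` value and helper name are illustrative assumptions, not the authors' method.

```python
# Sketch of BatchNorm-scale-based channel pruning ("network slimming",
# Liu et al., 2017). Illustration only; the paper's low-functional-redundancy
# criterion is NOT implemented here. prune_ratio is an assumed example value.
import torch
import torch.nn as nn

def bn_channel_masks(model: nn.Module, prune_ratio: float = 0.5):
    """Return a {BatchNorm2d module: bool mask} dict keeping the largest |gamma| channels."""
    # Gather the absolute BN scale factors (gamma) across the whole model.
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    # Global threshold: the prune_ratio quantile of all scale factors.
    threshold = torch.quantile(gammas, prune_ratio)
    masks = {}
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            masks[m] = m.weight.data.abs() > threshold  # True = keep channel
    return masks

if __name__ == "__main__":
    # Tiny example network; the masks mark which channels a slimming pass would keep.
    net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
    for bn, mask in bn_channel_masks(net, prune_ratio=0.5).items():
        print(f"keep {int(mask.sum())}/{mask.numel()} channels")
```

In practice, the kept channels are copied into a narrower network, which is then fine-tuned; the FLOPs saving comes from the reduced channel widths.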
| Main Authors: | Zheng Fang, Bo Yin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-04-01 |
| Series: | Alexandria Engineering Journal |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1110016824017162 |
Similar Items

- Convolutional Neural Network Compression via Dynamic Parameter Rank Pruning
  by: Manish Sharma, et al.
  Published: (2025-01-01)
- Tilted-Mode All-Optical Diffractive Deep Neural Networks
  by: Mingzhu Song, et al.
  Published: (2024-12-01)
- A deep neural network for general scattering matrix
  by: Jing Yongxin, et al.
  Published: (2023-04-01)
- Water quality assessment for aquaculture using deep neural network
  by: Rajeshwarrao Arabelli, et al.
  Published: (2025-01-01)
- Patient-Specific Detection of Atrial Fibrillation in Segments of ECG Signals using Deep Neural Networks
  by: Jeyson A. Castillo, et al.
  Published: (2019-11-01)