Compressing fully connected layers of deep neural networks using permuted features

Abstract: Modern deep neural networks typically have some fully connected (FC) layers at the final classification stages. These stages have large memory requirements that can be expensive on resource-constrained embedded devices and also consume significant energy just to read the parameters from external memory into the processing chip. The authors show that the weights in such layers can be modelled as permutations of a common sequence with minimal impact on recognition accuracy. This allows the storage requirements of the FC layer(s) to be significantly reduced, cutting total network parameters by factors of 1.3× to 36× (median 4.45×) on several benchmark networks. The authors compare these results with existing pruning, bitwidth-reduction, and deep-compression techniques and show that their method achieves superior compression. They also demonstrate a 7× reduction in parameters on the VGG16 architecture with the ImageNet dataset, and show that the proposed method can be used in the classification stage of transfer-learning networks.
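
The abstract's central idea is to store a single shared sequence of weight values and represent each FC row only by the permutation that rearranges that sequence. The NumPy sketch below illustrates one plausible reading of that idea, under assumptions of our own (the shared sequence is taken as the mean of the sorted rows, and each permutation is stored as the row's rank order); it is not the authors' published algorithm, and the names compress_fc and decompress_fc are hypothetical.

    # Illustrative sketch only: approximate every row of an FC weight matrix
    # as a permutation of one shared sequence. The construction below is an
    # assumption for illustration, not the authors' algorithm.
    import numpy as np

    def compress_fc(W):
        """Return one shared ascending sequence plus a rank-order permutation per row."""
        shared = np.sort(W, axis=1).mean(axis=0)           # common sequence shared by all rows
        ranks = np.argsort(np.argsort(W, axis=1), axis=1)  # rank of each weight within its row
        return shared, ranks

    def decompress_fc(shared, ranks):
        """Rebuild the approximate matrix: weight j of row i becomes shared[rank]."""
        return shared[ranks]

    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 1024)).astype(np.float32)    # a hypothetical FC layer
    shared, ranks = compress_fc(W)
    W_hat = decompress_fc(shared, ranks)
    print("mean |W - W_hat|:", float(np.abs(W - W_hat).mean()))

    # Storage intuition: instead of 256 x 1024 float32 weights, this stores one
    # float32 sequence of 1024 values plus 256 x 1024 permutation indices, each
    # needing only log2(1024) = 10 bits rather than 32.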

Bibliographic Details
Main Authors: Dara Nagaraju, Nitin Chandrachoodan (Electrical Engineering, Indian Institute of Technology Madras, Chennai, India)
Format: Article
Language: English
Published: Wiley, 2023-07-01
Series: IET Computers & Digital Techniques, Vol. 17, Iss. 3-4, pp. 149-161
ISSN: 1751-8601, 1751-861X
Subjects: neural nets; optimisation
Online Access: https://doi.org/10.1049/cdt2.12060