Tailored Channel Pruning: Achieve Targeted Model Complexity Through Adaptive Sparsity Regularization
In deep learning, the size and complexity of neural networks have increased rapidly in pursuit of higher performance. However, this poses a challenge when such networks are deployed in resource-limited environments, such as mobile devices, particularly when trying to preserve the network’s performance.
Main Authors: | Suwoong Lee, Yunho Jeon, Seungjae Lee, Junmo Kim |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Convolutional neural networks (CNN), efficient deep learning, non-convex sparsity regularization, structured pruning |
Online Access: | https://ieeexplore.ieee.org/document/10840184/ |
author | Suwoong Lee Yunho Jeon Seungjae Lee Junmo Kim |
collection | DOAJ |
description | In deep learning, the size and complexity of neural networks have increased rapidly in pursuit of higher performance. However, this poses a challenge when such networks are deployed in resource-limited environments, such as mobile devices, particularly when trying to preserve the network’s performance. To address this problem, structured pruning has been widely studied, as it effectively shrinks the network with little impact on performance. To get the most out of a model under limited resources, it is crucial to 1) utilize all available resources and 2) maximize performance within those limits. However, existing pruning methods often require repeated cycles of training and pruning, or many experiments to find hyperparameters that satisfy a given budget, or they forcibly truncate parameters to meet the budget, resulting in performance loss. To solve this problem, we propose a novel channel pruning method called Tailored Channel Pruning. Given a target budget (e.g., FLOPs and parameters), our method outputs a tailored network that automatically takes the budget into account during training and satisfies the target budget. During the integrated training and pruning process, our method adaptively controls sparsity regularization and selects important weights that help maximize accuracy within the target budget. Through various experiments on the CIFAR-10 and ImageNet datasets, we demonstrate the effectiveness of the proposed method and achieve state-of-the-art accuracy after pruning. |
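The abstract describes two coupled mechanisms: adaptively controlling the sparsity-regularization strength so training drifts toward a target budget, and selecting important channels under that budget. The paper's exact rules are not reproduced in this record, so the following is a minimal illustrative sketch only, not the authors' algorithm: channel importance is proxied by the magnitude of batch-norm scale factors, channels are kept greedily until a FLOPs budget is exhausted, and the regularization weight `lam` is nudged up or down depending on whether the current model is over or under budget. The function names, the greedy rule, and the multiplicative update are all assumptions for illustration.

```python
import numpy as np

def select_channels(gammas, flops_per_channel, flops_budget):
    """Greedy budget-aware channel selection (illustrative sketch).

    gammas: per-layer arrays of BN scale factors (importance proxy)
    flops_per_channel: per-layer arrays of each channel's FLOPs cost
    flops_budget: total FLOPs allowed for the pruned network
    Returns per-layer boolean keep-masks and the FLOPs actually used.
    """
    # Rank every channel globally by |gamma|, most important first.
    items = []
    for li, (g, c) in enumerate(zip(gammas, flops_per_channel)):
        for ci in range(len(g)):
            items.append((abs(float(g[ci])), float(c[ci]), li, ci))
    items.sort(reverse=True)

    masks = [np.zeros(len(g), dtype=bool) for g in gammas]
    used = 0.0
    for imp, cost, li, ci in items:
        if used + cost <= flops_budget:  # keep a channel only if it fits
            masks[li][ci] = True
            used += cost
    return masks, used

def adapt_lambda(current_flops, target_flops, lam, rate=0.05):
    """Nudge the sparsity-regularization weight toward the budget:
    more pressure when over budget, less when already under it."""
    return lam * (1 + rate) if current_flops > target_flops else lam * (1 - rate)

# Toy example: two layers, a 40-FLOPs budget.
gammas = [np.array([0.9, 0.1, 0.5]), np.array([0.8, 0.05])]
costs = [np.array([10.0, 10.0, 10.0]), np.array([20.0, 20.0])]
masks, used = select_channels(gammas, costs, flops_budget=40.0)
```

In the actual method, the selection and the regularization schedule interact throughout integrated training and pruning; this sketch only shows the budget accounting that both mechanisms serve.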
format | Article |
id | doaj-art-be23d3ee8eca49af9ff2c1c01347213e |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
doi | 10.1109/ACCESS.2025.3529465 |
volume | 13 |
pages | 12113-12126 |
article number | 10840184 |
author details | Suwoong Lee, Electronics and Telecommunications Research Institute (ETRI), Daejeon, Republic of Korea; Yunho Jeon (ORCID: 0000-0001-8043-480X), Department of Artificial Intelligence Software, Hanbat National University, Daejeon, Republic of Korea; Seungjae Lee, Electronics and Telecommunications Research Institute (ETRI), Daejeon, Republic of Korea; Junmo Kim (ORCID: 0000-0002-7174-7932), Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea |
title | Tailored Channel Pruning: Achieve Targeted Model Complexity Through Adaptive Sparsity Regularization |
topic | Convolutional neural networks (CNN) efficient deep learning non-convex sparsity regularization structured pruning |
url | https://ieeexplore.ieee.org/document/10840184/ |