Unit-Centric Regularization for Efficient Deep Neural Networks
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11087585/ |
| Summary: | Deep neural networks excel by learning hierarchical representations, but often require architectural enhancements such as increased width, normalization layers, or skip connections, each adding complexity and computational cost. This paper proposes Jumpstart, a novel regularization technique that enables the use of simpler architectures by promoting efficient utilization of both network units and data points. The method penalizes units that become inactive (dead) or operate strictly in the linear regime, as well as data points whose activations within a layer are uniformly zero or strictly positive. This strategy enables the training of plain ReLU networks without relying on overparameterization, specialized initialization, normalization layers, or architectural modifications such as skip connections. As a result, it promotes more efficient use of units and data, maintaining performance while avoiding waste of computational resources during both training and inference. On the ImageNet benchmark, it matches the top-1 accuracy of a standard ResNet50 with Batch Normalization and skip connections. On UCI tabular datasets, it consistently outperforms batch normalization and often surpasses residual connections. The method is evaluated using four global metrics: Dead Units, Linear Units, Trainability, and Convergence. Jumpstart significantly reduces the presence of dead and linear units (0.07 and 0.12, respectively), outperforming most baselines, and achieves superior trainability (1.0) and convergence (-0.03). These results demonstrate that simpler, regularized networks can maintain competitive accuracy while significantly lowering architectural complexity and computational burden. Jumpstart offers a sustainable and effective alternative to conventional deep learning design strategies, facilitating efficient training without compromising performance. |
| ISSN: | 2169-3536 |
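The summary describes a penalty on two activation patterns of a ReLU layer: units that are dead (active for no inputs) or strictly linear (active for all inputs), and data points whose activations across the layer are uniformly zero or uniformly positive. The paper's own formulation is not reproduced in this record, so the following is only a minimal sketch of one plausible reading; the function name `jumpstart_like_penalty`, the activation-rate "band" formulation, and the `margin` parameter are all assumptions, not the authors' code.

```python
import numpy as np

def jumpstart_like_penalty(z, margin=0.05):
    """Hypothetical sketch of a unit- and data-centric activity penalty.

    z: (batch, units) pre-activations of one ReLU layer.
    Penalizes units whose activation rate over the batch is near 0 (dead)
    or near 1 (strictly linear), and data points whose activations within
    the layer are uniformly zero or uniformly positive.
    """
    active = (z > 0).astype(float)
    unit_rate = active.mean(axis=0)   # fraction of inputs activating each unit
    data_rate = active.mean(axis=1)   # fraction of units active for each input

    def band_violation(rate):
        # distance by which a rate falls outside the (margin, 1 - margin) band
        return np.maximum(margin - rate, 0.0) + np.maximum(rate - (1.0 - margin), 0.0)

    return band_violation(unit_rate).mean() + band_violation(data_rate).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 32))
z[:, 0] = -1.0   # simulate a dead unit: never active
z[:, 1] = 1.0    # simulate a linear unit: always active
print(jumpstart_like_penalty(z))
```

In a real training loop such a term would be computed from each layer's pre-activations and added, with a weighting coefficient, to the task loss; a layer whose units all fire on roughly half the batch incurs zero penalty under this sketch.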