TCL: Time-Dependent Clustering Loss for Optimizing Post-Training Feature Map Quantization for Partitioned DNNs


Bibliographic Details
Main Authors: Oscar Artur Bernd Berg, Eiraj Saqib, Axel Jantsch, Irida Shallari, Silvia Krug, Isaac Sanchez Leal, Mattias O'Nils
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/11031457/
Description
Summary: This paper introduces an enhanced approach for deploying deep learning models on resource-constrained IoT devices by combining model partitioning, autoencoder-based compression, quantization with Time-Dependent Clustering Loss (TCL) regularization, and lossless compression to reduce communication overhead and latency while maintaining accuracy. The autoencoder compresses feature maps at the partitioning point before quantization, effectively reducing data size while preserving accuracy. TCL regularization clusters activations at the partitioning point to align with quantization levels, minimizing quantization error and preserving accuracy even under extreme low-bitwidth quantization. The method is evaluated on classification models (ResNet-50, EfficientNetV2-S) and an object detection model (YOLOv10n) using the TinyImageNet-200 and Pascal VOC datasets. Deployed on a Raspberry Pi 4 B and a GPU, each model is tested across various partitioning points, quantization bit-widths (1-bit, 2-bit, and 3-bit), communication data rates (1 MB/s to 10 MB/s), and LZMA lossless compression. For ResNet-50 partitioned after the convolutional stem block, the speed-up is 2.33× against a server solution and 1.85× against the all-in-node solution, with a minimal accuracy drop of less than one percentage point. The proposed framework offers a scalable solution for deploying high-performance AI models on IoT devices, extending the feasibility of real-time inference in resource-constrained environments.
ISSN:2169-3536
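
The clustering regularizer described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function name, the quantization levels, and the fixed `weight` argument are my own assumptions (the paper's loss weight is time-dependent, i.e. ramped over training).

```python
import numpy as np

# Illustrative sketch of a clustering-style regularizer: penalize each
# activation's distance to its nearest quantization level, so activations
# at the partitioning point cluster around the levels before quantization.
# In the paper the loss weight varies with training time; here it is fixed.
def clustering_loss(features, levels, weight=1.0):
    d = np.abs(features[..., None] - levels)  # distance to every level
    return weight * d.min(axis=-1).mean()     # mean nearest-level distance

# 2-bit quantization: four representative levels (assumed values)
levels = np.array([-1.0, -0.33, 0.33, 1.0])
x = np.array([[0.9, -0.3], [0.0, 1.2]])       # toy feature-map activations
loss = clustering_loss(x, levels)             # small when activations cluster
```

Minimizing this term alongside the task loss pulls activations toward the quantization grid, which is why accuracy survives even 1- to 3-bit feature-map quantization.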