Approximate CNN Hardware Accelerators for Resource Constrained Devices

Bibliographic Details
Main Authors: P. Thejaswini, Gautham Suresh, V. Chiraag, Sukumar Nandi
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Online Access:https://ieeexplore.ieee.org/document/10840189/
Description
Summary: Implementation of Convolutional Neural Networks (CNNs) on edge devices requires a reduction in computational complexity. Leveraging optimization techniques or approximate computing techniques can reduce the overhead associated with hardware implementation. In this paper, we propose a modular pipelined Feedforward CNN Hardware Accelerator (FHA) and a novel Approximate Feedforward CNN Hardware Accelerator (AFHA). The AFHA design is achieved through the incorporation of hardware pruning and Approximate Multiply Accumulate (AMAC) units. Our proposed architectures are validated for functionality through an image classification application, utilising the popular MNIST dataset for 8-bit, 16-bit and 32-bit operational word sizes. Performance analysis of our proposed architectures shows that the 32-bit FHA consumes 307.04 pJ of energy while achieving an acceleration of 76.91x. AFHA attains an acceleration of 120.69x with an energy consumption of 295.85 pJ. Similarly, the 16-bit and 8-bit architectures demonstrate substantial acceleration while significantly reducing power consumption. Our proposed architectures also demonstrate significant acceleration and reduced power consumption compared to the popular edge machine learning framework TinyML TensorFlow Lite: FHA achieves an accuracy improvement of 6.2% along with a speedup of 1.07x, and AFHA achieves an accuracy enhancement of 4.3% and a speedup of 1.42x.
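
The abstract does not describe the internal design of the AMAC unit. As a hedged illustration of the general idea behind an approximate multiply-accumulate, the C sketch below uses operand truncation, a common approximate computing technique in which the low-order bits of both operands are dropped so a narrower multiplier can be used. The function name, the TRUNC_BITS parameter and the sample data are hypothetical and are not taken from the paper.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Number of low-order operand bits dropped before the multiply.
 * Hypothetical parameter, not taken from the paper. */
#define TRUNC_BITS 4

/* One common approximate-MAC scheme: truncate both operands so the
 * multiplier can be narrower, then accumulate the approximate product.
 * The paper's AMAC unit may use a different approximation. */
static int32_t approx_mac(int32_t acc, int16_t a, int16_t b)
{
    int32_t at = ((int32_t)a >> TRUNC_BITS) << TRUNC_BITS; /* clear low bits of a */
    int32_t bt = ((int32_t)b >> TRUNC_BITS) << TRUNC_BITS; /* clear low bits of b */
    return acc + at * bt;
}

int main(void)
{
    /* Illustrative 16-bit activations and weights (made-up values). */
    int16_t x[4] = {113, -57, 201, 42};
    int16_t w[4] = {19, 88, -3, 127};
    int32_t exact = 0, approx = 0;

    for (int i = 0; i < 4; i++) {
        exact += (int32_t)x[i] * (int32_t)w[i];   /* exact MAC for comparison */
        approx = approx_mac(approx, x[i], w[i]);  /* approximate MAC */
    }

    printf("exact dot product  = %" PRId32 "\n", exact);
    printf("approx dot product = %" PRId32 "\n", approx);
    return 0;
}
```

Increasing TRUNC_BITS shrinks the effective multiplier width at the cost of larger numerical error, which mirrors the accuracy-versus-energy/speedup trade-off the abstract reports between the FHA and AFHA designs.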
ISSN:2169-3536