A Sliding‐Kernel Computation‐In‐Memory Architecture for Convolutional Neural Network

Bibliographic Details
Main Authors: Yushen Hu, Xinying Xie, Tengteng Lei, Runxiao Shi, Man Wong
Format: Article
Language: English
Published: Wiley 2024-12-01
Series: Advanced Science
Online Access: https://doi.org/10.1002/advs.202407440
Description
Summary: Presently described is a sliding‐kernel computation‐in‐memory (SKCIM) architecture conceptually involving two overlapping layers of functional arrays: one contains memory elements and artificial synapses for neuromorphic computation; the other stores and slides convolutional kernel matrices. A low‐temperature metal‐oxide thin‐film transistor (TFT) technology capable of monolithically integrating single‐gate TFTs, dual‐gate TFTs, and memory capacitors is deployed to construct a physical SKCIM system. A 32 × 32 SKCIM system, applied to common convolution tasks, exhibits an 88% reduction in memory‐access operations compared to state‐of‐the‐art systems. In a more involved demonstration, a 5‐layer, SKCIM‐based convolutional neural network classifies the Modified National Institute of Standards and Technology (MNIST) dataset of handwritten numerals with an accuracy of over 95%.
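For orientation only: the sliding‐kernel operation the abstract refers to is, at the functional level, the standard 2D convolution used in CNN layers, in which a small kernel matrix slides across an input and produces a weighted sum at each position. The sketch below is a minimal pure‐Python illustration of that generic operation (valid padding, stride 1); it is not a model of the paper's in‐memory TFT implementation, and all names in it are illustrative.

```python
def conv2d(image, kernel):
    """Generic sliding-kernel 2D convolution (cross-correlation),
    'valid' padding, stride 1. Illustrative only."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):          # slide kernel vertically
        row = []
        for j in range(w - kw + 1):      # slide kernel horizontally
            # weighted sum of the image patch under the kernel
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# Example: a 4x4 all-ones input with a 3x3 all-ones kernel
# yields a 2x2 output where every entry is 9.
result = conv2d([[1] * 4 for _ in range(4)],
                [[1] * 3 for _ in range(3)])
```

In a computation‐in‐memory realization, the inner weighted sum is evaluated in place by the memory/synapse array rather than by fetching operands per multiply, which is where the reported reduction in memory‐access operations comes from.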
ISSN: 2198-3844