A comparison of computing-in-memory with non-volatile memory types and SRAM in DNN training


Bibliographic Details
Main Authors: Shuai Zhou, Yanfeng Jiang
Format: Article
Language:English
Published: AIP Publishing LLC 2025-03-01
Series:AIP Advances
Online Access:http://dx.doi.org/10.1063/9.0000891
Description
Summary:In recent years, as Deep Neural Networks (DNNs) have been widely used in various artificial intelligence (AI) applications, the demands for energy efficiency and computational speed have continuously increased. Computing-in-Memory (CIM), a potential solution, can significantly reduce both the energy consumption and the delay caused by data transmission. In this paper, the application of spintronic-device-based CIM to DNN training is explored. A CIM architecture using spintronic devices can efficiently perform the computational tasks of neural networks at the memory level. A comparison is conducted against training based on SRAM, RRAM, and FeFET for a standard DNN training task at the same inference accuracy.
ISSN:2158-3226