Layer ensemble averaging for fault tolerance in memristive neural networks


Bibliographic Details
Main Authors: Osama Yousuf, Brian D. Hoskins, Karthick Ramu, Mitchell Fream, William A. Borders, Advait Madhavan, Matthew W. Daniels, Andrew Dienstfrey, Jabez J. McClelland, Martin Lueker-Boden, Gina C. Adam
Format: Article
Language: English
Published: Nature Portfolio 2025-02-01
Series: Nature Communications
Online Access: https://doi.org/10.1038/s41467-025-56319-6
Description
Summary: Artificial neural networks have advanced due to scaling dimensions, but conventional computing struggles with inefficiencies due to memory bottlenecks. In-memory computing architectures using memristor devices offer promise but face challenges due to hardware non-idealities. This work proposes layer ensemble averaging—a hardware-oriented fault tolerance scheme for improving inference performance of non-ideal memristive neural networks programmed with pre-trained solutions. Simulations on an image classification task and hardware experiments on a continual learning problem with a custom 20,000-device prototyping platform show significant performance gains, outperforming prior methods at similar redundancy levels and overheads. For the image classification task with 20% stuck-at faults, accuracy improves from 40% to 89.6% (within 5% of baseline), and for the continual learning problem, accuracy improves from 55% to 71% (within 1% of baseline). The proposed scheme is broadly applicable to accelerators based on a variety of different non-volatile device technologies.
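The core idea described in the abstract—programming redundant copies of a layer onto faulty memristive arrays and averaging their outputs—can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the fault model (devices stuck at the extremes of a normalized conductance range), the layer sizes, and the redundancy level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_stuck_at_faults(w, fault_rate, rng):
    """Simulate stuck-at faults: a random fraction of devices is pinned
    to an extreme of the (normalized) conductance range, here -1 or +1.
    This fault model is an assumption for illustration."""
    faulty = w.copy()
    mask = rng.random(w.shape) < fault_rate
    stuck_vals = rng.choice([-1.0, 1.0], size=w.shape)
    faulty[mask] = stuck_vals[mask]
    return faulty

# Ideal pre-trained layer weights and an input batch (toy sizes).
w_ideal = 0.1 * rng.standard_normal((64, 32))
x = rng.standard_normal((16, 64))
y_ideal = x @ w_ideal

fault_rate = 0.20  # 20% stuck-at faults, matching the paper's image task
n_copies = 4       # redundancy level (illustrative choice)

# Layer ensemble averaging: program n independently faulty copies of the
# layer and average their outputs at inference time. Because each copy's
# fault pattern is independent, the fault-induced errors partially cancel.
copies = [apply_stuck_at_faults(w_ideal, fault_rate, rng)
          for _ in range(n_copies)]
y_single = x @ copies[0]
y_ensemble = np.mean([x @ w for w in copies], axis=0)

err_single = np.linalg.norm(y_single - y_ideal)
err_ensemble = np.linalg.norm(y_ensemble - y_ideal)
print(f"single-copy error:  {err_single:.2f}")
print(f"ensemble error:     {err_ensemble:.2f}")
```

With independent fault realizations per copy, averaging n copies reduces the variance of the fault-induced output error by roughly a factor of n, which is the intuition behind the accuracy recovery reported in the abstract (e.g. 40% back to 89.6% at 20% stuck-at faults).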
ISSN: 2041-1723