Layer ensemble averaging for fault tolerance in memristive neural networks
Abstract Artificial neural networks have advanced due to scaling dimensions, but conventional computing struggles with inefficiencies due to memory bottlenecks. In-memory computing architectures using memristor devices offer promise but face challenges due to hardware non-idealities. This work proposes layer ensemble averaging, a hardware-oriented fault tolerance scheme for improving inference performance of non-ideal memristive neural networks programmed with pre-trained solutions. Simulations on an image classification task and hardware experiments on a continual learning problem with a custom 20,000-device prototyping platform show significant performance gains, outperforming prior methods at similar redundancy levels and overheads. For the image classification task with 20% stuck-at faults, accuracy improves from 40% to 89.6% (within 5% of baseline), and for the continual learning problem, accuracy improves from 55% to 71% (within 1% of baseline). The proposed scheme is broadly applicable to accelerators based on a variety of different non-volatile device technologies.
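The abstract's core idea, averaging the outputs of redundant layer copies so that independent stuck-at faults partially cancel, can be illustrated with a small numpy toy. This is a hedged sketch under simplified assumptions (stuck-at-zero faults, four identical redundant crossbars, a linear layer), not the authors' implementation or experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def program_with_faults(w, fault_rate, rng):
    """Write weights to a simulated crossbar where a random fraction of
    devices are faulty (modeled here as stuck at zero conductance)."""
    faulty = w.copy()
    stuck = rng.random(w.shape) < fault_rate
    faulty[stuck] = 0.0
    return faulty

w = rng.standard_normal((64, 64))  # pre-trained weights for one layer
# Program the same layer onto 4 redundant crossbars, each with an
# independent 20% stuck-at fault pattern, and average their outputs.
copies = [program_with_faults(w, 0.20, rng) for _ in range(4)]
x = rng.standard_normal(64)

y_true = w @ x
y_single = copies[0] @ x                               # one faulty crossbar
y_ensemble = np.mean([c @ x for c in copies], axis=0)  # layer ensemble average

err_single = np.mean((y_single - y_true) ** 2)
err_ensemble = np.mean((y_ensemble - y_true) ** 2)
print(f"single-copy output MSE: {err_single:.3f}")
print(f"ensemble output MSE:    {err_ensemble:.3f}")
```

Because each device is faulty in only some of the redundant copies, the averaged output has lower error variance than any single faulty copy, which is the intuition behind the accuracy recovery the paper reports.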
Main Authors: | Osama Yousuf, Brian D. Hoskins, Karthick Ramu, Mitchell Fream, William A. Borders, Advait Madhavan, Matthew W. Daniels, Andrew Dienstfrey, Jabez J. McClelland, Martin Lueker-Boden, Gina C. Adam |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2025-02-01 |
Series: | Nature Communications |
Online Access: | https://doi.org/10.1038/s41467-025-56319-6 |
author | Osama Yousuf; Brian D. Hoskins; Karthick Ramu; Mitchell Fream; William A. Borders; Advait Madhavan; Matthew W. Daniels; Andrew Dienstfrey; Jabez J. McClelland; Martin Lueker-Boden; Gina C. Adam |
collection | DOAJ |
description | Abstract Artificial neural networks have advanced due to scaling dimensions, but conventional computing struggles with inefficiencies due to memory bottlenecks. In-memory computing architectures using memristor devices offer promise but face challenges due to hardware non-idealities. This work proposes layer ensemble averaging—a hardware-oriented fault tolerance scheme for improving inference performance of non-ideal memristive neural networks programmed with pre-trained solutions. Simulations on an image classification task and hardware experiments on a continual learning problem with a custom 20,000-device prototyping platform show significant performance gains, outperforming prior methods at similar redundancy levels and overheads. For the image classification task with 20% stuck-at faults, accuracy improves from 40% to 89.6% (within 5% of baseline), and for the continual learning problem, accuracy improves from 55% to 71% (within 1% of baseline). The proposed scheme is broadly applicable to accelerators based on a variety of different non-volatile device technologies. |
format | Article |
id | doaj-art-5526ceb43c08439daed25c6cf936a847 |
institution | Kabale University |
issn | 2041-1723 |
language | English |
publishDate | 2025-02-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Nature Communications |
affiliations | Osama Yousuf, Gina C. Adam: Department of Electrical and Computer Engineering, George Washington University; Brian D. Hoskins, Karthick Ramu, Mitchell Fream, William A. Borders, Advait Madhavan, Matthew W. Daniels, Andrew Dienstfrey, Jabez J. McClelland: National Institute of Standards and Technology; Martin Lueker-Boden: Western Digital Technologies |
title | Layer ensemble averaging for fault tolerance in memristive neural networks |
url | https://doi.org/10.1038/s41467-025-56319-6 |