LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition

Visual place recognition (VPR) is the ability to recognize locations in a physical environment based only on visual inputs. It is a challenging task due to perceptual aliasing, viewpoint and appearance variations, and the complexity of dynamic scenes. Despite promising demonstrations, many state-of-the-art (SOTA) VPR approaches based on artificial neural networks (ANNs) suffer from computational inefficiency. Spiking neural networks (SNNs) implemented on neuromorphic hardware, in contrast, are reported to have remarkable potential for computationally more efficient solutions. Still, training SOTA SNNs for VPR is often intractable on large and diverse datasets, and they typically demonstrate poor real-time operation performance. To address these shortcomings, we developed an end-to-end convolutional SNN model for VPR that leverages backpropagation for tractable training. Rate-based approximations of leaky integrate-and-fire (LIF) neurons are employed during training and replaced with spiking LIF neurons during inference. The proposed method significantly outperforms existing SOTA SNNs on challenging datasets such as Nordland and Oxford RobotCar, achieving 78.6% precision at 100% recall on Nordland (vs. 73.0% for the current SOTA) and 45.7% on Oxford RobotCar (vs. 20.2% for the current SOTA). Our approach offers a simpler training pipeline while yielding significant improvements in both training and inference times over SOTA SNNs for VPR. Hardware-in-the-loop tests using Intel's neuromorphic USB form factor, Kapoho Bay, show that our on-chip spiking models for VPR, trained via the ANN-to-SNN conversion strategy, continue to outperform their SNN counterparts while offering significant energy efficiency, despite a slight but noticeable performance drop when moving from off-chip to on-chip. These results highlight the rapid prototyping and real-world deployment capabilities of this approach, marking a substantial step toward more prevalent SNN-based robotics solutions.

Bibliographic Details
Main Authors: Ugur Akcal, Ivan Georgiev Raikov, Ekaterina Dmitrievna Gribkova, Anwesa Choudhuri, Seung Hyun Kim, Mattia Gazzola, Rhanor Gillette, Ivan Soltesz, Girish Chowdhary
Format: Article
Language:English
Published: Frontiers Media S.A. 2025-01-01
Series:Frontiers in Neurorobotics
Subjects: spiking neural networks, robotics, visual place recognition, localization, supervised learning, convolutional networks
Online Access:https://www.frontiersin.org/articles/10.3389/fnbot.2024.1490267/full
author Ugur Akcal
Ivan Georgiev Raikov
Ekaterina Dmitrievna Gribkova
Anwesa Choudhuri
Seung Hyun Kim
Mattia Gazzola
Rhanor Gillette
Ivan Soltesz
Girish Chowdhary
author_facet Ugur Akcal
Ivan Georgiev Raikov
Ekaterina Dmitrievna Gribkova
Anwesa Choudhuri
Seung Hyun Kim
Mattia Gazzola
Rhanor Gillette
Ivan Soltesz
Girish Chowdhary
author_sort Ugur Akcal
collection DOAJ
description Visual place recognition (VPR) is the ability to recognize locations in a physical environment based only on visual inputs. It is a challenging task due to perceptual aliasing, viewpoint and appearance variations, and the complexity of dynamic scenes. Despite promising demonstrations, many state-of-the-art (SOTA) VPR approaches based on artificial neural networks (ANNs) suffer from computational inefficiency. Spiking neural networks (SNNs) implemented on neuromorphic hardware, in contrast, are reported to have remarkable potential for computationally more efficient solutions. Still, training SOTA SNNs for VPR is often intractable on large and diverse datasets, and they typically demonstrate poor real-time operation performance. To address these shortcomings, we developed an end-to-end convolutional SNN model for VPR that leverages backpropagation for tractable training. Rate-based approximations of leaky integrate-and-fire (LIF) neurons are employed during training and replaced with spiking LIF neurons during inference. The proposed method significantly outperforms existing SOTA SNNs on challenging datasets such as Nordland and Oxford RobotCar, achieving 78.6% precision at 100% recall on Nordland (vs. 73.0% for the current SOTA) and 45.7% on Oxford RobotCar (vs. 20.2% for the current SOTA). Our approach offers a simpler training pipeline while yielding significant improvements in both training and inference times over SOTA SNNs for VPR. Hardware-in-the-loop tests using Intel's neuromorphic USB form factor, Kapoho Bay, show that our on-chip spiking models for VPR, trained via the ANN-to-SNN conversion strategy, continue to outperform their SNN counterparts while offering significant energy efficiency, despite a slight but noticeable performance drop when moving from off-chip to on-chip. These results highlight the rapid prototyping and real-world deployment capabilities of this approach, marking a substantial step toward more prevalent SNN-based robotics solutions.
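The core training idea summarized in the abstract — train with a differentiable, rate-based approximation of the LIF neuron, then swap in spiking LIF neurons at inference — can be illustrated with a minimal sketch. This is not the authors' implementation; the parameter values (`tau`, `theta`, `dt`) and function names are hypothetical, and it only checks that, for a constant input current, a simulated spiking LIF neuron's firing rate tracks the analytic rate curve a rate-based surrogate would use:

```python
import numpy as np

def lif_rate(I, tau=0.02, theta=1.0):
    """Analytic firing rate of a LIF neuron driven by a constant current I
    (no refractory period): 1 / (tau * ln(I / (I - theta))) above threshold,
    0 below. A smooth rate function like this is the kind of surrogate that
    makes backpropagation-based training tractable."""
    I = np.asarray(I, dtype=float)
    safe = np.where(I > theta, I, theta + 1e-9)   # keep the log argument valid
    rate = 1.0 / (tau * np.log(safe / (safe - theta)))
    return np.where(I > theta, rate, 0.0)

def lif_spike_rate(I, T=2.0, dt=1e-4, tau=0.02, theta=1.0):
    """Empirical firing rate of a forward-Euler simulated spiking LIF neuron
    (the inference-time replacement): integrate, fire at threshold, hard reset."""
    v, n_spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (I - v) / tau       # leaky integration toward the input I
        if v >= theta:                # threshold crossing -> emit a spike
            n_spikes += 1
            v = 0.0                   # hard reset after the spike
    return n_spikes / T

# For supra-threshold constant inputs, the spiking simulation's rate
# closely tracks the analytic rate used during training.
for I in (1.5, 2.0, 3.0):
    print(f"I={I}: analytic {float(lif_rate(I)):.1f} Hz, "
          f"simulated {lif_spike_rate(I):.1f} Hz")
```

With a sufficiently small `dt`, the empirical spike rate matches the analytic curve to within a few percent; this close agreement between the rate model and its spiking counterpart is the property an ANN-to-SNN conversion strategy relies on.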
format Article
id doaj-art-86016cff9c574c6d9464dd2a8a36b3ff
institution Kabale University
issn 1662-5218
language English
publishDate 2025-01-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Neurorobotics
spelling doaj-art-86016cff9c574c6d9464dd2a8a36b3ff 2025-01-29T06:45:52Z
eng; Frontiers Media S.A.; Frontiers in Neurorobotics; ISSN 1662-5218; 2025-01-01; vol. 18; doi:10.3389/fnbot.2024.1490267 (article 1490267)
LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition
Ugur Akcal (Department of Aerospace Engineering; Siebel School of Computing and Data Science; Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, IL, United States)
Ivan Georgiev Raikov (Department of Neurosurgery, Stanford University, Stanford, CA, United States)
Ekaterina Dmitrievna Gribkova (Coordinated Science Laboratory; Neuroscience Program, Center for Artificial Intelligence Innovation, University of Illinois Urbana-Champaign, Urbana, IL, United States)
Anwesa Choudhuri (Coordinated Science Laboratory; Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, United States)
Seung Hyun Kim (Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL, United States)
Mattia Gazzola (Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL, United States)
Rhanor Gillette (Neuroscience Program, Center for Artificial Intelligence Innovation; Department of Molecular and Integrative Physiology, University of Illinois Urbana-Champaign, Urbana, IL, United States)
Ivan Soltesz (Department of Neurosurgery, Stanford University, Stanford, CA, United States)
Girish Chowdhary (Siebel School of Computing and Data Science; Coordinated Science Laboratory; College of Agriculture and Consumer Economics, Department of Agricultural and Biological Engineering, University of Illinois Urbana-Champaign, Urbana, IL, United States)
https://www.frontiersin.org/articles/10.3389/fnbot.2024.1490267/full
Keywords: spiking neural networks; robotics; visual place recognition; localization; supervised learning; convolutional networks
spellingShingle Ugur Akcal
Ivan Georgiev Raikov
Ekaterina Dmitrievna Gribkova
Anwesa Choudhuri
Seung Hyun Kim
Mattia Gazzola
Rhanor Gillette
Ivan Soltesz
Girish Chowdhary
LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition
Frontiers in Neurorobotics
spiking neural networks
robotics
visual place recognition
localization
supervised learning
convolutional networks
title LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition
title_full LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition
title_fullStr LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition
title_full_unstemmed LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition
title_short LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition
title_sort locs net localizing convolutional spiking neural network for fast visual place recognition
topic spiking neural networks
robotics
visual place recognition
localization
supervised learning
convolutional networks
url https://www.frontiersin.org/articles/10.3389/fnbot.2024.1490267/full
work_keys_str_mv AT ugurakcal locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT ivangeorgievraikov locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT ekaterinadmitrievnagribkova locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT anwesachoudhuri locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT seunghyunkim locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT mattiagazzola locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT rhanorgillette locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT ivansoltesz locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition
AT girishchowdhary locsnetlocalizingconvolutionalspikingneuralnetworkforfastvisualplacerecognition