Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video
The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semisupervised method that leverages unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public and a real-world video sequence database show the gain brought by the different stages of the method.
Main Authors: | Vladislavs Dovgalecs, Rémi Mégret, Yannick Berthoumieu
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2013-01-01
Series: | Advances in Multimedia |
Online Access: | http://dx.doi.org/10.1155/2013/175064 |
author | Vladislavs Dovgalecs, Rémi Mégret, Yannick Berthoumieu
---|---|
collection | DOAJ |
description | The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semisupervised method that leverages unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public and a real-world video sequence database show the gain brought by the different stages of the method.
format | Article |
id | doaj-art-11a408f439a8444ea89dc7fea7441c71 |
institution | Kabale University |
issn | 1687-5680, 1687-5699
language | English |
publishDate | 2013-01-01 |
publisher | Wiley |
series | Advances in Multimedia |
affiliations | Vladislavs Dovgalecs, Rémi Mégret, and Yannick Berthoumieu: IMS Laboratory, University of Bordeaux, UMR5218 CNRS, Bâtiment A4, 351 cours de la Libération, 33405 Talence, France
title | Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video |
url | http://dx.doi.org/10.1155/2013/175064 |
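The abstract above names three ingredients: fusing multiple visual features, a co-training-style use of unlabeled video, and time regularization that exploits temporal continuity. The sketch below is only an illustration of how the last two ingredients can fit together, written against hypothetical inputs (two per-frame feature matrices and a matrix of per-frame class scores); it is not the authors' implementation, and the parameters (`rounds`, `per_round`, `window`) are placeholder values.

```python
# A minimal, self-contained sketch of two ideas named in the abstract:
# (1) co-training over two visual feature "views" of the same frames, and
# (2) temporal regularization of per-frame class scores.
# All names, thresholds, and window sizes are illustrative assumptions.
import numpy as np
from sklearn.svm import LinearSVC


def margin(clf, X):
    """Per-sample confidence: distance to the boundary (binary) or top-two score gap."""
    scores = clf.decision_function(X)
    if scores.ndim == 1:                      # binary problem
        return np.abs(scores)
    top2 = np.sort(scores, axis=1)[:, -2:]    # multiclass problem
    return top2[:, 1] - top2[:, 0]


def co_train(X1_l, X2_l, y_l, X1_u, X2_u, rounds=5, per_round=10):
    """Grow the labeled pool by letting each view pseudo-label its most confident frames."""
    for _ in range(rounds):
        if len(X1_u) == 0:
            break
        clf1 = LinearSVC().fit(X1_l, y_l)
        clf2 = LinearSVC().fit(X2_l, y_l)
        # Each view nominates its most confident unlabeled frames.
        m1, m2 = margin(clf1, X1_u), margin(clf2, X2_u)
        picked = np.unique(np.concatenate([np.argsort(-m1)[:per_round],
                                           np.argsort(-m2)[:per_round]]))
        # Pseudo-label each picked frame with the more confident of the two views.
        pseudo = np.where(m1[picked] >= m2[picked],
                          clf1.predict(X1_u[picked]),
                          clf2.predict(X2_u[picked]))
        X1_l = np.vstack([X1_l, X1_u[picked]])
        X2_l = np.vstack([X2_l, X2_u[picked]])
        y_l = np.concatenate([y_l, pseudo])
        keep = np.setdiff1d(np.arange(len(X1_u)), picked)
        X1_u, X2_u = X1_u[keep], X2_u[keep]
    return LinearSVC().fit(X1_l, y_l), LinearSVC().fit(X2_l, y_l)


def temporally_regularized_labels(frame_scores, window=15):
    """Average class scores over a sliding window (temporal continuity), then take argmax."""
    n, half = len(frame_scores), window // 2
    smoothed = np.stack([frame_scores[max(0, t - half):t + half + 1].mean(axis=0)
                         for t in range(n)])
    return smoothed.argmax(axis=1)
```

In the paper's setting the per-frame scores would come from the fused multi-feature classifiers rather than a single view; the sliding-window average used here is simply one straightforward way to encode the temporal continuity constraint mentioned in the abstract.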