Multilabel Image Annotation Based on Double-Layer PLSA Model

Due to the semantic gap between low-level visual features and high-level semantic concepts, automatic image annotation remains a difficult problem in computer vision. In this paper we propose a new multilabel image annotation method based on a double-layer probabilistic latent semantic analysis (PLSA) model. The double-layer PLSA model is constructed to bridge the low-level visual features and high-level semantic concepts of images for effective image understanding. The low-level features of images are represented as visual words with a Bag-of-Words model, and the first-layer PLSA extracts latent semantic topics from the visual and texture aspects separately. A second-layer PLSA then fuses the visual and texture topics into a single top-layer latent semantic topic space. Through the double-layer PLSA, the relationships between visual features and semantic concepts are established, so the labels of new images can be predicted from their low-level features. Experimental results demonstrate that the proposed double-layer PLSA annotation model achieves promising labeling performance and outperforms previous methods on the standard Corel dataset.
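To make the pipeline in the abstract concrete, the following is a minimal, illustrative sketch in Python/NumPy, not the authors' code: a basic PLSA fitted with EM is run on toy visual-word and texture-word histograms, the resulting topic mixtures are concatenated and fed to a second-layer PLSA, and labels are then propagated from the nearest training image in the top-layer topic space. The function names (plsa, double_layer_topics, annotate), the topic counts, and the transductive fit plus nearest-neighbor labeling are all assumptions made for brevity; the paper's actual fold-in and label-prediction procedure may differ.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=60, seed=0):
    """Basic PLSA fitted with EM on a document-word count matrix.
    Returns P(z|d) of shape (n_docs, n_topics) and P(w|z) of shape (n_topics, n_words)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        weighted = counts[:, None, :] * post          # n(d,w) * P(z|d,w)
        # M-step: re-estimate P(w|z) and P(z|d) from the weighted counts
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

def double_layer_topics(visual_counts, texture_counts, k_low=20, k_top=10):
    """First-layer PLSA on visual and texture histograms separately, then a
    second-layer PLSA over the concatenated per-image topic mixtures."""
    vis_topics, _ = plsa(visual_counts, k_low, seed=1)
    tex_topics, _ = plsa(texture_counts, k_low, seed=2)
    fused = np.hstack([vis_topics, tex_topics])       # fused "document-word" matrix
    top_topics, _ = plsa(fused, k_top, seed=3)
    return top_topics

def annotate(top_topics, train_idx, train_labels, test_idx, n_labels=5):
    """Propagate labels from the nearest training image in top-layer topic space
    (cosine similarity); train_labels is a list of label sets."""
    t = top_topics / (np.linalg.norm(top_topics, axis=1, keepdims=True) + 1e-12)
    sims = t[test_idx] @ t[train_idx].T
    nearest = sims.argmax(axis=1)
    return [sorted(train_labels[j])[:n_labels] for j in nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_images = 40
    visual = rng.integers(0, 5, size=(n_images, 200))   # toy visual-word histograms
    texture = rng.integers(0, 5, size=(n_images, 100))  # toy texture-word histograms
    labels = [{"sky", "water"} if i % 2 else {"grass", "tree"} for i in range(30)]
    topics = double_layer_topics(visual, texture)
    print(annotate(topics, np.arange(30), labels, np.arange(30, 40)))
```

The second layer treats each image's fused topic mixture as a pseudo-document, which is one simple way to realize the "fusion" step described in the abstract; real visual and texture words would come from quantized local descriptors rather than random counts.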

Bibliographic Details
Main Authors: Jing Zhang, Da Li, Weiwei Hu, Zhihua Chen, Yubo Yuan
Format: Article
Language: English
Published: Wiley, 2014-01-01
Series: The Scientific World Journal
ISSN: 2356-6140, 1537-744X
Author Affiliation (all authors): School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
Collection: DOAJ
Institution: Kabale University
Online Access: http://dx.doi.org/10.1155/2014/494387