Deep Neural Learning Adaptive Sequential Monte Carlo for Automatic Image and Speech Recognition

In image classification and speech recognition, the choice of optimizer is an important factor in achieving high accuracy. State-of-the-art optimizers serve well in applications that do not require very high accuracy, yet the demand for high-precision image...

Full description

Saved in:
Bibliographic Details
Main Authors: Patcharin Kamsing, Peerapong Torteeka, Wuttichai Boonpook, Chunxiang Cao
Format: Article
Language: English
Published: Wiley 2020-01-01
Series: Applied Computational Intelligence and Soft Computing
Online Access:http://dx.doi.org/10.1155/2020/8866259
author Patcharin Kamsing
Peerapong Torteeka
Wuttichai Boonpook
Chunxiang Cao
collection DOAJ
description In image classification and speech recognition, the choice of optimizer is an important factor in achieving high accuracy. State-of-the-art optimizers serve well in applications that do not require very high accuracy, yet the demand for high-precision image classification and speech recognition is increasing. This study implements an adaptive method that combines the particle filter technique with a gradient descent optimizer to improve model learning performance. A pretrained model is used to reduce the computational time needed to deploy the image classification model, and a simple deep convolutional neural network is used for speech recognition. The applied method achieves a higher speech recognition accuracy on the test dataset, 89.693%, than the conventional method, which reaches 89.325%. It also performs well on the image classification task, reaching a test accuracy of 89.860%, compared with 89.644% for the conventional method. Although the differences in accuracy are small, the applied optimizer performs well on these datasets overall.
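
The abstract describes combining a particle filter (sequential Monte Carlo) with a gradient descent optimizer during training. The following is a minimal illustrative sketch of that general idea only, not the authors' implementation: it maintains a population of parameter particles on a toy logistic-regression problem, moves each particle with a gradient step plus small exploration noise, weights particles by a likelihood proxy derived from the loss, and resamples. All names, hyperparameters, and the weighting and resampling choices are assumptions made for illustration.

# Illustrative sketch only: a hypothetical particle-filter + gradient-descent
# hybrid on a toy logistic-regression objective (assumed setup, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (stand-in for image/speech features).
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

def loss_and_grad(w):
    """Logistic loss and its gradient for parameter vector w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-9
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

n_particles, lr, n_steps = 32, 0.5, 50
particles = rng.normal(size=(n_particles, 5))   # one parameter vector per particle

for step in range(n_steps):
    losses = np.empty(n_particles)
    for i in range(n_particles):
        loss, grad = loss_and_grad(particles[i])
        # Proposal step: gradient-descent move plus small exploration noise.
        particles[i] -= lr * grad + 0.01 * rng.normal(size=5)
        losses[i] = loss
    # Importance weights: lower loss -> higher weight (assumed likelihood proxy).
    weights = np.exp(-(losses - losses.min()))
    weights /= weights.sum()
    # Multinomial resampling concentrates particles on well-performing regions.
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx].copy()

w_hat = particles.mean(axis=0)                  # point estimate from the particle cloud
print("final loss:", loss_and_grad(w_hat)[0])

In the setting described by the article, the per-particle update would presumably be a step of the deep network optimizer on the image or speech model rather than this toy gradient step.
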
format Article
id doaj-art-64613111f58e4788bc5d49bb18e6a875
institution Kabale University
issn 1687-9724
1687-9732
language English
publishDate 2020-01-01
publisher Wiley
series Applied Computational Intelligence and Soft Computing
author affiliations Patcharin Kamsing: Air-Space Control, Optimization and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Peerapong Torteeka: National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
Wuttichai Boonpook: Department of Geography, Faculty of Social Sciences, Srinakharinwirot University, Bangkok 10110, Thailand
Chunxiang Cao: State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
title Deep Neural Learning Adaptive Sequential Monte Carlo for Automatic Image and Speech Recognition
url http://dx.doi.org/10.1155/2020/8866259