Method for User Interface of Large Displays Using Arm Pointing and Finger Counting Gesture Recognition

Bibliographic Details
Main Authors: Hansol Kim, Yoonkyung Kim, Eui Chul Lee
Format: Article
Language:English
Published: Wiley 2014-01-01
Series:The Scientific World Journal
Online Access:http://dx.doi.org/10.1155/2014/683045
Description
Summary: Although many three-dimensional pointing gesture recognition methods have been proposed, the problem of self-occlusion has not been considered. Furthermore, because almost all pointing gesture recognition methods use a wide-angle camera, additional sensors or cameras are required to perform finger gesture recognition concurrently. In this paper, we propose a method that performs both pointing gesture and finger gesture recognition for large display environments using a single Kinect device and a skeleton tracking model. To handle self-occlusion, a compensation technique is applied to the user's detected shoulder position when the hand occludes the shoulder. In addition, we propose a finger counting gesture recognition technique based on the depth image of the hand region, which is extracted at the end of the pointing vector. Experimental results indicate that, with exception handling for self-occlusions, the pointing accuracy at specific reference positions was significantly improved: the average root mean square error was approximately 13 pixels at a screen resolution of 1920 × 1080 pixels. Moreover, the finger counting gesture recognition accuracy was 98.3%.
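To illustrate the geometric idea summarized above (a shoulder-to-hand pointing vector intersected with the display, plus a fallback when the hand occludes the shoulder), the following is a minimal sketch only, not the authors' implementation. The coordinate frame, screen placement, physical screen size, and occlusion threshold are assumed values for illustration; joint positions are taken to come from a Kinect skeleton tracker in metres, with the display lying in the plane z = 0.

```python
# Sketch of shoulder-to-hand pointing estimation with a simple self-occlusion
# fallback. All screen parameters below are hypothetical, not from the paper.
import numpy as np

SCREEN_ORIGIN = np.array([-1.0, 2.0, 0.0])    # top-left corner of display (m), assumed
SCREEN_WIDTH_M, SCREEN_HEIGHT_M = 2.0, 1.125  # physical display size (m), assumed
SCREEN_RES = (1920, 1080)                     # pixel resolution, as in the abstract

def pointing_pixel(shoulder, hand):
    """Intersect the shoulder->hand ray with the display plane z = 0
    and convert the hit point to pixel coordinates."""
    shoulder, hand = np.asarray(shoulder, float), np.asarray(hand, float)
    direction = hand - shoulder
    if abs(direction[2]) < 1e-6:              # ray parallel to the screen plane
        return None
    t = -shoulder[2] / direction[2]
    if t <= 0:                                # pointing away from the screen
        return None
    hit = shoulder + t * direction            # 3D intersection with the display plane
    u = (hit[0] - SCREEN_ORIGIN[0]) / SCREEN_WIDTH_M
    v = (SCREEN_ORIGIN[1] - hit[1]) / SCREEN_HEIGHT_M
    return int(u * SCREEN_RES[0]), int(v * SCREEN_RES[1])

def compensate_shoulder(shoulder, hand, prev_shoulder, occlusion_threshold=0.15):
    """Crude self-occlusion handling: if the tracked hand lies very close to the
    tracked shoulder in x-y, the shoulder estimate is likely corrupted by
    occlusion, so fall back to the last reliable shoulder position.
    The 0.15 m threshold is an assumed value, not taken from the paper."""
    if np.linalg.norm(np.asarray(hand[:2]) - np.asarray(shoulder[:2])) < occlusion_threshold:
        return prev_shoulder
    return shoulder
```

In this sketch, finger counting would operate on the depth pixels around the hand joint (the end of the pointing vector), for example by thresholding the depth image and counting contour peaks; the abstract does not specify the exact procedure, so that step is left out here.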
ISSN:2356-6140
1537-744X