American Sign Language Video Hand Gestures Recognition using Deep Neural Networks
Shivashankara S1, Srinath S2

1Shivashankara S, Research Scholar, Department of Computer Science and Engineering, Sri Jayachamarajendra College of Engineering, Mysuru (Karnataka), India.
2Srinath S, Department of Computer Science and Engineering, Sri Jayachamarajendra College of Engineering, Mysuru (Karnataka), India.

Manuscript received on 18 June 2019 | Revised Manuscript received on 25 June 2019 | Manuscript published on 30 June 2019 | PP: 2742-2751 | Volume-8 Issue-5, June 2019 | Retrieval Number: E7205068519/19©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: In this paper, an effort has been made to translate/recognize some of the video-based hand gestures of American Sign Language (ASL) into human- and/or machine-readable English text using deep neural networks. The recognition process begins by fetching the input video gestures. In the proposed algorithm, a Gaussian Mixture Model (GMM) is used for background elimination and foreground detection, and basic preprocessing operations are applied for better segmentation of the video gestures. Feature extraction techniques such as Speeded Up Robust Features (SURF), Zernike Moments (ZM), the Discrete Cosine Transform (DCT), Radon Features (RF), and R, G, B levels are used to extract hand features from the frames of the video gestures. The extracted video hand gesture features are then used for classification and recognition in the subsequent stage, for which a deep neural network (a stacked autoencoder) is employed. This video hand gesture recognition system can serve as a tool for bridging the communication gap between hearing-impaired people and hearing people. The proposed ASL video hand gesture recognition (VHGR) achieves an average recognition rate of 96.43%, which compares favorably with state-of-the-art techniques.
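To make the described pipeline concrete, the following is a minimal Python sketch of the per-frame processing: GMM-based background elimination (using OpenCV's MOG2 GMM subtractor as a stand-in for the paper's background model), basic preprocessing, and DCT, Radon, and mean R/G/B feature extraction. All parameter values (frame size, DCT block size, Radon projection angles) are hypothetical, since the paper's exact configuration is not reproduced here, and the SURF and Zernike Moment features are omitted for brevity.

```python
# Hedged sketch of the abstract's recognition pipeline; parameter values
# (64x64 resize, 8x8 DCT block, 45-degree Radon steps) are assumptions,
# not the paper's published configuration.
import cv2
import numpy as np
from scipy.fftpack import dct
from skimage.transform import radon

def extract_frame_features(frame, bg_subtractor):
    """Foreground segmentation with a GMM background model, followed by
    DCT, Radon, and mean R/G/B features on the detected hand region."""
    # GMM-based background elimination (MOG2 is OpenCV's GMM subtractor).
    fg_mask = bg_subtractor.apply(frame)
    fg_mask = cv2.medianBlur(fg_mask, 5)           # basic preprocessing
    hand = cv2.bitwise_and(frame, frame, mask=fg_mask)

    gray = cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))              # hypothetical fixed size

    # 2-D DCT: keep the low-frequency top-left block as a compact descriptor.
    coeffs = dct(dct(gray.astype(float), axis=0, norm='ortho'),
                 axis=1, norm='ortho')
    dct_feat = coeffs[:8, :8].ravel()

    # Radon features: projections of the hand region at a few angles.
    radon_feat = radon(gray, theta=np.arange(0, 180, 45),
                       circle=False).ravel()

    # Mean R, G, B levels over the detected foreground region.
    rgb_feat = cv2.mean(frame, mask=fg_mask)[:3]

    return np.concatenate([dct_feat, radon_feat, rgb_feat])

def video_features(path):
    """Average the per-frame features over one gesture video."""
    bg = cv2.createBackgroundSubtractorMOG2()      # GMM background model
    cap = cv2.VideoCapture(path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feats.append(extract_frame_features(frame, bg))
    cap.release()
    return np.mean(feats, axis=0)
```

The per-video feature vectors produced this way would then feed the classification stage, i.e., a stacked autoencoder (greedily pretrained dense autoencoder layers topped with a softmax classifier) as named in the abstract.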
Keywords: American Sign Language, Deep Neural Networks, Hand Gestures Recognition, Radon Features, Stacked Autoencoder, SURF, Zernike Moment

Scope of the Article: Natural Language Processing