This is an Open Access publication published under CSC-OpenAccess Policy.
Using The Hausdorff Algorithm to Enhance Kinect's Recognition of Arabic Sign Language Gestures
Miada Ahmmed Almasre, Hana A. Al-Nuaim
Pages - 1 - 18     |    Revised - 01-03-2017     |    Published - 01-04-2017
Volume - 7   Issue - 1    |    Publication Date - April 2017
KEYWORDS
Pattern Recognition, Hand Gesturing, Arabic Sign Language, Kinect, Candescent.
ABSTRACT
The objective of this research is to use mathematical algorithms to overcome the limitations of the Kinect depth sensor in detecting the movement and details of the fingers and joints used for Arabic alphabet sign language (ArSL). This research proposes a model to accurately recognize and interpret a specific ArSL alphabet using Microsoft's Kinect SDK Version 2 and a supervised machine learning classifier (the Hausdorff distance) with the Candescent NUI library. The dataset of the model, which serves as prior knowledge for the algorithm, was collected by having volunteers gesture certain letters of the Arabic alphabet. To classify a gestured letter accurately, the algorithm first filters the stored signs by the number of fingers and their visibility, then calculates the Euclidean distance between the contour points of the gestured sign and each stored sign, and compares the result with an appropriate threshold.

To evaluate the classifier, participants gestured different letters, and each gestured sign was compared with the stored gestures using the same Euclidean distance measure. The class name closest to the gestured sign appeared directly in the sign display window, and the results of the classifier were then analyzed mathematically.

When unknown incoming gestures were compared with the stored gestures from the collected dataset, the model matched each gesture with the correct letter with high accuracy.
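The matching step described in the abstract (filter candidate signs by visible-finger count, then compare contours using Euclidean point distances against a threshold) can be illustrated with a short sketch. The Python below is not the authors' Kinect SDK/Candescent code; the StoredSign structure, the classify function, the THRESHOLD value, and the toy contour data are illustrative assumptions, and the Hausdorff distance shown is the standard symmetric max-min form.

```python
# Minimal sketch (under assumed names and data) of a filter-then-match classifier:
# 1) keep only stored signs with the same visible-finger count,
# 2) compare contours with a Hausdorff-style distance built from Euclidean point distances,
# 3) report the closest class only if it falls within a threshold.

import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]          # 2D contour point taken from the depth frame

@dataclass
class StoredSign:
    letter: str                      # ArSL letter this template represents
    finger_count: int                # number of visible fingers in the template
    contour: List[Point]             # contour points captured for the template

def euclidean(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def directed_hausdorff(a: List[Point], b: List[Point]) -> float:
    # For every point in a, find its nearest point in b; keep the largest such gap.
    return max(min(euclidean(p, q) for q in b) for p in a)

def hausdorff(a: List[Point], b: List[Point]) -> float:
    # Symmetric Hausdorff distance between the two contours.
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

THRESHOLD = 25.0                     # assumed distance threshold (depth-map units)

def classify(gesture: List[Point],
             finger_count: int,
             templates: List[StoredSign]) -> Optional[str]:
    """Return the letter of the closest stored sign, or None if nothing matches."""
    # Step 1: keep only templates whose visible-finger count matches the gesture.
    candidates = [t for t in templates if t.finger_count == finger_count]

    # Step 2: pick the candidate with the smallest Hausdorff distance.
    best_letter, best_dist = None, float("inf")
    for t in candidates:
        d = hausdorff(gesture, t.contour)
        if d < best_dist:
            best_letter, best_dist = t.letter, d

    # Step 3: accept the match only if it is within the threshold.
    return best_letter if best_dist <= THRESHOLD else None

# Example usage with two toy templates (purely illustrative data):
templates = [
    StoredSign("أ", 1, [(0, 0), (0, 10), (2, 20)]),
    StoredSign("ب", 1, [(0, 0), (5, 5), (10, 0)]),
]
print(classify([(0, 1), (0, 11), (2, 21)], finger_count=1, templates=templates))
```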
Mrs. Miada Ahmmed Almasre
Faculty of Computing and Information Technology / Department of Computer Science, King AbdulAziz University, Jeddah, Saudi Arabia
malmasre@kau.edu.sa
Dr. Hana A. Al-Nuaim
Faculty of Computing and Information Technology / Department of Computer Science, King AbdulAziz University, Jeddah, Saudi Arabia