dc.description.abstract |
Sign language is the primary mode of communication for deaf people around the world. Because many deaf people cannot communicate through speech, their interactions are often confined to the signing community. To build a communication bridge between deaf and hearing people, a system is needed that recognizes the hand movements of a deaf person and translates them into voice. Such a tool would be useful in public places such as hospitals and police stations where human interpreters are not immediately available.
Gesture recognition is the process by which gestures formed by a user are made known to the system. Together, sign language recognition and sign language synthesis comprise a human-computer sign language interpreter, which facilitates interaction between deaf people and their surroundings. In recent years, research into using computers to recognize and render sign language has progressed steadily.
The goal of this project is to develop a program that implements hand gesture recognition, since hand gestures are among the most powerful means of communication, not only for deaf people but for hearing people as well, and were established long before speech and language developed. Various methods have been proposed for locating and tracking hands, including markers and instrumented gloves; however, these are expensive and not widely available. Most attempts to detect hands in video captured by a digital camera and stored in a movie format place restrictions on the environment; for example, when skin color segmentation is used for hand detection, the background cannot be complex and cluttered. Our approach combines skin color segmentation with motion analysis of the hand, so that the two cues together yield robust detection and tracking of the human hand against cluttered backgrounds. For the recognition stage we use edge detection, a boundary-based descriptor (to capture the shape of the hand), and correlation techniques to compare two images. A sound is then generated corresponding to the best-matched image.
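The pipeline above (segment skin-coloured pixels, then correlate the resulting hand shape against stored sign templates) can be sketched as follows. This is an illustrative Python sketch, not the project's MATLAB implementation; the RGB threshold values, the template labels, and the use of binary masks are all assumptions made for the example.

```python
# Illustrative sketch of the recognition pipeline: simple RGB skin-colour
# thresholding followed by normalized cross-correlation against stored
# sign templates. Thresholds and template data are assumed, not taken
# from the original MATLAB 7.0 system.

def skin_mask(pixels):
    """Binary mask from a simple RGB skin rule (red-dominant pixels)."""
    return [[1 if (r > 95 and g > 40 and b > 20 and r > g > b) else 0
             for (r, g, b) in row]
            for row in pixels]

def ncc(a, b):
    """Normalized cross-correlation between two equal-size binary masks."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    n = len(flat_a)
    mean_a = sum(flat_a) / n
    mean_b = sum(flat_b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(flat_a, flat_b))
    den_a = sum((x - mean_a) ** 2 for x in flat_a) ** 0.5
    den_b = sum((y - mean_b) ** 2 for y in flat_b) ** 0.5
    return num / (den_a * den_b) if den_a and den_b else 0.0

def best_match(mask, templates):
    """Label of the stored sign template with the highest correlation."""
    return max(templates, key=lambda label: ncc(mask, templates[label]))

# Tiny synthetic 2x2 frame: left column skin-coloured, right column dark.
frame = [[(200, 120, 80), (10, 10, 10)],
         [(180, 100, 60), (5, 5, 5)]]
mask = skin_mask(frame)                       # [[1, 0], [1, 0]]
templates = {"sign_A": [[1, 0], [1, 0]],      # hypothetical stored signs
             "sign_B": [[0, 1], [0, 1]]}
label = best_match(mask, templates)           # "sign_A"
```

In the full system, the chosen label would then trigger playback of the corresponding sound, and the correlation step would operate on edge/boundary descriptors rather than raw binary masks.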
The system was tested on videos of different people to evaluate the hand detection and tracking algorithm, and it showed a high success rate. Experimentation has shown that the system can generate the sound corresponding to the sign formed by the hand. The system is implemented in MATLAB 7.0 and offers ease of communication, cost effectiveness, and reliability to the end user. |
en_US |