Abstract:
The aim of the Automated Sign Language Recognition System is to facilitate communication between hearing-impaired people and hearing people. C# is used as the implementation language because it requires comparatively little computational power and memory. The system targets the .NET Framework 4, which provides the System.Threading namespace and supports parallel processing of video frames.
The solution is cost-effective because it does not require a data glove as an input device; instead, it uses a camera together with image processing algorithms for tracking and localizing hand postures. Image binarization and contour extraction are applied to segment the hand from the background.
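As an illustrative sketch only, the segmentation step could be built from AForge.NET's standard imaging filters; the threshold value, the minimum blob size, and the class and method names below are assumptions, not the authors' implementation.

```csharp
using System.Collections.Generic;
using System.Drawing;
using AForge;
using AForge.Imaging;
using AForge.Imaging.Filters;

class HandSegmenter
{
    // Binarize a camera frame and return the contour (edge points)
    // of the largest connected region, assumed to be the hand.
    public static List<IntPoint> ExtractHandContour(Bitmap frame)
    {
        // Grayscale conversion followed by fixed-threshold binarization
        // (the value 100 is an assumed threshold; a real system would tune it).
        Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(frame);
        new Threshold(100).ApplyInPlace(gray);

        // Find connected white regions, largest first.
        BlobCounter blobCounter = new BlobCounter
        {
            FilterBlobs = true,
            MinWidth = 50,          // assumed minimum hand size in pixels
            MinHeight = 50,
            ObjectsOrder = ObjectsOrder.Size
        };
        blobCounter.ProcessImage(gray);
        Blob[] blobs = blobCounter.GetObjectsInformation();
        if (blobs.Length == 0)
            return new List<IntPoint>();

        // The edge points of the largest blob serve as the hand contour.
        return blobCounter.GetBlobsEdgePoints(blobs[0]);
    }
}
```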
The system is trained with a supervised learning technique so that gestures can be recognized accurately by artificial neural networks. The AForge.NET library is used because it provides artificial intelligence and machine learning algorithms. The trained system is validated against a separate validation data set.
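A minimal training sketch using AForge.NET's neural network classes is shown below; the network topology (25 hidden neurons, 26 outputs for the alphabet), learning rate, and stopping criteria are assumptions chosen for illustration.

```csharp
using AForge.Neuro;
using AForge.Neuro.Learning;

class GestureTrainer
{
    // Train a feed-forward network on labelled gesture feature vectors
    // using backpropagation (supervised learning).
    public static ActivationNetwork Train(double[][] features, double[][] labels)
    {
        var network = new ActivationNetwork(
            new SigmoidFunction(2.0),   // sigmoid activation
            features[0].Length,         // input neurons = feature vector length
            25,                         // assumed hidden layer size
            26);                        // one output per alphabet letter

        var teacher = new BackPropagationLearning(network)
        {
            LearningRate = 0.1,         // assumed learning parameters
            Momentum = 0.0
        };

        // Run training epochs until the summed squared error is small enough.
        double error;
        int epoch = 0;
        do
        {
            error = teacher.RunEpoch(features, labels);
            epoch++;
        } while (error > 0.01 && epoch < 5000);

        return network;
    }
}
```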
The webcam captures the input gesture from a live video stream. This gesture is fed to the trained artificial neural network, which compares its features with the trained samples, outputs the recognized alphabet, and generates speech so that the recognized gesture is announced vocally.
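The recognition loop could be wired up roughly as follows; this is a sketch under stated assumptions, where ExtractFeatures stands in for the segmentation pipeline above, the model file name is hypothetical, and System.Speech is assumed for the vocal output.

```csharp
using System;
using System.Drawing;
using System.Linq;
using System.Speech.Synthesis;
using AForge.Neuro;
using AForge.Video;
using AForge.Video.DirectShow;

class LiveRecognizer
{
    static ActivationNetwork network;                          // trained network
    static SpeechSynthesizer speech = new SpeechSynthesizer(); // vocal output

    static void Main()
    {
        // "gestures.net" is an assumed file name for the saved network.
        network = (ActivationNetwork)Network.Load("gestures.net");

        // Open the first available webcam and process each incoming frame.
        var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
        var camera = new VideoCaptureDevice(devices[0].MonikerString);
        camera.NewFrame += OnNewFrame;
        camera.Start();

        Console.ReadLine();
        camera.SignalToStop();
    }

    static void OnNewFrame(object sender, NewFrameEventArgs e)
    {
        // Placeholder for the binarization / contour feature pipeline.
        double[] features = ExtractFeatures((Bitmap)e.Frame.Clone());

        // The output neuron with the highest activation gives the letter.
        double[] output = network.Compute(features);
        int best = Array.IndexOf(output, output.Max());
        char letter = (char)('A' + best);

        Console.WriteLine("Recognized: " + letter);
        speech.SpeakAsync(letter.ToString());   // speak the recognized alphabet
    }

    static double[] ExtractFeatures(Bitmap frame)
    {
        throw new NotImplementedException();    // see segmentation sketch above
    }
}
```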