NUST Institutional Repository

Automated sign language recognition system

dc.contributor.author Samee, Rida
dc.contributor.author Mumtaz, Zainub
dc.contributor.author Shahzad, Hira
dc.contributor.advisor Dr. Fahim Arif
dc.contributor.advisor Dr. Imran Siddiqi
dc.date.accessioned 2020-11-11T07:16:59Z
dc.date.available 2020-11-11T07:16:59Z
dc.date.issued 2013-06
dc.identifier.other Pcs-235
dc.identifier.other BESE-15
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/11375
dc.description.abstract The aim of the Automated Sign Language Recognition System is to facilitate communication between hearing and hearing-impaired people. C# is used as the implementation language because it requires little computational power and memory, and it runs under the .NET Framework 4, which supports threading and parallel processing of video. The solution is cost effective because it does not require a data glove as an input device; instead it uses a camera and image-processing algorithms to track and localize hand postures. Image binarization and contour extraction are applied to separate the hand from the background. The system is trained with a supervised learning technique to recognize gestures accurately using artificial neural networks; the AForge.NET library is used because it supports artificial-intelligence and learning algorithms, and the trained system is checked against a validation data set. A webcam captures the input gesture from a live video stream; the gesture is fed to the trained neural network, which compares its features with the trained samples, outputs the recognized alphabet letter, and generates speech that announces the recognized gesture vocally. (Illustrative code sketches of these steps follow this record.) en_US
dc.language.iso en en_US
dc.publisher MCS en_US
dc.title Automated sign language recognition system en_US
dc.type Technical Report en_US
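The abstract names image binarization and contour extraction as the hand-segmentation step. Below is a minimal sketch of how that step could look with the AForge.NET 2.x imaging API the report says it uses; the threshold value, the blob-size limits, and the convex-hull post-processing are illustrative assumptions, not details taken from the report.

```csharp
// Hypothetical hand-segmentation sketch using AForge.NET 2.x imaging.
using System.Collections.Generic;
using System.Drawing;
using AForge;
using AForge.Imaging;
using AForge.Imaging.Filters;
using AForge.Math.Geometry;

static class HandSegmenter
{
    // Binarize the frame and return the contour of the largest blob,
    // which is assumed here to be the hand.
    public static List<IntPoint> ExtractHandContour(Bitmap frame)
    {
        // Convert to grayscale, then binarize with a fixed threshold
        // (the actual threshold used in the report is not stated).
        Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(frame);
        new Threshold(100).ApplyInPlace(gray);

        // Locate connected components, largest first.
        BlobCounter blobCounter = new BlobCounter
        {
            FilterBlobs = true,
            MinWidth = 30,
            MinHeight = 30,
            ObjectsOrder = ObjectsOrder.Size
        };
        blobCounter.ProcessImage(gray);
        Blob[] blobs = blobCounter.GetObjectsInformation();
        if (blobs.Length == 0)
            return new List<IntPoint>();

        // Edge points of the largest blob serve as the extracted contour;
        // a convex hull gives a compact outline of the hand posture.
        List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blobs[0]);
        return new GrahamConvexHull().FindHull(edgePoints);
    }
}
```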
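For the supervised training of the artificial neural network, AForge.NET provides ActivationNetwork and BackPropagationLearning. The sketch below assumes one hidden layer, a sigmoid activation, and 26 outputs (one per alphabet letter, inferred from "the alphabet recognized"); the actual topology, learning rate, and stopping criterion are not stated in the abstract.

```csharp
// Minimal supervised-training sketch with AForge.Neuro.
using AForge.Neuro;
using AForge.Neuro.Learning;

static class GestureTrainer
{
    public static ActivationNetwork Train(double[][] inputs, double[][] outputs)
    {
        // One hidden layer of 25 neurons (assumed) and 26 output
        // neurons, one per alphabet letter.
        var network = new ActivationNetwork(
            new SigmoidFunction(), inputs[0].Length, 25, 26);

        var teacher = new BackPropagationLearning(network)
        {
            LearningRate = 0.1,   // placeholder value
            Momentum = 0.0
        };

        // Train until the summed squared error is small enough.
        for (int epoch = 0; epoch < 5000; epoch++)
        {
            double error = teacher.RunEpoch(inputs, outputs);
            if (error < 0.01)
                break;
        }
        return network;
    }
}
```

RunEpoch returns the summed squared error over the training set, so the loop simply stops once the error falls below a chosen tolerance; the validation data set the abstract mentions would be evaluated separately with network.Compute.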
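Finally, the live pipeline: each webcam frame is segmented, encoded as a fixed-length feature vector, classified by the trained network, and the winning letter is spoken. This sketch assumes AForge.Video.DirectShow for capture and System.Speech.Synthesis from the .NET Framework for the vocal output; FeaturesFromContour is a hypothetical placeholder encoding, since the report's feature descriptor is not described in the abstract.

```csharp
// Hypothetical live-recognition loop: webcam -> segment -> classify -> speak.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Speech.Synthesis;
using AForge;
using AForge.Neuro;
using AForge.Video;
using AForge.Video.DirectShow;

class LiveRecognizer
{
    private readonly ActivationNetwork network;
    private readonly SpeechSynthesizer synthesizer = new SpeechSynthesizer();

    public LiveRecognizer(ActivationNetwork trainedNetwork)
    {
        network = trainedNetwork;
    }

    public void Start()
    {
        // Open the first available webcam and process every frame.
        var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
        var camera = new VideoCaptureDevice(devices[0].MonikerString);
        camera.NewFrame += OnNewFrame;
        camera.Start();
    }

    private void OnNewFrame(object sender, NewFrameEventArgs e)
    {
        // The frame buffer is reused by AForge, so work on a copy.
        using (var frame = (Bitmap)e.Frame.Clone())
        {
            List<IntPoint> contour = HandSegmenter.ExtractHandContour(frame);
            double[] features = FeaturesFromContour(contour);

            // The output neuron with the highest activation names the letter,
            // which is then spoken aloud.
            double[] output = network.Compute(features);
            int best = 0;
            for (int i = 1; i < output.Length; i++)
                if (output[i] > output[best]) best = i;

            synthesizer.SpeakAsync(((char)('A' + best)).ToString());
        }
    }

    // Placeholder encoding: 16 sampled contour points with coordinates
    // normalized to [0, 1]. The report's actual features are not described.
    private static double[] FeaturesFromContour(List<IntPoint> contour)
    {
        const int samples = 16;
        double[] features = new double[samples * 2];
        if (contour.Count == 0) return features;

        int minX = int.MaxValue, minY = int.MaxValue, maxX = 0, maxY = 0;
        foreach (IntPoint p in contour)
        {
            minX = Math.Min(minX, p.X); minY = Math.Min(minY, p.Y);
            maxX = Math.Max(maxX, p.X); maxY = Math.Max(maxY, p.Y);
        }
        double w = Math.Max(1, maxX - minX), h = Math.Max(1, maxY - minY);

        for (int i = 0; i < samples; i++)
        {
            IntPoint p = contour[i * contour.Count / samples];
            features[2 * i] = (p.X - minX) / w;
            features[2 * i + 1] = (p.Y - minY) / h;
        }
        return features;
    }
}
```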

