NUST Institutional Repository

From Signs to Words and Speech: Multilingual Sign Language Conversion


dc.contributor.author MUHAMMAD HASEEB RAJPOOT, AMNA AKRAM, MALIK MOIZ ASGHAR, SHERYAR BALOCH
dc.date.accessioned 2025-02-13T07:26:57Z
dc.date.available 2025-02-13T07:26:57Z
dc.date.issued 2024
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/49846
dc.description Advisor: DR. IMRAN USMAN en_US
dc.description.abstract Good communication is essential to the progress of society. However, not everyone is able to communicate with others directly. Some people rely on sign language as their only mode of communication, but sign language is not universally understood. We present a novel approach, a deep learning model for sign-language-to-text conversion, aimed at addressing the complexity of sign language variation. The system combines computer vision and deep learning to recognize hand movements captured via webcam and translate them into equivalent text. Furthermore, by adopting a multilingual strategy, we aim to reduce the barriers often faced by hearing-impaired individuals. Our goal is to transform communication for the deaf by removing the barriers that prevent them from expressing their ideas and views, and to promote a society in which hearing-impaired individuals feel less alienated from the rest of the world. By combining technology and empathy, this project promotes inclusiveness, helping to create a world where every individual's voice is heard and respected, regardless of disability. en_US
dc.language.iso en en_US
dc.title From Signs to Words and Speech: Multilingual Sign Language Conversion en_US
dc.type Thesis en_US
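The abstract describes a pipeline that recognizes hand movements from webcam frames and maps them to text. The thesis code is not included in this record, so the following is only an illustrative sketch: it assumes 21-point 2D hand landmarks (the format produced by common hand trackers such as MediaPipe Hands) and uses a simple nearest-template classifier in place of the thesis's deep learning model.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Make landmarks translation- and scale-invariant.

    `landmarks` is a (21, 2) array of (x, y) hand keypoints; the
    21-point layout is an assumption borrowed from common hand
    trackers, not taken from the thesis itself.
    """
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts[0]                      # wrist-relative coordinates
    scale = np.abs(pts).max()
    return pts / scale if scale > 0 else pts

def classify_sign(landmarks, templates):
    """Return the label of the closest stored template (Euclidean distance).

    `templates` maps a text label (e.g. a letter) to a reference
    landmark array. A trained deep model would replace this lookup
    in a real system.
    """
    query = normalize_landmarks(landmarks)
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        dist = np.linalg.norm(query - normalize_landmarks(template))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

In use, each webcam frame would be passed through a hand tracker to obtain `landmarks`, and the returned label would be appended to the output text; a multilingual system would then map that label into the target written language.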

