NUST Institutional Repository

Hear Smart - Your Hearing Assistant


dc.contributor.author PC Mehwish Siddique; PC Hira Khalid
dc.date.accessioned 2025-03-12T07:30:35Z
dc.date.available 2025-03-12T07:30:35Z
dc.date.issued 2020
dc.identifier.other DE-COMP-38
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/50937
dc.description Project Supervisor: Dr. Sajid Gul Khawaja en_US
dc.description.abstract People with hearing impairments often cannot tune in to the voices they want to hear; HearSmart offers an elegant solution to this challenge. Using deep learning, HearSmart can isolate any voice from a mixed audio source containing various noises and sounds, including multiple speakers, so anyone can tune in to the voice they want to hear and filter out distracting ambient noise. Because speech is masked by both background noise and reverberation, which degrade perceptual quality and intelligibility, we perform de-reverberation and de-noising using generative adversarial networks (GANs). Deep networks are increasingly used for this task thanks to their ability to learn complex functions from large example sets. In contrast to current techniques, we operate at the waveform level and train the model end-to-end, incorporating 28 speakers and 40 different noise conditions into a single model whose parameters are shared across them. We evaluate the proposed model on an independent, unseen test set with two speakers and 20 alternative noise conditions. The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm its effectiveness. This opens the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve performance. A speaker-separation algorithm is then run in the cloud, where a neural network learns to extract the target speaker's voice. Once training is complete, the model is sent back to the system, enabling instantaneous on-device voice isolation on the denoised speech signal: the speech of each speaker present in the input is separated and made available to the listener.
The user can then easily listen to the targeted speaker by tuning one or more speakers in or out. en_US
dc.language.iso en en_US
dc.publisher College of Electrical & Mechanical Engineering (CEME), NUST en_US
dc.title Hear Smart - Your Hearing Assistant en_US
dc.type Project Report en_US
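The abstract describes an end-to-end, waveform-level GAN enhancer trained with adversarial and reconstruction objectives. The following is a minimal NumPy sketch of that idea, not the project's actual implementation: the filter taps, the tanh bottleneck, the single-layer critic, and the data are all illustrative assumptions standing in for a trained SEGAN-style network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, w):
    """'Same'-padded 1-D convolution, stride 1 (toy stand-in for a conv layer)."""
    return np.convolve(x, w, mode="same")

def generator(noisy, w_enc, w_dec):
    """Toy end-to-end waveform mapping: noisy samples in, enhanced samples out.
    The tanh bottleneck stands in for the learned encoder nonlinearity, and the
    additive skip connection mirrors SEGAN-style generators."""
    latent = np.tanh(conv1d_same(noisy, w_enc))
    return conv1d_same(latent, w_dec) + noisy

def critic_score(wave, w_d):
    """Toy discriminator: one conv layer pooled to a single realness score."""
    return float(np.mean(np.tanh(conv1d_same(wave, w_d))))

# Hypothetical data: a clean tone plus broadband noise (not the project's dataset).
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 440.0 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Random (untrained) filter taps; in the real system these are fit adversarially.
w_enc, w_dec, w_d = (0.1 * rng.standard_normal(31) for _ in range(3))

enhanced = generator(noisy, w_enc, w_dec)

# Least-squares GAN generator objective with an L1 reconstruction term,
# the combination commonly used for GAN-based speech enhancement.
lam = 100.0
g_loss = (critic_score(enhanced, w_d) - 1.0) ** 2 \
         + lam * float(np.mean(np.abs(enhanced - clean)))
```

In a trained system the encoder/decoder would be deep strided convolutions and the critic a multi-layer network; the sketch only shows the data flow (noisy waveform in, same-length enhanced waveform out) and the shape of the loss.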



This item appears in the following Collection(s)

  • BS [175]
