NUST Institutional Repository

Brain-computer interface for synthesized speech communication


dc.contributor.author Faisal Umer; Atif Arshad; Amna Sadiq
dc.date.accessioned 2021-07-01T13:01:31Z
dc.date.available 2021-07-01T13:01:31Z
dc.date.issued 2018
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/24512
dc.description Supervisor: Dr. Ahmad Salman en_US
dc.description.abstract Brain-Computer Interface (BCI) systems are widely used to build viable assistive technology for people with physical impairments, such as speech-impaired individuals. This project aims to design and develop a BCI-based system for generating synthesized speech, which works by acquiring the user's Electroencephalogram (EEG) signals and exploiting the event-related potential (P300) phenomenon. The user is presented with a P300 speller screen and focuses on the word they want to spell. Such a system is particularly useful for patients with motor neuron disorders, who can use this interface to communicate with their caretakers. The spelled text is then converted to synthesized speech by a text-to-speech API. en_US
dc.publisher SEECS, National University of Sciences and Technology, Islamabad en_US
dc.subject Electrical Engineering en_US
dc.title Brain-computer interface for synthesized speech communication en_US
dc.type Thesis en_US
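
The abstract above describes a P300 speller feeding a text-to-speech API. Below is a minimal illustrative sketch of such a pipeline, not the thesis implementation: synthetic EEG epochs stand in for real recordings, scikit-learn's LinearDiscriminantAnalysis is assumed as the target/non-target classifier, and pyttsx3 is assumed as the text-to-speech back end.

# Illustrative P300 speller pipeline (not the authors' code): synthetic EEG
# epochs stand in for real recordings, an LDA classifier scores each
# row/column flash, and the decoded letter is spoken with a TTS engine.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 6x6 character matrix typically shown on a P300 speller screen
MATRIX = np.array([list("ABCDEF"),
                   list("GHIJKL"),
                   list("MNOPQR"),
                   list("STUVWX"),
                   list("YZ1234"),
                   list("56789_")])

RNG = np.random.default_rng(0)
N_CHANNELS, N_SAMPLES = 8, 120          # e.g. 8 electrodes, 120 samples per epoch


def synthetic_epoch(is_target: bool) -> np.ndarray:
    """Fake an EEG epoch; target flashes get a P300-like bump around 300 ms."""
    epoch = RNG.normal(0.0, 1.0, (N_CHANNELS, N_SAMPLES))
    if is_target:
        epoch[:, 55:75] += 1.5          # crude positive deflection
    return epoch


def features(epoch: np.ndarray) -> np.ndarray:
    """Downsample and flatten the epoch into a feature vector."""
    return epoch[:, ::5].ravel()


# Calibration phase: train a target / non-target classifier on labelled epochs.
X = np.array([features(synthetic_epoch(t)) for t in ([True] * 60 + [False] * 300)])
y = np.array([1] * 60 + [0] * 300)
clf = LinearDiscriminantAnalysis().fit(X, y)

# Online phase: score one flash per row and per column, pick the most P300-like.
target_row, target_col = 2, 3           # pretend the user attends to 'P'
row_scores = [clf.decision_function([features(synthetic_epoch(r == target_row))])[0]
              for r in range(6)]
col_scores = [clf.decision_function([features(synthetic_epoch(c == target_col))])[0]
              for c in range(6)]
letter = MATRIX[int(np.argmax(row_scores)), int(np.argmax(col_scores))]
print("Decoded letter:", letter)

# Hand the decoded text to a text-to-speech engine (pyttsx3 assumed here).
try:
    import pyttsx3
    engine = pyttsx3.init()
    engine.say(letter)
    engine.runAndWait()
except ImportError:
    pass                                # TTS output is optional for this sketch

In practice the spelled letters would accumulate into words before being passed to the text-to-speech step, and the classifier would be trained on the patient's own calibration recordings rather than synthetic data.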


This item appears in the following Collection(s)

  • BS [835]
