Abstract:
Brain-Computer Interface (BCI) systems are widely used to build viable assistive technology for people with physical impairments, such as speech-impaired users. This project aims to design and develop a BCI-based system for generating synthesized speech, which works by acquiring the user's Electroencephalogram (EEG) signals and exploiting the event-related potential (ERP) phenomenon. The user is presented with a P300 speller screen and focuses on the word they want to spell. Such a system is particularly useful for patients suffering from motor neuron disorders, who can use this interface to communicate with their caretakers. By attending to items on the P300 screen, the patient composes text, which is then converted to synthesized speech by a text-to-speech API.
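The core selection step described above relies on the P300 event-related potential: epochs time-locked to flashes of the attended item contain a positive deflection roughly 300 ms post-stimulus, which averaging across repetitions makes detectable above background EEG noise. The following is a minimal sketch of that averaging-and-scoring idea on simulated data; the sampling rate, epoch length, item set, and P300 shape are illustrative assumptions, not the paper's actual parameters or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz (assumed)
epoch = int(0.8 * fs)          # 800 ms epoch after each flash
items = ["Y", "E", "S"]        # hypothetical on-screen items
target = "E"                   # the item the simulated user attends to

# Simulate 15 flash repetitions per item: Gaussian noise everywhere,
# plus a P300-like bump ~300 ms after flashes of the attended item.
t = np.arange(epoch) / fs
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = {c: rng.normal(0.0, 2.0, size=(15, epoch)) for c in items}
epochs[target] += p300

# ERP averaging: mean over repetitions, then score each item by the
# average amplitude in a 250-500 ms post-stimulus window.
win = slice(int(0.25 * fs), int(0.5 * fs))
scores = {c: epochs[c].mean(axis=0)[win].mean() for c in items}
selected = max(scores, key=scores.get)
print(selected)  # the item with the strongest P300 response
```

In a real speller the selected text would then be passed to a text-to-speech engine; averaging over more repetitions trades communication speed for selection accuracy.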