NUST Institutional Repository

Co-occurring Disfluencies in Stuttered Speech Detection and Classification via Deep Learning Framework


dc.contributor.author Ameer, Huma
dc.date.accessioned 2024-08-29T11:47:25Z
dc.date.available 2024-08-29T11:47:25Z
dc.date.issued 2024
dc.identifier.other 329386
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/46162
dc.description Supervisor: Dr Seemab Latif en_US
dc.description.abstract Advancements in the field of speech processing have led to cutting-edge deep learning algorithms with immense potential for real-world applications. Addressing the challenge of automated identification of stuttered speech, researchers have leveraged Wav2vec2.0 for this task. Despite its commendable outcomes, Wav2vec2.0 exhibits certain limitations in generalizing across disfluency types. Therefore, in this study, Whisper, a weakly supervised model, is employed as a novel approach to classify the disfluencies in stuttered speech. Through strategic enhancements to the SEP-28k benchmark dataset and an encoder layer freezing strategy, an impressive average F1-score of 0.81 is achieved on the FluencyBank external test dataset. Furthermore, the research unveils the pivotal role of deeper encoder layers in identifying disfluency types, illustrating their substantial contribution compared to the initial layers, and reducing the trainable parameters by 46%. Notably, Whisper outperforms in evaluation metrics, time complexity, and resource utilization, establishing itself as the state of the art for the task at hand. Despite notable advancements in the field, cases in which multiple disfluencies co-occur in speech still require attention. We take a progressive approach to fill this gap by classifying multi-stuttered speech more efficiently. The problem is addressed by, firstly, curating a dataset of multi-stuttered disfluencies from SEP-28k audio clips; secondly, leveraging the encoder of Whisper, a state-of-the-art speech recognition model, and framing the problem as multi-label classification; and thirdly, using a six-encoder-layer Whisper and experimenting with various layer freezing strategies to identify a computationally efficient configuration of the model. The proposed configuration achieved micro, macro, and weighted F1-scores of 0.88, 0.85, and 0.87, respectively, on the external FluencyBank test dataset. In addition, through layer freezing strategies, we were able to achieve these outcomes by fine-tuning only a single encoder layer, consequently reducing the model's trainable parameters from 20.27 million to 3.29 million. This research study unveils the contribution of the last encoder layer to the identification of disfluencies in stuttered speech, leading to a computationally efficient approach that makes the model more adaptable to various dialects and languages. This research presents substantial contributions, shifting the emphasis towards an efficient solution and paving the way for prospective innovation. A minimal illustrative sketch of the layer-freezing configuration appears below this record. en_US
dc.language.iso en en_US
dc.publisher NUST School of Electrical Engineering and Computer Science (NUST SEECS) en_US
dc.title Co-occurring Disfluencies in Stuttered Speech Detection and Classification via Deep Learning Framework en_US
dc.type Thesis en_US
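
The following is a minimal, hypothetical sketch of the configuration the abstract describes: Whisper's encoder with all layers frozen except the last, topped by a multi-label classification head for co-occurring disfluency types. It assumes the Hugging Face transformers implementation of Whisper (WhisperModel), the openai/whisper-base checkpoint (whose encoder has six layers, matching the six-encoder-layer setup mentioned above), mean pooling over time, and an illustrative five-label set; none of these details are taken from the thesis itself.

import torch
import torch.nn as nn
from transformers import WhisperModel

# Hypothetical label set; the abstract does not enumerate the disfluency types.
DISFLUENCY_TYPES = ["block", "prolongation", "sound_repetition",
                    "word_repetition", "interjection"]

class WhisperDisfluencyClassifier(nn.Module):
    """Whisper encoder + multi-label head, with only the last encoder layer trainable."""

    def __init__(self, checkpoint="openai/whisper-base",
                 num_labels=len(DISFLUENCY_TYPES)):
        super().__init__()
        # whisper-base ships a 6-layer encoder, matching the configuration above.
        self.encoder = WhisperModel.from_pretrained(checkpoint).encoder
        # Freeze the whole encoder, then unfreeze only its last layer.
        for param in self.encoder.parameters():
            param.requires_grad = False
        for param in self.encoder.layers[-1].parameters():
            param.requires_grad = True
        # One logit per disfluency type (multi-label, not softmax over classes).
        self.head = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_features):
        # input_features: batch of log-Mel spectrograms, shape (batch, n_mels, frames).
        hidden = self.encoder(input_features).last_hidden_state  # (batch, time, d_model)
        pooled = hidden.mean(dim=1)   # mean-pool over the time axis
        return self.head(pooled)      # raw logits, one per label

model = WhisperDisfluencyClassifier()
criterion = nn.BCEWithLogitsLoss()    # independent sigmoid per label
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable / 1e6:.2f}M")
# The exact count depends on which sub-modules stay unfrozen; the abstract
# reports roughly 3.29 million trainable parameters for its configuration.

Because BCEWithLogitsLoss scores each label independently, a single clip can be assigned several disfluency types at once, which is what distinguishes this multi-label framing from the earlier single-label classification.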

