NUST Institutional Repository

A Framework to Extract Summarized Text and its Braille Conversion


dc.contributor.author Khan, Farasat Ullah
dc.date.accessioned 2023-08-31T10:34:37Z
dc.date.available 2023-08-31T10:34:37Z
dc.date.issued 2023-09
dc.identifier.other 317523
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/38029
dc.description Supervisor: Dr. Sajid Gul Khawaja; Co-Supervisor: Dr. Muhammad Usman Akram en_US
dc.description.abstract Text summarization distills the essential sentences from a large collection of articles or documents and condenses them into a shorter rendition, drawing on statistical, graph-based, and deep learning-based techniques. Existing approaches face several challenges, such as accurately identifying crucial information, handling diverse document types (news articles, research papers, online reviews), and producing coherent, grammatically accurate summaries. This thesis presents an automated extractive text summarization framework that addresses these challenges, aiming to reduce time, cost, and effort while also converting the summarized content into Braille. Multiple summarization models are trained to evaluate the performance of BERT and its variants on the summarization task, and a BERT-based architecture is additionally proposed to translate the machine-generated English summaries into Braille. The generated summaries are evaluated using metrics such as precision, recall, and F-score. Among the BERT variants, SqueezeBERT retains 98% of the original BERT model's performance while using 49% fewer trainable parameters, making it a promising choice for training a summarizer that is nearly half the size of the original model with only a minimal reduction in summarization quality. Assessed at the 30,000-step mark, RoBERTa Small and SqueezeBERT achieve better ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) scores than the other models. en_US
dc.language.iso en en_US
dc.publisher College of Electrical & Mechanical Engineering (CEME), NUST en_US
dc.subject Extractive text summarization; Braille; BERT; SqueezeBERT; RoBERTa Small en_US
dc.title A Framework to Extract Summarized Text and its Braille Conversion en_US
dc.type Thesis en_US
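
The abstract above describes two measurable steps downstream of summarization: scoring the generated summaries with ROUGE (R1, R2, RL) and converting them into Braille. The minimal Python sketch below illustrates both steps under stated assumptions: it relies on the third-party rouge-score package for the metrics, and it substitutes a rule-based Grade 1 (uncontracted) lookup table for the BERT-based Braille converter proposed in the thesis; the example sentences and all names are illustrative, not taken from the thesis.

# Minimal sketch (not the thesis pipeline): score a candidate summary
# with ROUGE, then transliterate it into Grade 1 (uncontracted) Braille.
# Assumes the third-party rouge-score package (pip install rouge-score).
from rouge_score import rouge_scorer

# Unicode Braille patterns for the 26 Latin letters (Grade 1 mapping).
BRAILLE = dict(zip(
    "abcdefghijklmnopqrstuvwxyz",
    "⠁⠃⠉⠙⠑⠋⠛⠓⠊⠚⠅⠇⠍⠝⠕⠏⠟⠗⠎⠞⠥⠧⠺⠭⠽⠵",
))
CAPITAL_SIGN = "⠠"  # dot-6 indicator placed before a capital letter

def to_braille(text: str) -> str:
    """Rule-based transliteration; a stand-in for the learned,
    BERT-based converter that the thesis proposes."""
    out = []
    for ch in text:
        if ch.isalpha():
            if ch.isupper():
                out.append(CAPITAL_SIGN)
            out.append(BRAILLE[ch.lower()])
        else:
            out.append(ch)  # keep spaces and punctuation as-is
    return "".join(out)

# Illustrative reference/candidate pair (not from the thesis data).
reference = "Text summarization condenses documents into short summaries."
candidate = "Summarization condenses long documents into short summaries."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
for name, s in scorer.score(reference, candidate).items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F={s.fmeasure:.3f}")

print(to_braille(candidate))

Note that Grade 2 (contracted) Braille adds many-to-one contractions and context-dependent rules, which is part of what motivates learning the conversion rather than hard-coding it.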


This item appears in the following Collection(s)

  • MS [441]
