NUST Institutional Repository

Manzareh – A Helping Tool for Visually Impaired People


dc.contributor.author NS Rufaida Kashif, NS Usama Sarfraz Khan, NS Taha Nadeem Dar
dc.date.accessioned 2025-03-12T07:12:28Z
dc.date.available 2025-03-12T07:12:28Z
dc.date.issued 2020
dc.identifier.other DE-COMP-38
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/50929
dc.description Project Supervisors: Dr. Ali Hassan, Dr. Umer Farooq en_US
dc.description.abstract Sight is one of the greatest blessings of the Lord. Unfortunately, many people in the world are visually impaired. Vision loss, as it is referred to, is a reduction in the ability to see severe enough to cause problems that cannot be corrected by ordinary means such as glasses or contact lenses. According to the World Health Organization (WHO), approximately 2.2 billion people worldwide suffer from some form of vision impairment. Common causes of sight loss include cataract, glaucoma, and macular degeneration, while a number of cases also result from physical accidents. According to the Journal of Pakistan Medical Association (JPMA), 2 million people in Pakistan suffer from vision impairment. Visually impaired people usually require someone else's guidance to describe what is happening in their surroundings and who is present around them. Keeping these needs in view, we developed a solution: a system that reads and interprets the surroundings of visually impaired users to them, using a cellular phone and a cloud service. As soon as the user takes a picture of what is in front of them, the image is sent as input to a model trained using machine learning and AI techniques. The model is deployed both on the cloud and on the local system. After processing, the model generates a caption for the image in the form of a structured sentence. Using built-in text-to-speech libraries, the generated caption is read out through the phone's speaker (audio output). To implement this, we built a model and improved its accuracy through multiple training phases and adjustments to particular requirements. The system was then implemented as an Android app for easy usability, and an online connection was established to return results in real time. The system operates only at the user's request, that is, only when the user gives a command, so it places no additional burden on them through constant input. It thereby reduces the user's dependence on another person as a guide, making them independent in many matters. en_US
dc.language.iso en en_US
dc.publisher College of Electrical & Mechanical Engineering (CEME), NUST en_US
dc.title Manzareh – A Helping Tool for Visually Impaired People en_US
dc.type Project Report en_US
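
The abstract describes a capture, caption, and speak pipeline: an image taken on the phone is passed to a trained captioning model, and the resulting sentence is read aloud. As an illustration only, the following is a minimal Python sketch of that flow, assuming a publicly available pretrained image-captioning model (the Hugging Face transformers image-to-text pipeline with the nlpconnect/vit-gpt2-image-captioning checkpoint) and the pyttsx3 text-to-speech library; the project's actual trained model, Android app, and cloud service are not included in this record, so these components stand in for them.

```python
# Minimal sketch of the capture -> caption -> speak flow described in the abstract.
# Assumptions (not from the original project): a pretrained captioning model from
# Hugging Face transformers and pyttsx3 for text-to-speech stand in for the
# project's own trained model and the phone's audio output.

from transformers import pipeline
import pyttsx3


def caption_image(image_path: str) -> str:
    """Generate a one-sentence caption for the image at image_path."""
    captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
    result = captioner(image_path)  # e.g. [{"generated_text": "a man riding a bicycle"}]
    return result[0]["generated_text"].strip()


def speak(text: str) -> None:
    """Read the caption aloud through the device speaker."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    caption = caption_image("photo.jpg")  # image captured by the user
    print("Caption:", caption)
    speak(caption)
```

In the deployed system described in the abstract, the captioning step would run behind a cloud endpoint called from the Android app, with the text-to-speech conversion performed on the phone itself.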



This item appears in the following Collection(s)

  • BS [175]

