NUST Institutional Repository

Audio Visual Authentication


dc.contributor.author Talha Yousuf, supervised by Dr Hasan Sajid
dc.date.accessioned 2022-07-25T07:27:27Z
dc.date.available 2022-07-25T07:27:27Z
dc.date.issued 2022
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/29939
dc.description.abstract In biometric applications, privacy is an essential element. While most deep-learning techniques rely on a single modality, spoofing attacks can be reduced by employing multi-modal approaches. The purpose of this research is to develop a technique in which a person is given sentences to speak; audio and visual features are merged, and using this amalgam of both modalities, a language model validates whether the text that was read actually matches the passage given. This can be used as an authentication method to check that the user is live, and can therefore prevent print attacks against mobile applications (see the illustrative sketch after this record). en_US
dc.language.iso en en_US
dc.publisher SMME en_US
dc.subject Audio Visual Authentication en_US
dc.title Audio Visual Authentication en_US
dc.type Thesis en_US
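
A minimal sketch of the validation step described in the abstract is given below. It assumes an audio-visual speech recognizer already exists and has produced a transcript; the function names, the word-error-rate measure, and the 0.2 acceptance threshold are illustrative assumptions, not the method of the thesis. The idea shown is only the final check: the recognized text is compared against the freshly prompted passage, so a replayed or printed attack cannot follow a new prompt.

```python
# Illustrative sketch only. The audio-visual recognizer is assumed to exist
# elsewhere; a plain transcript string stands in for its output. The
# similarity measure and threshold below are assumptions for demonstration.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between the prompted passage and the
    transcript, normalised by the length of the reference."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def is_live_user(prompted_passage: str, recognized_text: str,
                 max_wer: float = 0.2) -> bool:
    """Accept the session as live if the recognized speech matches the
    prompted passage closely enough."""
    return word_error_rate(prompted_passage, recognized_text) <= max_wer


if __name__ == "__main__":
    prompt = "please read this sentence aloud for verification"
    transcript = "please read this sentence aloud for verification"  # from the AV model
    print(is_live_user(prompt, transcript))  # True
```

In practice the transcript would come from the fused audio-visual model rather than audio alone, which is what makes the check resistant to single-modality spoofing.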



This item appears in the following Collection(s)

  • MS [342]

