NUST Institutional Repository

Orientation Aware Weapons Detection in Visual Data

dc.contributor.author Haq, Nazeef Ul
dc.date.accessioned 2023-08-19T11:26:58Z
dc.date.available 2023-08-19T11:26:58Z
dc.date.issued 2020
dc.identifier.other 275438
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/36950
dc.description Supervisor: Dr. Muhammad Moazam Fraz en_US
dc.description.abstract Automatic detection of weapons is important for improving the security and well-being of individuals; nonetheless, it is a difficult task due to the large variety of weapon sizes, shapes, and appearances. Viewpoint variation and occlusion make the task more difficult still. Horizontal object detection methods enclose a large amount of background along with the object and therefore have a low foreground-to-background ratio, whereas orientation-aware methods align the bounding box with the object's width and height, include far less background, and therefore yield a higher foreground-to-background ratio. This higher ratio also improves the classification accuracy of the detector. Current object detection algorithms process rectangular regions, yet a long, slender rifle may cover only a small portion of such a region while the rest contains irrelevant detail. To overcome these problems, we present two approaches for orientation-aware weapon detection in visual data: one based on classification of the angle and the other on regression of the angle. Our architecture is inspired by Faster R-CNN. In the angle classification approach, we divide the angle into 8 discrete classes, train an additional angle parameter in Faster R-CNN using a softmax loss, and obtain one extra output from the network, the angle class; at inference time, a linear transformation converts the prediction into an oriented bounding box. The angle regression approach is identical except that the original angle is taken as the target and the angle parameter is trained with a smooth L1 loss. We trained both approaches on our own OAWD dataset, which contains 6400 manually annotated images with oriented bounding boxes as ground truth. Results for both approaches show better performance than horizontal bounding box detection: the two proposed models improve mean average precision by 1% and 0.8% over the baseline Faster R-CNN model, whose mAP is 72.98%. en_US
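The abstract describes an extra angle head on Faster R-CNN (8-way classification with softmax loss, or regression with smooth L1 loss) and a linear transformation that turns a horizontal box plus a predicted angle into an oriented box at inference time. Below is a minimal sketch of those pieces, assuming a PyTorch-style implementation; all module, function, and tensor names here are illustrative and are not taken from the thesis.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ANGLE_BINS = 8  # the abstract divides the angle range into 8 classes


class AngleHeads(nn.Module):
    """Hypothetical extra heads on the Faster R-CNN RoI features.

    angle_cls predicts one of 8 discrete angle bins (trained with
    softmax / cross-entropy loss); angle_reg predicts a continuous
    angle (trained with smooth L1 loss), mirroring the two variants
    described in the abstract.
    """

    def __init__(self, in_features=1024):
        super().__init__()
        self.angle_cls = nn.Linear(in_features, NUM_ANGLE_BINS)
        self.angle_reg = nn.Linear(in_features, 1)

    def forward(self, roi_feats):
        return self.angle_cls(roi_feats), self.angle_reg(roi_feats)


def angle_losses(cls_logits, reg_pred, gt_bin, gt_angle):
    # Softmax (cross-entropy) loss for the angle-classification variant.
    cls_loss = F.cross_entropy(cls_logits, gt_bin)
    # Smooth L1 loss for the angle-regression variant.
    reg_loss = F.smooth_l1_loss(reg_pred.squeeze(1), gt_angle)
    return cls_loss, reg_loss


def oriented_corners(cx, cy, w, h, theta):
    """Rotate the axis-aligned box (cx, cy, w, h) by theta (radians)
    about its centre to get the four corners of the oriented box."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]:
        corners.append((cx + dx * cos_t - dy * sin_t,
                        cy + dx * sin_t + dy * cos_t))
    return corners
```

For the classification variant, the predicted bin index would be mapped back to a representative angle (e.g. the bin centre) before calling a corner transform such as the one above; the regression variant can use its continuous prediction directly.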
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science NUST SEECS en_US
dc.title Orientation Aware Weapons Detection in Visual Data en_US
dc.type Thesis en_US


This item appears in the following Collection(s)

  • MS [375]
