NUST Institutional Repository

Multi-Radar and Multi-Camera Sensor Fusion for Enhanced Object Classification and Detection

dc.contributor.author Zafar, Ahtsham
dc.date.accessioned 2022-10-03T07:28:50Z
dc.date.available 2022-10-03T07:28:50Z
dc.date.issued 2022
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/30726
dc.description.abstract Cameras have been used extensively in partial and fully autonomous vehicle (AV) implementations, while radars are still in the early stages of practical adoption in the AV industry. Cameras struggle to provide accurate object classifications in extreme rain, snow, and fog; however, camera data streams can be fused with radar data to counter the effects of poor environmental conditions. Fusing this data requires an efficient, self-learning methodology, such as Deep Learning (DL), that can handle unseen scenarios with reliable accuracy. This research thesis demonstrates Feature-Based Early Sensor Fusion (FB-ESF). The implementation shows that features extracted from processed radar data, fused with camera data, can enhance the classification accuracy of deep learning models in harsh environments. The dataset was collected using a 77 GHz mmWave radar and a DX-format CMOS camera sensor. Both sensors are mounted next to each other on top of a car, with the camera's focal length adjusted to best match the field of view (FOV) of the radar module. The dataset contains two object categories, human and car, collected from two viewpoints (front and back of the car) to add diversity. To add depth to the analysis, three levels of Synthetic Environmental Profiles (SEP) were generated, enabling the development of 16 datasets that mimic rainy/snowy conditions and allow performance evaluation in such scenarios. The work uses the convolutional neural networks FixResNeXt, PSANet, and CoAtNet, with the confusion matrix as the performance measure; precision and recall were also evaluated for each scenario. In total, 80 scenarios were evaluated. Results show that classification accuracy on camera data alone, which degraded below a usable threshold and fell to 16% for Level-3 SEP (extreme rain/snow), can be recovered by fusing radar data: with fusion, accuracy rose above 80%, making the model a reliable classifier. en_US
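
The feature-based early fusion the abstract describes can be pictured as channel-level concatenation of a processed radar feature map with the camera frame before a shared classifier. The sketch below is a minimal PyTorch illustration under that assumption; the toy FusionClassifier, all tensor shapes, and the bilinear resizing step are illustrative stand-ins for the thesis's FixResNeXt/PSANet/CoAtNet pipeline, not its actual implementation.

```python
# Minimal sketch of feature-based early sensor fusion (FB-ESF):
# a processed radar feature map is stacked with the camera's RGB
# channels before a CNN classifier. Shapes and the toy network are
# assumptions for illustration, not the thesis's pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionClassifier(nn.Module):
    """Toy CNN over a 4-channel (RGB + radar) input; stands in for the
    FixResNeXt / PSANet / CoAtNet backbones used in the thesis."""
    def __init__(self, num_classes: int = 2):  # two categories: human, car
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, rgb: torch.Tensor, radar: torch.Tensor) -> torch.Tensor:
        # Resize the radar feature map to the camera plane, then fuse
        # early by channel concatenation:
        # (B, 3, H, W) + (B, 1, H, W) -> (B, 4, H, W).
        radar = F.interpolate(radar, size=rgb.shape[-2:], mode="bilinear",
                              align_corners=False)
        fused = torch.cat([rgb, radar], dim=1)
        return self.head(self.features(fused).flatten(1))

# Usage with random stand-in data: camera frames plus lower-resolution
# processed radar feature maps (e.g., range-azimuth heatmaps).
model = FusionClassifier()
rgb = torch.rand(8, 3, 224, 224)   # camera frames
radar = torch.rand(8, 1, 64, 64)   # radar feature maps
logits = model(rgb, radar)         # (8, 2) class scores
```

Fusing before the backbone (rather than merging per-sensor predictions late) lets the network learn cross-modal features, which is the mechanism the abstract credits for recovering accuracy when camera channels are degraded by rain or snow.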
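The abstract's per-scenario evaluation reports a confusion matrix with precision and recall. The following self-contained sketch shows those metrics for the two-class human/car task; the helper functions and toy labels are assumptions for illustration only.

```python
# Build a 2x2 confusion matrix for the human/car task and derive
# per-class precision and recall, the metrics named in the abstract.
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes: int = 2) -> np.ndarray:
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows: true class, columns: predicted class
    return cm

def precision_recall(cm: np.ndarray):
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # TP / (TP + FP)
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # TP / (TP + FN)
    return precision, recall

y_true = np.array([0, 0, 1, 1, 1, 0])  # 0 = human, 1 = car (toy labels)
y_pred = np.array([0, 1, 1, 1, 0, 0])
prec, rec = precision_recall(confusion_matrix(y_true, y_pred))
```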
dc.description.sponsorship Dr. Shahzad Younis en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Sciences (SEECS) NUST en_US
dc.title Multi-Radar and Multi-Camera Sensor Fusion for Enhanced Object Classification and Detection en_US
dc.type Thesis en_US


This item appears in the following Collection(s)

  • MS [882]
