dc.description.abstract |
Self-driving cars, also known as autonomous vehicles, have the potential to revolutionize
transportation by offering safer and more efficient mobility. Although autonomous vehicles
seem to be right around the corner with the advent of AI and powerful deep learning models,
they have not yet been deployed, in part because these models fail to adapt to foggy weather
and to detect occluded objects. To address this issue and help avoid road collisions, this
research work investigates the use of deep learning models for 3D object detection in clear
and foggy weather conditions, with a focus on the KITTI benchmark dataset.
The feasibility study involves training deep learning models to detect objects in adverse
weather conditions, particularly fog. Synthetic data, generated through fog simulation
techniques and reduced LiDAR beam counts, is employed to address the limited availability
of real-world data and to enhance the models' adaptability.
Various state-of-the-art deep learning architectures, such as PartA2, PV-RCNN, PointPillars,
PointRCNN, SECOND, and SECOND multihead, are employed for 3D object detection on LiDAR data.
The training process incorporates simulated adverse weather conditions to improve the realism
of the training data and the robustness of the models. Evaluation is performed using metrics
such as average precision (AP) and intersection over union (IoU) to assess the models'
performance.
The results show a 5.27% improvement in average precision for the car class and an 8.11%
improvement for the cyclist class when synthetic fog data augmentation is integrated during
training. Increases in mean average precision (mAP) of 4.76%, 2.92%, and 3% were observed
across all three classes for easy, moderate, and hard objects, respectively.
Future work on this study could involve generating denser point clouds from sparse point
clouds so that detection can be performed more accurately even in the presence of fog or occlusion. |
en_US |