Abstract:
With the rise of artificial intelligence and recent technological advancements, there has been considerable progress towards the development of autonomous vehicles. Autonomous driving can be broken down into four major phases: perception, object detection, path planning, and actuation. Our project focuses on the perception stage of the self-driving problem, specifically on developing a sensor fusion system that combines data from LiDAR and stereo sensors to produce a robust depth map at long range.
This project intends to develop a platform onto which additional sensors can be added for increased precision and accuracy. The fusion technique used is the Extended Kalman Filter (EKF), which combines the outputs of the LiDAR and stereo sensors.
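To illustrate the fusion step, the sketch below shows a minimal single-state EKF that fuses a direct LiDAR depth measurement with a stereo disparity measurement; disparity is a nonlinear function of depth, d = fB/z, which is what motivates the extended (linearised) filter. This is an illustrative sketch only: the focal length, baseline, and noise values are assumptions, not the project's actual configuration, and a full system would run such an update per pixel or per fused point rather than on a single scalar.

```python
# Minimal single-pixel EKF sketch for LiDAR + stereo depth fusion.
# All parameters and noise values below are illustrative assumptions.

F_PX = 700.0      # assumed stereo focal length in pixels
BASELINE = 0.12   # assumed stereo baseline in metres

class DepthEKF:
    def __init__(self, z0, p0=4.0):
        self.z = z0   # state: depth estimate (m)
        self.p = p0   # state variance

    def predict(self, q=0.05):
        # Static-depth process model: state unchanged, uncertainty grows.
        self.p += q

    def update_lidar(self, z_meas, r=0.01):
        # LiDAR measures depth directly: h(z) = z, so the Jacobian H = 1.
        k = self.p / (self.p + r)            # Kalman gain
        self.z += k * (z_meas - self.z)      # correct with innovation
        self.p *= (1.0 - k)

    def update_stereo(self, disparity, r=1.0):
        # Stereo measures disparity: h(z) = f*B / z (nonlinear),
        # so linearise about the current estimate: H = -f*B / z^2.
        h = F_PX * BASELINE / self.z
        H = -F_PX * BASELINE / self.z ** 2
        s = H * self.p * H + r               # innovation variance
        k = self.p * H / s                   # Kalman gain
        self.z += k * (disparity - h)
        self.p *= (1.0 - k * H)

# Example: start from a 10 m prior, then fuse one LiDAR and one stereo reading.
ekf = DepthEKF(z0=10.0)
ekf.predict()
ekf.update_lidar(9.8)
ekf.update_stereo(F_PX * BASELINE / 9.9)
print(f"fused depth: {ekf.z:.2f} m")
```

Because the LiDAR model is linear and the stereo model is linearised through its Jacobian, the same update structure extends naturally to additional sensors, which is the extensibility the platform aims for.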
The entire algorithm will be ported to an embedded platform, the NVIDIA Jetson Nano, which can handle the computational requirements of the fusion algorithm. This will enable the system to run in real time, making it suitable for use in autonomous vehicles.
The performance of the sensor fusion system will be evaluated using real-world data and scenarios. The project aims to contribute to the development of autonomous vehicles by providing a more accurate and reliable perception system.