NUST Institutional Repository

Multisensor Fusion for Visual Inertial Odometry


dc.contributor.author Rasool, Momen
dc.date.accessioned 2024-01-29T10:35:25Z
dc.date.available 2024-01-29T10:35:25Z
dc.date.issued 2024
dc.identifier.other 327662
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/42037
dc.description Supervisor: Dr. Muhammad Latif Anjum en_US
dc.description.abstract Fusing data from multiple sensors and optimizing the 6-DoF pose present significant challenges in Visual Inertial SLAM, impacting the localization and mapping processes and necessitating human supervision of the robot. Ideally, a robot performing Visual Inertial SLAM should be capable of navigating an unknown environment, avoiding obstacles, and comprehensively mapping its surroundings. The success of SLAM algorithms hinges on the quality of the on-board sensor input. When these sensors output noisy data, the algorithms fail and the robot loses its way, because image features can no longer be matched frame-to-frame, a crucial requirement for uninterrupted SLAM operation. Once tracking is lost, the mapping process is disrupted, yet the robot, unaware of the failure, continues to move, which poses a problem. Individual sensors also have inherent limitations: an IMU is susceptible to high drift, and LIDAR requires line of sight. There is therefore a need for a method that efficiently fuses data from all sensors to produce highly accurate estimates. Some SLAM algorithms employ full graph optimization, but this is computationally expensive and requires powerful on-board computers. Hence, an approach is needed that fuses sensor data efficiently and can be integrated into SLAM algorithms on resource-constrained platforms such as the Jetson Nano or smartphones. Our research addresses the challenge of creating a more comprehensive and accurate perception of the environment by introducing a new framework for vision-aided inertial navigation. This framework fuses information from multiple sensors, including the camera and IMU, in both tightly and loosely coupled fashion. We employ a Semi-Sparse ORB-Assisted MSCKF (Multi-State Constraint Kalman Filter), a Corner-based ISPKF (Iterated Sigma Point Kalman Filter) state estimator, and a UKF (Unscented Kalman Filter) to enhance accuracy and redundancy; these methods also handle nonlinearities in the system dynamics and measurement equations simultaneously. en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science (SEECS), NUST en_US
dc.title Multisensor Fusion for Visual Inertial Odometry en_US
dc.type Thesis en_US
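
As context for the abstract above: one of the filters it names, the UKF, fuses IMU-driven state propagation with camera measurements by passing sigma points through the nonlinear motion and measurement models. The Python sketch below shows a single, minimal predict/update cycle of that idea; it is not the thesis implementation, and the toy 1-D [position, velocity] state, the constant-acceleration process model, the camera-observes-position measurement model, and all noise values are assumptions chosen only for illustration.

import numpy as np

def sigma_points(x, P, kappa=1.0):
    # Generate 2n+1 sigma points and weights for mean x and covariance P.
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)  # matrix square root of the scaled covariance
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(pts, w, noise_cov):
    # Recover mean and covariance from transformed sigma points, adding additive noise.
    mean = w @ pts
    diff = pts - mean
    cov = diff.T @ (diff * w[:, None]) + noise_cov
    return mean, cov

def ukf_step(x, P, accel, z_cam, dt, Q, R):
    # One UKF cycle: an IMU acceleration reading drives the (assumed) process model,
    # and a camera-derived position fix corrects the prediction.
    def f(s):  # toy process model on state [position, velocity]
        p, v = s
        return np.array([p + v * dt + 0.5 * accel * dt ** 2, v + accel * dt])
    def h(s):  # toy measurement model: the camera observes position only
        return s[:1]

    # Predict: propagate sigma points through the process model.
    pts, w = sigma_points(x, P)
    x_pred, P_pred = unscented_transform(np.array([f(p) for p in pts]), w, Q)

    # Update: map predicted sigma points into measurement space and apply the Kalman gain.
    pts, w = sigma_points(x_pred, P_pred)
    z_pts = np.array([h(p) for p in pts])
    z_pred, S = unscented_transform(z_pts, w, R)
    Pxz = (pts - x_pred).T @ ((z_pts - z_pred) * w[:, None])  # state-measurement cross-covariance
    K = Pxz @ np.linalg.inv(S)
    return x_pred + K @ (z_cam - z_pred), P_pred - K @ S @ K.T

# Example usage with assumed initial conditions and noise levels:
x, P = np.zeros(2), np.eye(2)                 # initial [position, velocity] and covariance
Q, R = 1e-3 * np.eye(2), 1e-2 * np.eye(1)     # assumed process and camera measurement noise
x, P = ukf_step(x, P, accel=0.1, z_cam=np.array([0.05]), dt=0.01, Q=Q, R=R)
print(x)                                      # fused estimate after one IMU + camera cycle

The same predict/update structure generalizes to the full camera-IMU state used in visual-inertial odometry; the sigma-point propagation is what lets the filter handle the nonlinear dynamics and measurement equations mentioned in the abstract without explicit Jacobians.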


This item appears in the following Collection(s)

  • MS [881]
