Abstract:
Training an agent for path planning or Simultaneous Localization and Mapping (SLAM)
requires datasets that cover a variety of scenarios consistent with the agent's physical
limitations. Although many datasets are available, training is limited to the viewpoints
those datasets provide. To address this, we use the Habitat Simulator [30, 25, 17], a
virtual environment that can load a variety of indoor scenes resembling real-world spaces,
in which agents can be trained for path planning or object retrieval.
From these scenes we capture omnidirectional imagery: top, bottom, front, back, left,
and right views from the agent's position, which we assemble into a cube map and finally
convert into fisheye images with a 360-degree field of view (FOV). The resulting datasets
are then run through ORB-SLAM3 [23], a widely used SLAM system, to evaluate its performance
and identify where it succeeds and where it encounters challenges.
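To illustrate the cube-map-to-fisheye step described above, here is a minimal sketch assuming an equidistant fisheye projection and one common cube-face axis convention. The function name `cubemap_to_fisheye`, the face-key names, and the sign conventions are illustrative assumptions, not the pipeline's actual code.

```python
import numpy as np

def cubemap_to_fisheye(faces, out_size=512, fov_deg=360.0):
    """Hypothetical sketch: resample six cube-map faces into one
    equidistant fisheye image. `faces` maps 'front', 'back', 'left',
    'right', 'top', 'bottom' to H x H x 3 uint8 arrays of equal size."""
    face_size = faces['front'].shape[0]
    half_fov = np.radians(fov_deg) / 2.0

    # Normalized output grid in [-1, 1]; r = 1 is the fisheye image circle.
    xs, ys = np.meshgrid(np.linspace(-1, 1, out_size),
                         np.linspace(-1, 1, out_size))
    r = np.sqrt(xs ** 2 + ys ** 2)
    theta = r * half_fov              # equidistant model: angle grows with radius
    phi = np.arctan2(ys, xs)

    # Unit viewing ray per pixel; +z is the optical axis ('front' face).
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)
    ax, ay, az = np.abs(dx), np.abs(dy), np.abs(dz)
    eps = 1e-9                        # guard the divisions below against zero
    gx, gy, gz = np.maximum(ax, eps), np.maximum(ay, eps), np.maximum(az, eps)

    # (mask, in-face u, in-face v) per face; the exact sign/axis layout is
    # an assumed convention and depends on the renderer in use.
    faces_uv = {
        'front':  ((az >= ax) & (az >= ay) & (dz > 0),  dx / gz,  dy / gz),
        'back':   ((az >= ax) & (az >= ay) & (dz < 0), -dx / gz,  dy / gz),
        'right':  ((ax > az) & (ax >= ay) & (dx > 0),  -dz / gx,  dy / gx),
        'left':   ((ax > az) & (ax >= ay) & (dx < 0),   dz / gx,  dy / gx),
        'bottom': ((ay > ax) & (ay > az) & (dy > 0),    dx / gy, -dz / gy),
        'top':    ((ay > ax) & (ay > az) & (dy < 0),    dx / gy,  dz / gy),
    }

    out = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    inside = r <= 1.0                 # pixels outside the circle stay black
    for name, (mask, u, v) in faces_uv.items():
        m = mask & inside
        # Map [-1, 1] face coordinates to nearest-neighbor pixel indices.
        px = np.clip(((u[m] + 1) / 2 * (face_size - 1)).round().astype(int),
                     0, face_size - 1)
        py = np.clip(((v[m] + 1) / 2 * (face_size - 1)).round().astype(int),
                     0, face_size - 1)
        out[m] = faces[name][py, px]
    return out
```

With `fov_deg=360.0`, rays whose polar angle exceeds 90 degrees land on the side and back faces, so a single output image covers the full sphere; a production pipeline would additionally use bilinear sampling and a calibrated fisheye camera model.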