Abstract:
This thesis presents a fine-grained analysis of large-scale point cloud
semantic segmentation through a mechanism that fuses a locally and globally
context-aware feature representation strategy into a semantic segmentation
model. Real point cloud scenes capture intrinsic details of the objects
present in the scene, but owing to the raw structure of point cloud data, some important details can be lost in the process of semantic segmentation. Moreover,
effective feature learning for large-scale point clouds is inherently a complex
task. In this thesis, spatial contextual features are learned both locally and globally and then fused into a semantic segmentation model, which augments the
geometric and semantic features of the point cloud for the task of semantic
segmentation.
Furthermore, several multi-resolution point cloud
scenes are adaptively fused to achieve better semantic segmentation
results. The large-scale point cloud benchmark dataset S3DIS is used
for training and evaluation.