Abstract:
3D models such as point clouds are widespread these days and are used in numerous fields. Despite their broad availability, there is still a pertinent need for automatic approaches to semantic segmentation of 3D data.
Semantic segmentation is an important task and a primary step towards scene understanding. Segmentation of 3D point clouds is difficult, and handling large-scale point clouds in particular remains an open challenge. Many
deep learning architectures have been designed to take unordered point clouds as input, but very few address the problem of segmenting large-scale point clouds. Hence, a segmentation architecture is needed that is efficient enough to handle large-scale point clouds
while keeping the model fast and accurate. Recently, Landrieu et al. [24] proposed an impressive model for large-scale point set segmentation. The model showed
remarkable performance but used fixed-adjacency superpoint graphs for variable-density point clouds, which decreases their embedding quality. Hence, an efficient
partitioning of large point clouds is needed that takes local geometric relationships into account. In this regard, we propose a model for semantic
segmentation of large-scale point clouds that combines unsupervised and supervised machine learning approaches. The model works
on a meaningful partitioning of point clouds based on local geometric
dependencies between points. Our model not only outperforms other state-of-the-art architectures in time and memory but also achieves
accuracy comparable to other successful architectures.