NUST Institutional Repository

PyraContrast: Unsupervised Pre-training on 3D Point

dc.contributor.author Manzoor, Muhammad Hasnat
dc.date.accessioned 2023-07-25T10:48:12Z
dc.date.available 2023-07-25T10:48:12Z
dc.date.issued 2022
dc.identifier.other 277277
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/35100
dc.description Supervisor: Dr. Muhammad Shahzad en_US
dc.description.abstract Arguably, one of deep learning's greatest achievements is transfer learning: pre-training a network on a rich source dataset (e.g., ImageNet in 2D) and then fine-tuning it on a smaller target dataset has been shown to boost performance, and the approach is widely used in both language and vision. In 3D scene understanding, however, few works have followed this route, largely because annotating 3D data is difficult. In this work, we aim to further facilitate research on 3D representation learning. To this end, we select different datasets and downstream tasks to evaluate the effectiveness of unsupervised pre-training on a large, rich source dataset of 3D scenes, and the results we obtain are encouraging. We use a unified backbone, a single source dataset, and a contrastive objective for unsupervised pre-training (see the illustrative sketch below), and then perform supervised downstream tasks. Our method achieves almost the same results as recent works in half the time, for both pre-training and downstream tasks such as semantic segmentation and instance segmentation. en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science (SEECS), NUST en_US
dc.subject Unsupervised learning, Representation learning, 3D scene understanding en_US
dc.title PyraContrast: Unsupervised Pre-training on 3D Point en_US
dc.type Thesis en_US
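
As an illustration of the contrastive pre-training objective mentioned in the abstract, the following is a minimal sketch of an InfoNCE-style loss over matched point features from two augmented views of the same 3D scene, in the spirit of PointContrast-style methods. The function name, feature shapes, and temperature value are illustrative assumptions and not the thesis's actual implementation.

import torch
import torch.nn.functional as F

def point_info_nce(feat_a, feat_b, temperature=0.07):
    # feat_a, feat_b: (N, D) features of the same N points extracted by a
    # shared backbone from two augmented views; row i of feat_a matches
    # row i of feat_b (the positive pair), all other rows are negatives.
    feat_a = F.normalize(feat_a, dim=1)
    feat_b = F.normalize(feat_b, dim=1)
    # (N, N) cosine-similarity matrix; the diagonal holds the positives.
    logits = feat_a @ feat_b.t() / temperature
    targets = torch.arange(feat_a.size(0), device=feat_a.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random features standing in for backbone outputs.
f1 = torch.randn(1024, 32, requires_grad=True)
f2 = torch.randn(1024, 32, requires_grad=True)
loss = point_info_nce(f1, f2)
loss.backward()

In an actual pre-training loop, f1 and f2 would come from the unified backbone applied to two augmentations of the same point cloud, and the backbone weights would be updated by an optimizer before fine-tuning on downstream segmentation tasks.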


This item appears in the following Collection(s)

  • MS [375]
