NUST Institutional Repository

Bidirectional Transformers driven Contextual Sequential Recommendation with Contrastive Learning

dc.contributor.author Bashir, Asima
dc.date.accessioned 2024-08-01T07:06:17Z
dc.date.available 2024-08-01T07:06:17Z
dc.date.issued 2024-08-01
dc.identifier.other 00000431940
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/45123
dc.description Supervised by Prof. Dr. Naima Iltaf en_US
dc.description.abstract Contrastive learning (CL) with Transformer-based sequence encoders offers a robust framework for sequential recommendation by effectively addressing data noise and sparsity. By exploiting the advantages of CL, these models learn rich representations from sequential user interactions, leading to improved recommendations and user satisfaction. However, recent CL methods suffer from two limitations. Firstly, most CL approaches process input sequences in a single direction, i.e., left to right, which is sub-optimal for sequential prediction because users' historical interactions do not follow a fixed single direction. Secondly, these models design CL objectives based solely on the input sequence, overlooking the valuable self-supervision signals available in the contextual information of descriptive text. To address these limitations, this research proposes a novel framework called Bidirectional Transformers driven Contextual Sequential Recommendation with Contrastive Learning (CCLRec). Specifically, bidirectional Transformers are extended to incorporate auxiliary information by using sentence embeddings formulated from each item's textual description. Next, a rolling glass step technique is introduced for handling lengthy user sequences and the descriptive features of the respective items, enabling a more refined partitioning of user sequences. Then, the cloze task mask, random occlusion, and the dropout mask are fused to produce high-quality positive samples for the contrastive learning objective. Comprehensive experiments on three benchmark datasets show remarkable improvements over similar contemporary models. en_US
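The sequence partitioning and augmentation steps summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's implementation: `MASK_ID`, the masking/occlusion ratios, and the reading of the "rolling glass step" as an overlapping sliding window are all assumptions, and the dropout mask (applied inside the encoder in such models) is omitted here.

```python
import random

MASK_ID = 0  # hypothetical id reserved for a [MASK] token (assumption)

def rolling_glass_step(seq, window=50, step=10):
    """Partition a long user interaction sequence into overlapping windows
    (a sliding-window reading of the paper's 'rolling glass step')."""
    if len(seq) <= window:
        return [list(seq)]
    return [list(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, step)]

def cloze_mask(seq, ratio=0.2, rng=random):
    """Cloze-task augmentation: replace a fraction of item ids with MASK_ID."""
    out = list(seq)
    n = max(1, int(len(out) * ratio))
    for i in rng.sample(range(len(out)), n):
        out[i] = MASK_ID
    return out

def random_occlusion(seq, ratio=0.2, rng=random):
    """Random-occlusion augmentation: drop a random contiguous span of items."""
    n = max(1, int(len(seq) * ratio))
    start = rng.randrange(len(seq) - n + 1)
    return seq[:start] + seq[start + n:]

def positive_pair(seq, rng=random):
    """Fuse the augmentations: two independently perturbed views of one
    sequence serve as a positive pair for the contrastive objective."""
    return cloze_mask(seq, rng=rng), random_occlusion(seq, rng=rng)
```

In a full pipeline, each window produced by `rolling_glass_step` would be encoded by the bidirectional Transformer together with the item-description sentence embeddings, and the two views from `positive_pair` would be pulled together in the contrastive loss.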
dc.language.iso en en_US
dc.publisher MCS en_US
dc.title Bidirectional Transformers driven Contextual Sequential Recommendation with Contrastive Learning en_US
dc.type Thesis en_US

