dc.description.abstract |
Contrastive learning (CL) with Transformer-based sequence encoders offers a robust
framework for sequential recommendation by effectively addressing data noise and sparsity
issues. By leveraging the advantages of CL, these models learn rich representations
from sequential user interactions, leading to improved recommendation quality and
user satisfaction. However, recent CL methods suffer from two limitations. First,
existing CL approaches mainly process input sequences in a single direction, i.e., left
to right, which is sub-optimal for sequential prediction because a user's historical
interactions may not follow a fixed single direction. Second, these models design
CL objectives based solely on the input sequence, overlooking the valuable self-supervision
signals available in the contextual information of descriptive text. To address these limitations,
this research proposes a novel framework called Bidirectional Transformers
driven Contextual sequential Recommendation with Contrastive Learning
(CCLRec). Specifically, bidirectional Transformers are extended to incorporate auxiliary
information by using sentence embeddings derived from each item's textual description.
Next, we introduce a rolling glass step technique for handling lengthy user
sequences and the descriptive features of the corresponding items, which enables more refined
partitioning of user sequences. Then, the cloze task mask, random occlusion, and the dropout
mask are fused to produce high-quality positive samples, improving the performance
of the contrastive learning objective. Comprehensive experiments on
three benchmark datasets demonstrate remarkable improvements over comparable
contemporary models. |
en_US |